Hello.
I'm on K8s 1.29 (an RKE2 cluster), running kube-vip for control-plane HA only, and I'm seeing a lot of pod restarts that translate into repeated losses of access to the API, which is kind of annoying.
The deployment is rather simple. Values are:
image:
  tag: v0.7.1
env:
  vip_interface: bond0
  vip_leaderelection: true
  svc_enable: false
  cp_enable: true
  vip_leaseduration: 5
  vip_renewdeadline: 3
  vip_retryperiod: 1
[...]
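For reference, and assuming the chart maps everything under env: one-to-one onto container environment variables, the rendered env on the kube-vip container should look roughly like this:

# Sketch of the expected container env on the DaemonSet
# (assumption: the chart passes env: values straight through)
- name: vip_interface
  value: "bond0"
- name: vip_leaderelection
  value: "true"
- name: cp_enable
  value: "true"
- name: svc_enable
  value: "false"
- name: vip_leaseduration
  value: "5"
- name: vip_renewdeadline
  value: "3"
- name: vip_retryperiod
  value: "1"

With those numbers the lease only lasts 5s and has to be renewed within 3s, so as far as I understand, any API server hiccup longer than ~3s is enough for the holder to give up leadership.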
The DaemonSet is, obviously, running on control-plane nodes only. The pods look like:
kube-vip-g7d2z 1/1 Running 9 (2m26s ago) 66m
kube-vip-h5vqt 1/1 Running 6 (10m ago) 66m
kube-vip-nkhpr 1/1 Running 5 (2m13s ago) 66m
The logs show:
level=fatal msg="lost leadership, restarting kube-vip"
Any clue?
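I can also dump the leader election lease state if that helps with triage; assuming the default plndr-cp-lock lock in kube-system, something like:

kubectl -n kube-system get lease plndr-cp-lock -o yaml

should show the current holder and the spec.leaseTransitions counter climbing each time a pod restarts.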