Replies: 3 comments 3 replies
-
Have you checked the etcd pod for restarts, or looked at the etcd pod logs? You might also consider looking at the apiserver pod logs.
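For reference, these checks might look roughly like the following (a sketch only: the `kube-system` namespace is the rke2 default, and the pod names are guessed from the attached log filename, so adjust them for your cluster):

```shell
# On an rke2 server node, etcd and kube-apiserver run as static pods in kube-system.
# Check the RESTARTS column for the etcd pod:
kubectl -n kube-system get pods -o wide | grep -E 'etcd|kube-apiserver'

# Tail the etcd pod logs (replace the pod name with yours):
kubectl -n kube-system logs etcd-rke2-master-prod --tail=100

# Likewise for the apiserver:
kubectl -n kube-system logs kube-apiserver-rke2-master-prod --tail=100
```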
-
Moved the etcd database to a dedicated NVMe SSD and the issue is resolved.
-
@kenho811 How did you move the etcd database to a dedicated NVMe SSD? I am facing the same problem. Can you share the solution step by step?
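The steps were not posted in this thread, but one common approach is to mount the new disk and bind-mount it over etcd's data path. This is a sketch under assumptions, not the confirmed method: the device name `/dev/nvme0n1p1` and mount point `/mnt/etcd` are hypothetical, and `/var/lib/rancher/rke2/server/db` is the usual rke2 database location (verify it on your node first):

```shell
# Stop the control plane before touching etcd's files.
systemctl stop rke2-server

# Format and mount the dedicated NVMe disk (mkfs destroys anything on it!).
mkfs.ext4 /dev/nvme0n1p1
mkdir -p /mnt/etcd
mount /dev/nvme0n1p1 /mnt/etcd

# Copy the existing database onto the new disk, preserving ownership/permissions.
cp -a /var/lib/rancher/rke2/server/db/. /mnt/etcd/

# Bind-mount the NVMe disk over the original path so rke2 needs no config change.
mv /var/lib/rancher/rke2/server/db /var/lib/rancher/rke2/server/db.bak
mkdir /var/lib/rancher/rke2/server/db
mount --bind /mnt/etcd /var/lib/rancher/rke2/server/db
# Add both mounts to /etc/fstab so they persist across reboots.

systemctl start rke2-server
```

Keep the `db.bak` copy until you have confirmed the cluster is healthy on the new disk.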
-
I installed rke2 on a Debian Bookworm VM.
I notice that every now and then my kube-apiserver becomes unhealthy. I checked the Kubernetes logs and noticed the errors below.
Apparently it is related to an etcd failure, but I am not sure how to debug this. Full log attached below.
kube-apiserver-rke2-master-prod.17d2eca167fb7aca.txt
=========
Observation
I observe that the warning events occur mostly when new pods are being created.
I have an Airflow Scheduler which schedules new Kubernetes Pods every now and then.
Below is a screenshot of the events in my cluster.
When I suddenly schedule a lot of pods, the WARNING events occur.
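One way to check whether these bursts correlate with etcd disk latency is to watch etcd's WAL fsync metric alongside the cluster's warning events. A sketch, assuming the rke2-default etcd client certificate paths under `/var/lib/rancher/rke2/server/tls/etcd/` (verify the exact filenames on your node):

```shell
# etcd's WAL fsync latency is the usual signal for a too-slow disk.
curl -sk \
  --cert /var/lib/rancher/rke2/server/tls/etcd/server-client.crt \
  --key  /var/lib/rancher/rke2/server/tls/etcd/server-client.key \
  https://127.0.0.1:2379/metrics | grep etcd_disk_wal_fsync_duration_seconds

# In another terminal, watch warning events while the pods are being scheduled:
kubectl get events -A --field-selector type=Warning --watch
```

If the fsync histogram shows a large fraction of samples above roughly 10 ms while the pod burst is running, the disk backing etcd is likely the bottleneck, which matches the NVMe fix reported above.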