etcd controller shall not ignore ghost etcd member in health check #11231

Open

mogliang opened this issue Nov 5, 2024 · 2 comments


mogliang commented Nov 5, 2024

Environmental Info:
K3s Version:

Node(s) CPU architecture, OS, and Version:

Cluster Configuration:
k3s cluster with embedded etcd cluster

Describe the bug:
Currently, when there is a ghost etcd member (a member that does not belong to any k3s node), the etcd controller chooses to ignore it.

k3s/pkg/etcd/etcd.go

Lines 1115 to 1117 in 4adcdf8

if member.Name != name {
continue
}

This could be dangerous. Suppose there is a k3s cluster with 3 control-plane nodes and embedded etcd, plus 1 ghost member that may already be offline (possibly caused by an unclean node removal). The cluster is then actually in an unhealthy state and cannot tolerate another control-plane node going offline.
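For illustration, the guard argued for above could flag any etcd member whose name does not map to a known node. A minimal sketch (the function name and data are hypothetical, not the actual k3s implementation):

```go
package main

import "fmt"

// ghostMembers returns the names of etcd members that do not correspond
// to any known Kubernetes node. Hypothetical helper for illustration;
// the real k3s health check loop simply skips non-matching members.
func ghostMembers(memberNames, nodeNames []string) []string {
	nodes := make(map[string]bool, len(nodeNames))
	for _, n := range nodeNames {
		nodes[n] = true
	}
	var ghosts []string
	for _, m := range memberNames {
		if !nodes[m] {
			ghosts = append(ghosts, m)
		}
	}
	return ghosts
}

func main() {
	// Three members in etcd, but only two nodes remain after an
	// unclean removal: "node-c" is a ghost member.
	members := []string{"node-a", "node-b", "node-c"}
	nodes := []string{"node-a", "node-b"}
	fmt.Println(ghostMembers(members, nodes)) // [node-c]
}
```

In the scenario above the check would surface "node-c" instead of silently continuing past it, so the EtcdIsVoter condition could report the unhealthy state.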

We are working on a k3s cluster-api provider. When we do a rollout update on a k3s cluster, we encountered an issue where a node was not removed cleanly and left an etcd member behind, which later caused quorum loss in the middle of the rollout.

Although it can be worked around by leveraging the etcd.k3s.cattle.io/remove annotation, it is better to have a second guard ensuring no ghost etcd member exists before doing any cluster management operation (the CAPI default control-plane implementation has this check). See previous discussion #9841

Steps To Reproduce:

  1. create a k3s cluster with 3 control-plane nodes
  2. shut down 1 control-plane machine and delete that node with kubectl delete node

Expected behavior:
the node's EtcdIsVoter condition shall be false, with a message showing the ghost member

Actual behavior:
the node's EtcdIsVoter condition shows as true

Proposed fix:
Pull request #11232

Additional context / logs:

brandond (Member) commented Nov 5, 2024

It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state? This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd.

Can you provide steps to reproduce this?

mogliang (Author) commented Nov 5, 2024

It is normal for a node to be added to etcd before the kubelet comes up and creates a Kubernetes node object. Are you accounting for this in your error state?

Good point. CAPI may consider this an abnormal state and hold any management operation until the cluster is stable (all etcd members mapped to nodes).

This should be handled by member promotion, and on the other side, the finalizer on node objects should prevent nodes from being deleted without being removed from etcd.
Can you provide steps to reproduce this?

It seems the issue we were facing was due to stopping the VM before removing the node, not an etcd member leak. When we remove the etcd member first, and then stop the VM and delete the node, the rollout seems to be OK.

That said, we do hope there is a way for CAPI to learn the etcd member list as another safety check, but currently the etcd controller doesn't surface such info.

Another idea is to expose the etcd member list as a node annotation. What do you think, @brandond?
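The annotation idea above could look roughly like this. A sketch only: the annotation key is hypothetical (k3s does not currently publish such an annotation), and a real implementation would patch the Node object through the Kubernetes API rather than mutate a plain map:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical annotation key; not an existing k3s annotation.
const memberListAnnotation = "etcd.k3s.cattle.io/member-list"

// annotateMemberList records the current etcd member names in a node's
// annotation map, so an external controller (e.g. a CAPI provider)
// could compare the list against the set of known nodes before a
// management operation.
func annotateMemberList(annotations map[string]string, memberNames []string) {
	annotations[memberListAnnotation] = strings.Join(memberNames, ",")
}

func main() {
	ann := map[string]string{}
	annotateMemberList(ann, []string{"node-a", "node-b", "node-c"})
	fmt.Println(ann[memberListAnnotation]) // node-a,node-b,node-c
}
```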
