Pods unable to reach kube-apiserver on same node #13722
Replies: 2 comments 1 reply
Some startup logs which I could not attach to the original message; maybe they reveal something I have been oblivious to ... Logs
This is not a K3s thing; this is how kube-proxy works. The in-cluster apiserver endpoint load-balances traffic across all apiserver endpoints. Kube-proxy uses iptables rules to do this, and flannel masquerades traffic leaving the cluster overlay network. If something blocks or cannot route traffic between your pod network and some of the apiserver endpoints, this will break. I suspect it has to do with how you have set up the VIP; most likely the kube-proxy and flannel NAT rules do not account for what you're trying to do with the virtual IP. I don't have a good suggestion other than not doing that. What exactly are you trying to accomplish here with your weird network configuration?
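To see the load-balancing the reply describes, you can inspect kube-proxy's NAT rules on an affected node. This is a hedged sketch: the `KUBE-SERVICES` / `KUBE-SVC-*` chain names are kube-proxy's iptables-mode defaults, and `10.43.0.1` is the in-cluster apiserver address from this report.

```shell
# On an affected node: find the service-chain entry for the in-cluster
# apiserver address.
iptables -t nat -L KUBE-SERVICES -n | grep 10.43.0.1

# kube-proxy comments its rules with the service name; the matching
# KUBE-SVC-* chain holds one DNAT rule per apiserver endpoint, picked
# per-connection via "statistic mode random" probabilities.
iptables-save -t nat | grep 'default/kubernetes'
```

With three apiserver endpoints, each connection lands on any one of them with roughly equal probability, which matches the "about every third call" failure pattern when one endpoint is unreachable from the pod network.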
Environmental Info:
K3s Version:
K3s Config-check
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
`service-vip`

Describe the bug:

Pods cannot reach the kube-apiserver on `10.43.0.1`; about every third call fails with an i/o timeout. This is because k3s sets up load-balancing when masquerading traffic to the kube-apiserver, and the timeouts occur exactly when the endpoint selected is the apiserver on the node the pod is currently running on. Using the `service-vip` IP inside the pod to connect to the kube-apiserver results in the same i/o timeout. `tcpdump` shows the `SYN` on the `cni0` interface, but there is never an `ACK`.

Steps To Reproduce:

- `config.yaml` and `config.yaml.d`
- `kubectl run --rm -it --image busybox test` and inside try hitting the kube-apiserver: `wget -O - -S https://10.43.0.1:443`

Expected behavior:
Pods can connect to the kube-apiserver present on the node they are scheduled on.
Actual behavior:
Pods send out their connection request to the kube-apiserver present on the node they are scheduled on but the connection never gets accepted.
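The reproduction from the steps above can be collected into one fragment (it assumes a working `kubectl` context against the affected cluster):

```shell
# Start a throwaway pod, then probe the in-cluster apiserver address a few
# times from inside it; per the report, roughly every third attempt hangs
# until an i/o timeout.
kubectl run --rm -it --image busybox test

# inside the pod:
wget -O - -S https://10.43.0.1:443
```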
Additional context / logs:
There is probably something in my environment which I did not account for in my config, but I have been at a loss for days as to what it could be.
`firewalld` is disabled and not running. ChatGPT suggested setting the sysctl `net.ipv4.conf.{all,default}.rp_filter=0`, but it did not help. I did take a look at #10010, but if I am not mistaken, the iptables kernel modules are loaded.
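The three checks above can be verified on the node like this (a sketch; the module-name pattern is an assumption about which netfilter modules matter here, in the spirit of #10010):

```shell
# Confirm firewalld really is inactive
systemctl is-active firewalld

# Verify reverse-path filtering is off (0) as suggested
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter

# Check that iptables/NAT kernel modules are loaded
lsmod | grep -E 'ip_tables|iptable_nat|nf_nat|xt_MASQUERADE'
```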
Successful tcpdump

The Pod with IP `10.42.2.5` hits a kube-apiserver not on its scheduled node.

Failing tcpdump

The Pod with IP `10.42.2.5` hits the kube-apiserver on its scheduled node.
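For reference, a capture like the ones above can be taken on the node's flannel bridge. The interface `cni0` and pod IP `10.42.2.5` are from this report; including port 6443 alongside 443 is an assumption, since that is the default K3s apiserver port the service address typically DNATs to:

```shell
# Watch the pod's apiserver traffic; a SYN with no answering SYN-ACK
# indicates the failing case described above.
tcpdump -ni cni0 'host 10.42.2.5 and (port 443 or port 6443)'
```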