apiserver received an error that is not an metav1.Status: &errors.errorString{s:"error dialing backend: tls: failed to verify certificate: x509: certificate is valid for 127.0.0.1, not xxx"} #10027
Comments
K3s doesn't generate any certificates that are valid only for the loopback address and not for any other IPs. I also see that you've set the egress-selector mode to disabled; why? Do you perhaps have an HTTP proxy configured in your environment? I'm not sure what exactly the apiserver is talking to here that has this invalid certificate, but I don't think it's an internal component.
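To see which addresses a serving certificate actually covers, you can inspect its subject alternative names (SANs) with openssl. A minimal local sketch (the /tmp paths and the demo CN are placeholders, not k3s's real certificates) that creates a certificate valid only for 127.0.0.1, exactly the shape of the mismatch in the error above:

```shell
# Create a throwaway key and self-signed cert whose only SAN is IP:127.0.0.1
# (hypothetical paths; this is not touching any k3s certificate)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 \
  -subj "/CN=demo" -addext "subjectAltName=IP:127.0.0.1"

# Print the SANs the same way you would inspect a serving cert;
# anything dialing this cert by another IP would fail x509 verification
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Against a live node, the same inspection on the kubelet's serving port (assuming the default 10250) would be `openssl s_client -connect 10.1.4.13:10250 </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName`.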
The reason for disabling egress-selector comes from another issue: #5897
I did a test using this kubeconfig:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: xxx
    client-key-data: xxx
```
Another phenomenon: the operation fails only on the node which
Environmental Info:
K3s Version: 1.25.16+k3s4
k3s version v1.25.16+k3s4 (ddda247)
go version go1.20.10
Node(s) CPU architecture, OS, and Version: CentOS Linux 7 (Core) 5.4.211-1.el7.elrepo.x86_64
Cluster Configuration: 1 server, 5 agents
Describe the bug: When I use a kubectl command to execute a pod operation, I get an error like this:
tls: failed to verify certificate: x509: certificate is valid for 127.0.0.1, not 10.1.4.13
and the k3s log shows: apiserver received an error that is not an metav1.Status: &errors.errorString{s:"error dialing backend: tls: failed to verify certificate: x509: certificate is valid for 127.0.0.1, not 10.1.4.13"}
Steps To Reproduce:
I updated the address to 10.1.4.13 in the daemon config file /etc/rancher/k3s/config.yaml, and also in /etc/systemd/system/multi-user.target.wants/k3s.service, then regenerated the secrets/k3s-serving certificate, but none of that worked: the machine 10.1.4.13 still cannot be operated with kubectl.
Expected behavior:
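For reference, the usual way to make the k3s server's serving certificate cover an extra address is a tls-san entry in the config file (a sketch of the documented option; whether the certificate in the error above is the server's or the kubelet's is a separate question):

```yaml
# /etc/rancher/k3s/config.yaml — sketch: add the node's routable IP
# as a Subject Alternative Name on the k3s serving certificate
tls-san:
  - 10.1.4.13
```

k3s then needs a restart (e.g. systemctl restart k3s) to pick up the change.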
Actual behavior:
Additional context / logs: