Apparent race condition causes pods to mount secret volumes with no object content #1051
Comments
Happy to try upgrading if that might help, but I didn't see any issues suggesting it would.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
Hit this again two years later, after changes on our end increased how many secret mounts happen concurrently on a node. Still seems bad to me.
What steps did you take and what happened:
An apparent race condition causes pods to start with secret volumes mounted but containing no object content.
What did you expect to happen:
We expect a pod to start only if its secret volume mounts successfully and contains the expected object content.
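
For context, this is a standard secrets-store CSI mount via the AWS provider (details under "Which provider are you using" below). A minimal sketch of the kind of configuration involved; the object name, pod name, service account, image, and mount path are all illustrative, not our real manifests:

```yaml
# SecretProviderClass for the AWS provider; the object name is illustrative.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: example-aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "example-app-credentials"
        objectType: "secretsmanager"
---
# Pod mounting the secret objects through the secrets-store CSI driver.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  serviceAccountName: example-app  # assumed to be bound to an IAM role with read access
  containers:
    - name: app
      image: example-app:latest  # placeholder image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: example-aws-secrets
```

With a setup like this, the pod should never reach Running with /mnt/secrets/example-app-credentials empty, yet that is what we observe intermittently.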
Anything else you would like to add:
Here's a stab at picking out the relevant logs. The kubelet logs suggest that the mount attempt for this pod's secrets initiates, errors, retries, and then succeeds. Here's an example:
This happens very sporadically when we are scheduling many pods at once on a node, so I am sure resource contention on the underlying node is at play.
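
One mitigation we've considered, sketched below using the hypothetical names from the pod spec above (this is not something the driver provides): an initContainer that shares the CSI volume and fails fast when the mounted file is empty, so the application container never starts against empty content.

```yaml
# Added to the pod spec above. "test -s" fails when the file is
# missing or zero-length, so the pod is held in Init instead of
# running with an empty secret. Names and path are hypothetical.
initContainers:
  - name: verify-secret-content
    image: busybox:1.36
    command: ["sh", "-c", "test -s /mnt/secrets/example-app-credentials"]
    volumeMounts:
      - name: secrets
        mountPath: /mnt/secrets
        readOnly: true
```

This wouldn't fix the race itself, since retries of the initContainer re-read the same already-mounted files, but it would turn a silent empty mount into a visible Init failure.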
Which provider are you using:
We are using the csi-secrets-store-provider-aws.

Environment:
- Secrets Store CSI Driver version: v1.1.0
- Kubernetes version (kubectl version): Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.11-eks-18ef993", GitCommit:"b9628d6d3867ffd84c704af0befd31c7451cdc37", GitTreeState:"clean", BuildDate:"2022-07-06T18:06:23Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}