Apparent race condition causes pods to mount secret volumes with no object content #1051

Open
dindurthy opened this issue Sep 15, 2022 · 11 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@dindurthy

What steps did you take and what happened:
Apparent race condition causes pods to mount secret volumes with no object content

What did you expect to happen:
We expect a pod to start only if its secret volume mounts successfully with the object content.

Anything else you would like to add:
Here's a stab at picking out the relevant logs:

secrets-store: "I0914 17:56:45.942699       1 nodeserver.go:353] \"Using gRPC client\" provider=\"aws\" pod=\"archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\"",
csi-aws      : "I0914 17:57:58.141005       1 server.go:107] Servicing mount request for pod archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq in namespace services using service account archive-importer-default with region us-east-1",
secrets-store: "E0914 17:58:45.941230       1 nodeserver.go:235] \"failed to mount secrets store object content\" err=\"rpc error: code = DeadlineExceeded desc = context deadline exceeded\" pod=\"services/archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\"",
secrets-store: "I0914 17:58:45.941275       1 nodeserver.go:81] \"unmounting target path as node publish volume failed\" targetPath=\"/var/lib/kubelet/pods/5890fd63-ad08-4b0f-9c44-64de925d9119/volumes/kubernetes.io~csi/archive-importer-shared-secrets/mount\" pod=\"services/archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\""
secrets-store: "I0914 17:58:46.634611       1 utils.go:47] \"already mounted to target\" targetPath=\"/var/lib/kubelet/pods/5890fd63-ad08-4b0f-9c44-64de925d9119/volumes/kubernetes.io~csi/archive-importer-gcp-usa-central-1/mount\""
secrets-store: "I0914 17:58:46.634645       1 nodeserver.go:133] \"target path is already mounted\" targetPath=\"/var/lib/kubelet/pods/5890fd63-ad08-4b0f-9c44-64de925d9119/volumes/kubernetes.io~csi/archive-importer-gcp-usa-central-1/mount\" pod=\"services/archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\""

The kubelet logs suggest that the secret-volume mount attempt for this pod starts, times out, retries, and then succeeds. Here's an example:

"I0914 17:56:45.831962    2630 reconciler.go:225] \"operationExecutor.VerifyControllerAttachedVolume started for volume \\\"archive-importer-shared-secrets\\\" (UniqueName: \\\"kubernetes.io/csi/5890fd63-ad08-4b0f-9c44-64de925d9119-archive-importer-shared-secrets\\\") pod \\\"archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\\\" (UID: \\\"5890fd63-ad08-4b0f-9c44-64de925d9119\\\") \""
"I0914 17:56:45.937244    2630 reconciler.go:266] \"operationExecutor.MountVolume started for volume \\\"archive-importer-shared-secrets\\\" (UniqueName: \\\"kubernetes.io/csi/5890fd63-ad08-4b0f-9c44-64de925d9119-archive-importer-shared-secrets\\\") pod \\\"archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\\\" (UID: \\\"5890fd63-ad08-4b0f-9c44-64de925d9119\\\") \""
"E0914 17:58:45.940859    2630 nestedpendingoperations.go:335] Operation for \"{volumeName:kubernetes.io/csi/5890fd63-ad08-4b0f-9c44-64de925d9119-archive-importer-shared-secrets podName:5890fd63-ad08-4b0f-9c44-64de925d9119 nodeName:}\" failed. No retries permitted until 2022-09-14 17:58:46.440837359 +0000 UTC m=+1221.338406280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume \"archive-importer-shared-secrets\" (UniqueName: \"kubernetes.io/csi/5890fd63-ad08-4b0f-9c44-64de925d9119-archive-importer-shared-secrets\") pod \"archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\" (UID: \"5890fd63-ad08-4b0f-9c44-64de925d9119\") : rpc error: code = DeadlineExceeded desc = context deadline exceeded"
"I0914 17:58:46.493883    2630 reconciler.go:266] \"operationExecutor.MountVolume started for volume \\\"archive-importer-shared-secrets\\\" (UniqueName: \\\"kubernetes.io/csi/5890fd63-ad08-4b0f-9c44-64de925d9119-archive-importer-shared-secrets\\\") pod \\\"archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\\\" (UID: \\\"5890fd63-ad08-4b0f-9c44-64de925d9119\\\") \""
"I0914 17:58:46.642319    2630 operation_generator.go:714] MountVolume.SetUp succeeded for volume \"archive-importer-shared-secrets\" (UniqueName: \"kubernetes.io/csi/5890fd63-ad08-4b0f-9c44-64de925d9119-archive-importer-shared-secrets\") pod \"archive-importer-gcp-usa-central-1-transmission-7d4665cf4-gf7hq\" (UID: \"5890fd63-ad08-4b0f-9c44-64de925d9119\")"

This happens very sporadically when we are scheduling many pods at once on a node, so I am sure resource contention on the underlying node is at play.

Which provider are you using:
We are using the csi-secrets-store-provider-aws.

Environment:

  • Secrets Store CSI Driver version (image tag): v1.1.0
  • Kubernetes version: Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.11-eks-18ef993", GitCommit:"b9628d6d3867ffd84c704af0befd31c7451cdc37", GitTreeState:"clean", BuildDate:"2022-07-06T18:06:23Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
@dindurthy dindurthy added the kind/bug Categorizes issue or PR as related to a bug. label Sep 15, 2022
@dindurthy
Author

Happy to try upgrading if that might help, but I didn't see any issues suggesting it would.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2022
@FirelightFlagboy

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 15, 2023
@FirelightFlagboy

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 15, 2023
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 13, 2023
@FirelightFlagboy

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 13, 2023
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@FirelightFlagboy

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2024
@FirelightFlagboy

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2024