PVCs are "Bound" via dynamic provisioning -- MountVolume.MountDevice failed / not attached to node #2954
Comments
I just completed creating another cluster with the vSphere CPI and CSI. All versions are within version skew limits. I am still getting the same issue I mentioned above: all PVCs bind, but I still get a mounting error. Here are the test pod events:
I suspected permission issues, so I gave admin privileges to the user. I don't know what else to try. Thanks in advance for the help. P.S. The Kubernetes version in the new cluster is now 1.30.3.
Update: CSI version is 3.3.1. I am trying to figure out what to do next. I noticed that in the Rancher cluster the CSI runs 3 controllers, whereas my cluster runs only 1. I tried running a couple more to mimic Rancher, but I believe I need to add 2 more control-plane nodes for that to work (I tried already and the pods stay in a Pending state). That is odd, because 3 replicas are running in the Rancher cluster and it also has only one control-plane node. Next steps: add 2 control-plane nodes and run 2 more replicas of the CSI controller for a total of 3, and see whether that has any effect.
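For reference, a minimal sketch of the scaling step, assuming the default namespace and deployment name from the vSphere CSI manifests (adjust if your install differs):

```shell
# Default install location for the vSphere CSI driver; adjust if yours differs.
kubectl -n vmware-system-csi get deployment vsphere-csi-controller

# Run three controller replicas; extra replicas need schedulable
# control-plane nodes, otherwise they stay Pending.
kubectl -n vmware-system-csi scale deployment vsphere-csi-controller --replicas=3
kubectl -n vmware-system-csi get pods -o wide
```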
Still curious about this issue. I would like some guidance on how to troubleshoot or resolve it. Adding two more control-plane nodes and CSI controllers had zero effect on the issue. Thank you.
@wiredcolony I am facing the same issue, have you found a solution?
I have not. However, my issue is with vanilla Kubernetes via kubeadm. I also have a Rancher server, and PV provisioning and mounting worked perfectly there with the same vSphere setup; I tested it with RKE2 and the embedded Rancher VMware CSI/CPI (not the Rancher apps/Helm CSI/CPI). Double-check your vSphere permissions, or try the vSphere admin username and password to rule out permission problems quickly: if the admin account works, you know it is a permission issue. Also, make sure the username is the full username ("[email protected]") when entering it in Rancher. Hopefully someone from the CSI team takes the case; I would love to figure out what is going on. Let me know if you have any questions or get it working. Thanks!
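If you want to rule out credentials quickly, here is a sketch of swapping the account the driver uses, assuming the default secret name and namespace and the documented csi-vsphere.conf layout; the vCenter address, datacenter, and credentials below are placeholders:

```shell
# csi-vsphere.conf layout per the vSphere CSI docs; all values below
# are placeholders for your environment.
cat > csi-vsphere.conf <<'EOF'
[Global]
cluster-id = "my-cluster"

[VirtualCenter "vcenter.example.com"]
user = "administrator@vsphere.local"
password = "changeme"
port = "443"
datacenters = "DC1"
EOF

# Replace the secret the CSI controller reads its credentials from
# (default name/namespace; adjust if your install differs), then
# restart the controller so it picks up the change.
kubectl -n vmware-system-csi delete secret vsphere-config-secret --ignore-not-found
kubectl -n vmware-system-csi create secret generic vsphere-config-secret \
  --from-file=csi-vsphere.conf
kubectl -n vmware-system-csi rollout restart deployment vsphere-csi-controller
```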
@wiredcolony thanks a lot for your patient reply, I really learned a lot! My problem is solved; the root cause is that I hadn't set the disk.enableUUID option on my VMs. Besides that, have you ever tried a hybrid cluster, by which I mean a cluster where some nodes do not run on top of vSphere? Can such nodes be added to the Kubernetes cluster and still use the vsphere-csi storage class? If so, are there any network connectivity requirements for the non-vSphere nodes? Thanks again.
Glad you got things up and running. A missing disk.enableUUID=TRUE will definitely be an issue for mounting. All my nodes are a part of VMware, so I thought maybe the issue was versioning, but the issue persists. Hopefully someone can give me some troubleshooting tips soon.
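For anyone else hitting this, a sketch of setting the flag with govc; the VM inventory path is a placeholder and govc is assumed to be configured via GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD:

```shell
# Set the advanced parameter on a node VM (placeholder inventory path).
govc vm.change -vm '/DC1/vm/k8s-worker-1' -e disk.enableUUID=TRUE

# The node VM typically needs to be powered off and back on for the
# advanced setting to take effect.
govc vm.power -off '/DC1/vm/k8s-worker-1'
govc vm.power -on '/DC1/vm/k8s-worker-1'
```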
I'm having the same problem with "MountVolume.MountDevice failed for volume "pvc-xx" : rpc error: code = NotFound desc = disk: xxx not attached to node". I have not figured out why yet, but I found something that may lead to a fix. While testing a cluster restore with Velero, I ended up having to restore the cluster nodes from a VM backup, and I saw that the disk xxx the error complains about was actually still attached to the original bad cluster nodes/VMs. I also used the same datastore for two different Kubernetes clusters for testing.
I also saw the same issue here: https://www.reddit.com/r/kubernetes/comments/1e8v7nr/vsphere_cpicsi_error_pod_event/
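One way to check which node Kubernetes believes the disk is attached to (a sketch; it requires jq, and the PVC name and namespace are placeholders):

```shell
# List attachments; the PV/NODE/ATTACHED columns show which node each
# PV is currently attached to from Kubernetes' point of view.
kubectl get volumeattachments

# Cross-check a specific claim: find its PV, then the attachment for it.
# "my-pvc" and "default" are placeholders.
PV=$(kubectl -n default get pvc my-pvc -o jsonpath='{.spec.volumeName}')
kubectl get volumeattachments -o json \
  | jq -r --arg pv "$PV" \
      '.items[] | select(.spec.source.persistentVolumeName == $pv)
       | "\(.metadata.name)\t\(.spec.nodeName)\t\(.status.attached)"'
```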
The following fixed part of my problem:

- kubectl describe pod
- kubectl describe volumeattachments csi-xxx

Referenced: https://knowledge.broadcom.com/external/article/327470/persistent-volumes-cannot-attach-to-a-ne.html
kubectl delete volumeattachments csi-xxx did not work; the error comes back again. I am not sure why the volumes sometimes show up under Datastore > Monitor > Container Volumes in vCenter and sometimes disappear on their own while I do nothing.
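For reference, the clean-up steps in sketch form; csi-xxx is a placeholder, and the finalizer patch at the end is only a last resort if the object hangs in Terminating (it bypasses the normal detach flow):

```shell
# Inspect the attachment the mount error refers to.
kubectl describe volumeattachments csi-xxx

# Delete it so the attach/detach controller can recreate it against
# the correct node.
kubectl delete volumeattachments csi-xxx

# Last resort: if the object hangs in Terminating because of a finalizer,
# clear it manually after confirming in vSphere that the disk really is
# detached from the old VM.
kubectl patch volumeattachments csi-xxx --type=merge \
  -p '{"metadata":{"finalizers":null}}'
```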
My issue was solved by making sure the VM advanced parameter is exactly "disk.EnableUUID = True". Mine was set to "disk.EnabledUUID = True" ('Enable', not 'Enabled'). Credit to reddit user PlexingtonSteel: https://www.reddit.com/r/kubernetes/comments/1e8v7nr/comment/llbcsko/?context=3.
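A quick way to verify the exact spelling of the parameter on each node VM, assuming govc is configured and using a placeholder VM path:

```shell
# Dump the VM's advanced (extraConfig) settings and look for the key;
# a typo such as disk.EnabledUUID will show up here.
govc vm.info -e '/DC1/vm/k8s-worker-1' | grep -i uuid
```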
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I am unable to get my PVCs to mount to their respective pods. The volumes are present under "Container Volumes" in vSphere, and the PVCs & PVs are Bound, so provisioning doesn't seem to be the issue. The VM parameters "disk.EnableUUID = TRUE" and "ctkEnabled = FALSE" are set. The storage class in Kubernetes is set as the default and uses a custom vSphere storage policy. Also, "VolumeExpansion = true" is set.
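For context, a sketch of the kind of StorageClass described above, assuming the documented csi.vsphere.vmware.com provisioner and a placeholder storage policy name:

```shell
# Placeholder storage policy name; the annotation marks it as the default class.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  storagepolicyname: "my-k8s-policy"
EOF
```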
What you expected to happen:
After the PVC is bound, I expect the PV to mount to the pod so the pod/containers leave the "ContainerCreating" state.
How to reproduce it (as minimally and precisely as possible):
Create storage class -> create PVC -> create pod / run a Helm chart requiring PVs. A minimal sketch follows.
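A minimal sketch of the PVC + pod step, assuming the storage class above; names, image, and size are placeholders:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: vsphere-default
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# The PVC binds, but the pod stays in ContainerCreating with the
# MountVolume.MountDevice / "not attached to node" event.
kubectl get pvc test-pvc
kubectl describe pod test-pod
```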
Anything else we need to know?:
vSphere file services are enabled.
Logs kubelet:
Logs CSI-Controller:
Logs CSI Node (I just noticed the errors here, and I am not sure what they mean):
Environment:

- Kernel (e.g. uname -a): 5.15.0-116-generic

Thank you.