How to categorize this issue?
/area storage
/area scalability
/kind bug
/platform openstack
What happened:
When using a shoot cluster with provider OpenStack, configured with Manila enabled, and a single worker group in a specific zone a, I created a standard PVC using the csi-manila-nfs storage class and an nginx DaemonSet that uses the PVC.
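For reference, a minimal sketch of the manifests used (resource names, image, and mount path are illustrative; the storage class, capacity, and access mode match the PV shown below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-manila-nfs
  resources:
    requests:
      storage: 9Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data    # illustrative mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc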
The created pod ran correctly and the PV was provisioned:
$ k get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-jkcw2   1/1     Running   0          4m
$ k get pv
NAME                                                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-shoot--test--os-test-d08f1b69-64a1-42e7-89e4-b6970f308b29   9Gi        RWO            Delete           Bound    default/pvc   default        <unset>                          4m
After adding a new worker group with a new zone b, a new pod is created from the DaemonSet in the new zone; however, the pod is stuck in Pending because the PV has nodeAffinity only for the initially created zone:
$ k get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-jkcw2   1/1     Running   0          14m
nginx-nqz7r   0/1     Pending   0          9m22s
$ k describe pod nginx-nqz7r
...
Events:
  Type     Reason            Age                From               Message
  ----     ------            ---                ----               -------
  Warning  FailedScheduling  12m                default-scheduler  0/2 nodes are available: 1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  11m (x2 over 11m)  default-scheduler  0/2 nodes are available: 1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  98s                default-scheduler  0/2 nodes are available: 1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
$ k get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  ...
  spec:
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.cinder.csi.openstack.org/zone
            operator: In
            values:
            - a
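A quick way to confirm the mismatch is to compare this affinity with the zone labels on the nodes (the -L flag adds a label column; the PV name and label key are taken from the output above):

$ k get nodes -L topology.cinder.csi.openstack.org/zone
$ k get pv pv-shoot--test--os-test-d08f1b69-64a1-42e7-89e4-b6970f308b29 -o jsonpath='{.spec.nodeAffinity}'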
What you expected to happen:
When using a PVC consumed by a DaemonSet, the created PV should be updated/recreated to include all zones when a new worker group with a previously unused zone is added.
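In other words, once zone b is in use, the PV's node affinity would be expected to cover both zones, roughly like this (a sketch based on the affinity shown above; note that spec.nodeAffinity of an existing PV is immutable, so this would likely require recreating the PV object):

spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.cinder.csi.openstack.org/zone
          operator: In
          values:
          - a
          - b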
How to reproduce it (as minimally and precisely as possible):
See above
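Roughly, the triggering change is adding a second worker group in an unused zone to the shoot spec, e.g. (worker names are illustrative; machine details omitted):

spec:
  provider:
    workers:
    - name: worker-a
      zones:
      - a
    - name: worker-b    # newly added worker group in unused zone b
      zones:
      - b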
Anything else we need to know?:
Environment:
Gardener version (if relevant):
Extension version:
Kubernetes version (use kubectl version):
Cloud provider or hardware configuration:
Others: