data pool for metadata pool isn't found #5103
This is a debug message, and its formatting looks broken:
It comes from this line: `ceph-csi/internal/rbd/rbd_util.go`, line 1606 (commit 935027f).
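For context, a minimal, self-contained Go sketch (not the actual ceph-csi code) of how concatenating fragments into a format string leaves an unmatched `%s` verb, which `fmt` then renders as `%!s(MISSING)`:

```go
package main

import "fmt"

func main() {
	// The message is assembled from fragments; the second fragment
	// carries its own "%s" verb into the final format string.
	logMsg := "rbd: create %s"
	logMsg += " with data pool %s" // illustrative fragment, not the real code

	// Two "%s" verbs but only one argument: fmt renders the missing
	// one as "%!s(MISSING)" instead of returning an error.
	fmt.Printf(logMsg+"\n", "my-vol-0001")
	// Output: rbd: create my-vol-0001 with data pool %!s(MISSING)
}
```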
That also means that setting the data-pool did not fail, as the debug log message is only written at the end of the function, in case no failures occurred. The real problem seems to be this:
This happens at the time of image creation: `ceph-csi/internal/rbd/rbd_util.go`, lines 456 to 459 (commit 15ffa48).
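For readers without the source open, a hedged sketch of the pattern around those lines, using the go-ceph API that ceph-csi builds on (simplified; the function name and arguments here are illustrative):

```go
package rbdutil

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	librbd "github.com/ceph/go-ceph/rbd"
)

// createImage is a simplified sketch, not the verbatim ceph-csi code:
// the data pool is just one more image option applied before the image
// is created, so an option librbd rejects surfaces as an "invalid
// argument" error at image-creation time.
func createImage(ioctx *rados.IOContext, name, dataPool string, size uint64) error {
	options := librbd.NewRbdImageOptions()
	defer options.Destroy()

	if dataPool != "" {
		if err := options.SetString(librbd.RbdImageOptionDataPool, dataPool); err != nil {
			return fmt.Errorf("failed to set data pool %q: %w", dataPool, err)
		}
	}

	return librbd.CreateImage(ioctx, name, size, options)
}
```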
It is not clear which image option could be invalid.
When a `dataPool` is passed while creating a volume, there is a `%!s(MISSING)` piece added to a debug log message. By using `fmt.Sprintf()` instead of concatenating the string, this should be gone now. Updates: ceph#5103 Signed-off-by: Niels de Vos <[email protected]>
When a `dataPool` is passed while creating a volume, there is a `%!s(MISSING)` piece added to a debug log message. When concatenating strings, the `%s` formatter is not needed. Updates: ceph#5103 Signed-off-by: Niels de Vos <[email protected]>
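Both commit messages describe the same fix; a minimal sketch of the corrected logging pattern (identifiers are illustrative, not the exact ceph-csi code):

```go
package main

import (
	"fmt"
	"log"
)

func main() {
	imageName, dataPool := "my-vol-0001", "my-rbd" // illustrative values

	// Format each fragment immediately so no unmatched "%s" verb is
	// left in the message that is eventually logged.
	logMsg := fmt.Sprintf("rbd: create %s", imageName)
	if dataPool != "" {
		logMsg += fmt.Sprintf(" with data pool %s", dataPool)
	}
	log.Print(logMsg) // prints "rbd: create my-vol-0001 with data pool my-rbd"
}
```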
Describe the bug
Creating a PVC using a StorageClass with different data & metadata pools fails.
The two pools are there:
The storage class:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: 'false'
provisioner: rbd.csi.ceph.com
parameters:
  pool: my-rbd-repl
  dataPool: my-rbd
  clusterID: ....
  volumeNamePrefix: my-vol-
  imageFeatures: layering
  imageFormat: "2"
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
volumeBindingMode: Immediate
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
```
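For completeness, a minimal PVC that would exercise this StorageClass (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
```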
Environment details
Mounter used for mounting PVC (for cephfs its fuse or kernel, for rbd its krbd or rbd-nbd):
Steps to reproduce
Steps to reproduce the behavior:
Actual results
I get an error implying that ceph-csi can't find the data pool.
Expected behavior
For ceph-csi to use `my-rbd-repl` for metadata and `my-rbd` for data.
Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs
of the below containers.
This is from the provisioner that's doing the work: