Add E2E for volumegroupsnapshot for RBD namespace #5084

Open
Madhu-1 opened this issue Jan 16, 2025 · 4 comments
Labels: component/rbd (Issues related to RBD), component/testing (Additional test cases or CI work)

Madhu-1 (Collaborator) commented Jan 16, 2025

Currently, we only have an e2e test that verifies volumegroupsnapshot works with a pool using the implicit (default) namespace. We should also add a test case to ensure that VGS works with other RBD namespaces.
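A rough sketch of what the new case could verify, assuming a pool named replicapool with a rados namespace namespace-a (illustrative names, not fixed by this issue): provision PVCs from a StorageClass bound to that namespace, take a VolumeGroupSnapshot of them, then check that the RBD group and its snapshots were created inside the namespace rather than in the pool's default namespace:

# the group backing the VolumeGroupSnapshot should appear in the rados namespace
# (the default group name prefix is csi-vol-group-)
$ rbd group ls replicapool --namespace namespace-a

# the per-image snapshots belonging to that group
$ rbd group snap ls replicapool/namespace-a/<group-name>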

Madhu-1 added the component/rbd and component/testing labels on Jan 16, 2025
OdedViner (Contributor) commented:
/assign


Thanks for taking this issue! Let us know if you have any questions!

OdedViner (Contributor) commented Jan 26, 2025

Hi @Madhu-1,

I want to understand the process from scratch, so I ran through it manually. However, I encountered an issue while trying to install the CRD for VolumeGroupSnapshotClass. I attempted to use this file, but it is not working:
https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/templates/csidriver-crd.yaml

cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: rbd.csi.ceph.com # Replace with your CSI driver name if different
  labels:
    app: ceph-csi-rbd # Replace with your application name
    chart: ceph-csi-rbd-1.0.0 # Replace with your chart version or application version
    release: ceph-csi-release # Replace with your release name
    heritage: Kubernetes # Indicate the system managing this resource
spec:
  attachRequired: true # Indicates the driver needs an attach operation
  podInfoOnMount: false # Pod info is not required during mount
  fsGroupPolicy: ReadWriteOnceWithFSType # Specify the policy (e.g., None or ReadWriteOnceWithFSType)
  seLinuxMount: true # Enable SELinux context mounts
EOF

My procedure:

1. Create pool-test.yaml:

cat <<EOF | oc create -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 1
EOF

Wait for the CephBlockPool to reach the Ready state:
$ kubectl get CephBlockPool replicapool -n rook-ceph
NAME          PHASE   TYPE         FAILUREDOMAIN   AGE
replicapool   Ready   Replicated   osd             23s

2. Create CephBlockPoolRadosNamespace:

cat <<EOF | oc create -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
  name: namespace-a
  namespace: rook-ceph # namespace:cluster
spec:
  # The name of the RADOS namespace. If not set, the default is the name of the CR.
  # name: namespace-a
  # blockPoolName is the name of the CephBlockPool CR where the namespace will be created.
  blockPoolName: replicapool
EOF

Wait for the CephBlockPoolRadosNamespace to reach the Ready state:
$ kubectl get CephBlockPoolRadosNamespace namespace-a -n rook-ceph
NAME          PHASE   BLOCKPOOL     AGE
namespace-a   Ready   replicapool   24s

3. Get the cluster_id:

$ kubectl -n rook-ceph get cephblockpoolradosnamespace/namespace-a -o jsonpath='{.status.info.clusterID}'
80fc4f4bacc064be641633e6ed25ba7e
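
(Optional sanity check, not part of the original steps: the generated clusterID should also appear in Rook's CSI config map, mapped to the rados namespace. The config map name and JSON layout below are assumed from upstream Rook/ceph-csi defaults.)

$ kubectl -n rook-ceph get configmap rook-ceph-csi-config -o jsonpath='{.data.csi-cluster-config-json}'
# expect an entry whose "clusterID" matches the value above and whose
# "rbd" section contains "radosNamespace": "namespace-a"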

4. Create a StorageClass with the relevant cluster_id:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com # csi-provisioner-name
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: 80fc4f4bacc064be641633e6ed25ba7e

  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`.
  csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete

5. Create a PVC:

cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
EOF

6. Verify the PVC is in Bound state:

$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
rbd-pvc   Bound    pvc-763a08e3-30fd-4cd7-b261-3691c37d3633   1Gi        RWO            rook-ceph-block   <unset>                 4s
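
(Optional check, not part of the original steps: the backing RBD image should land inside the rados namespace rather than in the pool's default namespace; the csi-vol- image name prefix and output shown here are illustrative.)

$ rbd ls replicapool --namespace namespace-a
csi-vol-<uuid>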

7. Install the snapshot CRDs:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-5.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-5.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-5.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml

$ kubectl get crd | grep snapshot.storage.k8s.io
volumesnapshotclasses.snapshot.storage.k8s.io    2025-01-26T09:56:43Z
volumesnapshotcontents.snapshot.storage.k8s.io   2025-01-26T09:56:44Z
volumesnapshots.snapshot.storage.k8s.io          2025-01-26T09:56:44Z

8. Create VolumeGroupSnapshotClass:

$ cat <<EOF | kubectl create -f -
apiVersion: groupsnapshot.storage.k8s.io/v1beta1
kind: VolumeGroupSnapshotClass
metadata:
  name: csi-rbdplugin-groupsnapclass
driver: rbd.csi.ceph.com
parameters:
  # String representing a Ceph cluster to provision storage from.
  # Should be unique across all Ceph clusters in use for provisioning,
  # cannot be greater than 36 bytes in length, and should remain immutable for
  # the lifetime of the StorageClass in use
  clusterID: 80fc4f4bacc064be641633e6ed25ba7e

  # eg: pool: rbdpool
  pool: replicapool

  # (optional) Prefix to use for naming RBD groups.
  # If omitted, defaults to "csi-vol-group-".
  # volumeGroupNamePrefix: "foo-bar-"

  csi.storage.k8s.io/group-snapshotter-secret-name: csi-rbd-secret
  csi.storage.k8s.io/group-snapshotter-secret-namespace: default
deletionPolicy: Delete
EOF
error: resource mapping not found for name: "csi-rbdplugin-groupsnapclass" namespace: "" from "STDIN": no matches for kind "VolumeGroupSnapshotClass" in version "groupsnapshot.storage.k8s.io/v1beta1"
ensure CRDs are installed first

Madhu-1 (Collaborator, Author) commented Jan 27, 2025

@OdedViner before creating the VolumeGroupSnapshotClass, please install the VolumeGroupSnapshot CRDs first, the same way you installed the VolumeSnapshot CRDs in step 7.
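
A minimal sketch of what that could look like, assuming external-snapshotter release-8.2 (the release-5.0 manifests from step 7 predate group snapshots; use whichever release ships the groupsnapshot.storage.k8s.io/v1beta1 CRDs):

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-8.2/client/config/crd/groupsnapshot.storage.k8s.io_volumegroupsnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-8.2/client/config/crd/groupsnapshot.storage.k8s.io_volumegroupsnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-8.2/client/config/crd/groupsnapshot.storage.k8s.io_volumegroupsnapshotcontents.yaml

# the snapshot-controller that reconciles these objects also needs group
# snapshot support enabled; in external-snapshotter v8 this is assumed to be
# --feature-gates=CSIVolumeGroupSnapshot=true on the controller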
