
[Bug]: Storage not created #1044

Open
TristisOris opened this issue Feb 6, 2024 · 2 comments
Labels
on-user (pending on user)

Comments

@TristisOris

Describe the bug

apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: storage-pool-1
spec:
  type: Replica3
  storage:
    - node: node3
      path: /mnt/test
    - node: node4
      path: /mnt/test
    - node: node5
      path: /mnt/test

After applying this config, the pods are created but the storage is not, so the PVC created at the next step can't allocate any space:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv1
spec:
  storageClassName: kadalu.storage-pool-1
  accessModes:
    - ReadWriteMany # ReadWriteOnce ReadWriteMany
  resources:
    requests:
      storage: 10Gi

storageclass.storage.k8s.io "kadalu.storage-pool-1" not found
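For reference, a sketch of the StorageClass the operator is expected to generate for this pool; its absence is what produces the "not found" error above. The `kadalu.<pool-name>` naming follows from the PVC's `storageClassName`; the `provisioner` value and the `storage_name` parameter are assumptions based on the Kadalu CSI driver, not taken from this report:

```yaml
# Hypothetical sketch of the auto-generated StorageClass (assumed fields marked)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kadalu.storage-pool-1     # kadalu.<pool-name>
provisioner: kadalu               # assumed CSI driver name
parameters:
  storage_name: storage-pool-1    # assumed parameter linking back to the pool
```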

Environment:
kubectl kadalu version
kubectl-kadalu plugin: 1.2.0
kadalu pod(s) versions
pod/kadalu-csi-nodeplugin-65xpp: 1.2.0
pod/kadalu-csi-nodeplugin-87sgg: 1.2.0
pod/kadalu-csi-nodeplugin-fhbxn: 1.2.0
pod/kadalu-csi-nodeplugin-hzpjc: 1.2.0
pod/kadalu-csi-nodeplugin-p9bv9: 1.2.0
pod/kadalu-csi-nodeplugin-qhpvp: 1.2.0
pod/kadalu-csi-nodeplugin-wnm94: 1.2.0
pod/kadalu-csi-provisioner-0: 1.2.0
pod/operator-58ddcb697c-4hh84: 1.2.0
pod/server-storage-pool-1-0-0: 1.2.0
pod/server-storage-pool-1-1-0: 1.2.0
pod/server-storage-pool-1-2-0: 1.2.0

Screenshots or Logs
(two screenshots attached)

TristisOris (Author) commented Feb 21, 2024

I removed everything related to kadalu, installed it again, and the storage was created. So the problem was something in the configs from the first installation; it's weird that it affected any new configs.
But now the storage pool pods can't start normally:

[2024-02-21 08:15:55.823309 +0000] E [MSGID: 101018] [xlator.c:643:xlator_init] 0-storage-pool-1-posix: Initialization of volume failed. review your volfile again. [{name=storage-pool-1-posix}]
[2024-02-21 08:15:55.823333 +0000] E [MSGID: 101064] [graph.c:476:glusterfs_graph_init] 0-storage-pool-1-posix: initializing translator failed
[2024-02-21 08:15:55.823339 +0000] E [MSGID: 101174] [graph.c:825:glusterfs_graph_activate] 0-graph: init failed
[2024-02-21 08:15:55.823371 +0000] I [io-stats.c:4200:fini] 0-/bricks/storage-pool-1/data/brick: io-stats translator unloaded
[2024-02-21 08:15:55.823537 +0000] I [barrier.c:642:fini] 0-storage-pool-1-barrier: Disabling barriering and dequeuing all the queued fops
[2024-02-21 08:15:55.823993 +0000] W [glusterfsd.c:1501:cleanup_and_exit] (-->/opt/sbin/glusterfsd(+0x10706) [0x5555de9a1706] -->/opt/sbin/glusterfsd(glusterfs_process_volfp+0x258) [0x5555de9a1398] -->/opt/sbin/glusterfsd(cleanup_and_exit+0x57) [0x5555de99bf67] ) 0-: received signum (-1), shutting down
[2024-02-21 08:15:55.832065 +0000] E [name.c:383:af_inet_client_get_remote_sockaddr] 0-storage-pool-1-client-2: DNS resolution failed on host server-storage-pool-1-2-0.storage-pool-1
[2024-02-21 08:15:56,805] INFO [kadalulib - 432:monitor_proc] - Restarted Process        name=glusterfsd
[2024-02-21 08:15:56.814455 +0000] I [MSGID: 100030] [glusterfsd.c:2947:main] 0-/opt/sbin/glusterfsd: Started running version [{arg=/opt/sbin/glusterfsd}, {version=2023.10.03}, {cmdlinestr=/opt/sbin/glusterfsd -N --volfile-id storage-pool-1.node3.bricks-storage-pool-1-data-brick -p /var/run/gluster/glusterfsd-bricks-storage-pool-1-data-brick.pid -S /var/run/gluster/brick.socket --brick-name /bricks/storage-pool-1/data/brick -l - --xlator-option *-posix.glusterd-uuid=node-0 --process-name brick --brick-port 24007 --xlator-option storage-pool-1-server.listen-port=24007 -f /var/lib/kadalu/volfiles/storage-pool-1.node3.bricks-storage-pool-1-data-brick.vol}]
[2024-02-21 08:15:56.814717 +0000] I [glusterfsd.c:2637:daemonize] 0-glusterfs: Pid of current running process is 494
[2024-02-21 08:15:56.819725 +0000] I [socket.c:916:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 7
[2024-02-21 08:15:56.820048 +0000] I [MSGID: 0] [glusterfsd.c:1671:volfile_init] 0-glusterfsd-mgmt: volume not found, continuing with init
[2024-02-21 08:15:56.822856 +0000] I [rpcsvc.c:2708:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2024-02-21 08:15:56.823366 +0000] I [io-stats.c:3794:ios_sample_buf_size_configure] 0-/bricks/storage-pool-1/data/brick: Configure ios_sample_buf  size is 1024 because ios_sample_interval is 0
[2024-02-21 08:15:56.823910 +0000] E [MSGID: 113063] [posix-common.c:839:posix_init] 0-storage-pool-1-posix: mismatching volume-id (efcf1b38-d090-11ee-bc76-42d1bf565816) received. already is a part of volume 5f796784-bc1b-11ee-a19d-ae3c1976d0e5
[2024-02-21 08:15:56.824160 +0000] E [MSGID: 101018] [xlator.c:643:xlator_init] 0-storage-pool-1-posix: Initialization of volume failed. review your volfile again. [{name=storage-pool-1-posix}]
[2024-02-21 08:15:56.824183 +0000] E [MSGID: 101064] [graph.c:476:glusterfs_graph_init] 0-storage-pool-1-posix: initializing translator failed
[2024-02-21 08:15:56.824190 +0000] E [MSGID: 101174] [graph.c:825:glusterfs_graph_activate] 0-graph: init failed
[2024-02-21 08:15:56.824221 +0000] I [io-stats.c:4200:fini] 0-/bricks/storage-pool-1/data/brick: io-stats translator unloaded
[2024-02-21 08:15:56.824437 +0000] I [barrier.c:642:fini] 0-storage-pool-1-barrier: Disabling barriering and dequeuing all the queued fops
[2024-02-21 08:15:56.824925 +0000] W [glusterfsd.c:1501:cleanup_and_exit] (-->/opt/sbin/glusterfsd(+0x10706) [0x555816236706] -->/opt/sbin/glusterfsd(glusterfs_process_volfp+0x258) [0x555816236398] -->/opt/sbin/glusterfsd(cleanup_and_exit+0x57) [0x555816230f67] ) 0-: received signum (-1), shutting down
[2024-02-21 08:15:57,808] INFO [kadalulib - 432:monitor_proc] - Restarted Process        name=glusterfsd
[2024-02-21 08:15:57.817367 +0000] I [MSGID: 100030] [glusterfsd.c:2947:main] 0-/opt/sbin/glusterfsd: Started running version [{arg=/opt/sbin/glusterfsd}, {version=2023.10.03}, {cmdlinestr=/opt/sbin/glusterfsd -N --volfile-id storage-pool-1.node3.bricks-storage-pool-1-data-brick -p /var/run/gluster/glusterfsd-bricks-storage-pool-1-data-brick.pid -S /var/run/gluster/brick.socket --brick-name /bricks/storage-pool-1/data/brick -l - --xlator-option *-posix.glusterd-uuid=node-0 --process-name brick --brick-port 24007 --xlator-option storage-pool-1-server.listen-port=24007 -f /var/lib/kadalu/volfiles/storage-pool-1.node3.bricks-storage-pool-1-data-brick.vol}]
[2024-02-21 08:15:57.817570 +0000] I [glusterfsd.c:2637:daemonize] 0-glusterfs: Pid of current running process is 502
[2024-02-21 08:15:57.823351 +0000] I [socket.c:916:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 7
[2024-02-21 08:15:57.824113 +0000] I [MSGID: 0] [glusterfsd.c:1671:volfile_init] 0-glusterfsd-mgmt: volume not found, continuing with init
[2024-02-21 08:15:57.827098 +0000] I [rpcsvc.c:2708:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2024-02-21 08:15:57.827990 +0000] I [io-stats.c:3794:ios_sample_buf_size_configure] 0-/bricks/storage-pool-1/data/brick: Configure ios_sample_buf  size is 1024 because ios_sample_interval is 0
[2024-02-21 08:15:57.828689 +0000] E [MSGID: 113063] [posix-common.c:839:posix_init] 0-storage-pool-1-posix: mismatching volume-id (efcf1b38-d090-11ee-bc76-42d1bf565816) received. already is a part of volume 5f796784-bc1b-11ee-a19d-ae3c1976d0e5

leelavg (Collaborator) commented Apr 12, 2024

already is a part of volume 5f796784-bc1b-11ee-a19d-ae3c1976d0e5

  • The bricks weren't cleaned properly. Please force-clean the backend path before reusing it, or else use volume_id: 5f796784-bc1b-11ee-a19d-ae3c1976d0e5 in the KadaluStorage CR.
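For illustration, the second option might look like this in the CR. This is only a sketch: it reuses the exact volume-id reported in the error log and assumes the rest of the spec is unchanged from the original report; the `volume_id` field placement follows the Kadalu CR convention and should be checked against the Kadalu docs:

```yaml
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: storage-pool-1
spec:
  type: Replica3
  # ID the bricks already belong to, taken from the posix_init error above
  volume_id: 5f796784-bc1b-11ee-a19d-ae3c1976d0e5
  storage:
    - node: node3
      path: /mnt/test
    - node: node4
      path: /mnt/test
    - node: node5
      path: /mnt/test
```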

storageclass.storage.k8s.io "kadalu.storage-pool-1" not found

  • Operator logs would help to debug this further.

Unfortunately, the logs presented above don't lead to any conclusion.

leelavg added the on-user (pending on user) label on Apr 12, 2024