
When creating two replicas in a Deployment with a volume mount, one pod enters the Error or CrashLoopBackOff state. #124691

Open
Sivakajan-tech opened this issue May 4, 2024 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

Sivakajan-tech commented May 4, 2024

What happened?

When creating a MySQL Deployment with two replicas and a volume mount, after some time one pod enters the CrashLoopBackOff state while the other keeps running successfully.

[Screenshot: 2024-05-04 at 14:14:22]

When the volume mount is removed, both replicas run fine, but then the requirement for a shared volume is not met.

What did you expect to happen?

Both pods should be in the same state.

How can we reproduce it (as minimally and precisely as possible)?

Create PV and PVC

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
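
To apply these, the two manifests above can be saved to one file and applied with kubectl; the filename mysql-pv-pvc.yaml below is only an assumed example.

# Assumed filename for the PV/PVC manifests above.
kubectl apply -f mysql-pv-pvc.yaml
# The claim should bind to mysql-pv and report STATUS "Bound".
kubectl get pv,pvc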

Create the MySQL Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: pwd
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
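
As a convenience for reproducing, the Deployment can be applied and the replicas watched; the filename mysql-deployment.yaml is only an assumed example.

# Assumed filename for the Deployment manifest above.
kubectl apply -f mysql-deployment.yaml
# Watch the two replicas; after a while one of them starts crash-looping in my setup.
kubectl get pods -l app=mysql -w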

After a few seconds, both pods show as Running (kubectl get pods).
Then open a shell in the first pod: kubectl exec --stdin --tty mysql-pod-name-0 -- /bin/bash. Open MySQL from that shell with mysql -u root -ppwd. You can now use MySQL; play with it and create a database for later reference.

Then exit from pod-0 and open a shell in the second pod: kubectl exec --stdin --tty mysql-pod-name-1 -- /bin/bash. From pod-1's shell, open MySQL with mysql -u root -ppwd. You will get an error like:

bash-4.4# mysql -u root -ppwd
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2

Alternatively, delete pod-0 (the working pod) with kubectl delete pod pod_name_0. The Deployment then creates a replacement pod (call it pod-2). Repeat the same procedure to access MySQL from both pods: accessing MySQL from pod-1 now works, with the previously created database present, while pod-2 shows the same error that pod-1 showed earlier.
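
For extra context when triaging, the failing replica's state, logs, and events can be inspected as follows; the pod name here is a placeholder to be replaced with the real one.

# List the replicas and note which one is in CrashLoopBackOff.
kubectl get pods -l app=mysql
# Logs of the failing replica's current and previous container runs.
kubectl logs mysql-pod-name-1
kubectl logs --previous mysql-pod-name-1
# Recent events and restart reasons for the failing replica.
kubectl describe pod mysql-pod-name-1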

Anything else we need to know?

Kubernetes version

$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.0

Cloud provider

OS version

uname -a
Darwin Sivakajans-MacBook-Pro.local 23.3.0 Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000 arm64

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@Sivakajan-tech Sivakajan-tech added the kind/bug Categorizes issue or PR as related to a bug. label May 4, 2024
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 4, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 4, 2024
@pranav-pandey0804

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 4, 2024