When creating two replicas in a deployment with a volume mount, one pod goes into the Error or CrashLoopBackOff state. #124691
Labels: kind/bug, needs-triage, sig/storage
What happened?
When creating a MySQL deployment with two replicas and a volume mount, after some time one pod goes into the CrashLoopBackOff state while the other keeps running successfully. If we remove the volume mount and try again, both replicas work the same way, but then the requirement for a common volume is not met.
What did you expect to happen?
Both pods should be in the same state.
How can we reproduce it (as minimally and precisely as possible)?
1. Create a PV and a PVC (a minimal sketch is shown below).
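The issue does not include the actual manifests; a minimal hostPath-based PV/PVC pair like the following would match the setup described. The names, size, and hostPath location are assumptions:

```yaml
# Sketch only: names, capacity, and hostPath are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```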
2. Create the MySQL deployment with two replicas and the PVC mounted at the data directory (a sketch follows).
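Again as a sketch, a Deployment matching the description: two replicas sharing one PVC mounted at MySQL's data directory. The image tag and the plain-text root password are assumptions (the password matches the `mysql -u root -ppwd` step below):

```yaml
# Sketch only: image tag, password handling, and names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: pwd          # matches the `mysql -u root -ppwd` step
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql   # MySQL's data directory
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pvc
```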
3. After a few seconds, both pods show as Running (`kubectl get pods`).
4. Open a shell in the first pod: `kubectl exec --stdin --tty mysql-pod-name-0 -- /bin/bash`.
5. Open MySQL in that shell with `mysql -u root -ppwd`. You can now access MySQL; play with it and create a database for later reference.
6. Exit pod-0 and open a shell in the second pod: `kubectl exec --stdin --tty mysql-pod-name-1 -- /bin/bash`.
7. Open MySQL there with `mysql -u root -ppwd`. You will get an error.
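For reference, the verification sequence from steps 3-7 as a single sketch; the pod names are the same placeholders used above (substitute the names reported by `kubectl get pods`):

```bash
kubectl get pods                                          # both pods Running

kubectl exec --stdin --tty mysql-pod-name-0 -- /bin/bash  # shell in pod-0
mysql -u root -ppwd                                       # works; create a test db
exit

kubectl exec --stdin --tty mysql-pod-name-1 -- /bin/bash  # shell in pod-1
mysql -u root -ppwd                                       # fails with an error
```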
Alternatively, delete pod-0 (the working pod) with `kubectl delete pod pod_name_0`. You will then see a new pod (call it pod-2) created in its place. Repeat the same procedure to access MySQL from both pods: pod-1 now works with the previously created database, while pod-2 shows the same error that pod-1 showed earlier.
Anything else we need to know?
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)