
After clean Install: Port occupied #36

Open
dklueh79 opened this issue Oct 22, 2017 · 3 comments

Comments

dklueh79 commented Oct 22, 2017

During ansible-playbook -i hosts kubernetes.yml:

TASK [kubernetes : Run kubeadm init on master] ************************************************************************************************************************************
fatal: [192.168.0.230]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--config", "/etc/kubernetes/kubeadm.yml"], "delta": "0:00:06.811351", "end": "2017-10-22 15:50:01.583502", "failed": true, "rc": 2, "start": "2017-10-22 15:49:54.772151", "stderr": "[preflight] Some fatal errors occurred:\n\tPort 10250 is in use\n\tPort 10251 is in use\n\tPort 10252 is in use\n\t/etc/kubernetes/manifests is not empty\n\tPort 2379 is in use\n\t/var/lib/etcd is not empty\n[preflight] If you know what you are doing, you can skip pre-flight checks with --skip-preflight-checks", "stdout": "[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.\n[init] Using Kubernetes version: v1.8.2-beta.0\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks", "stdout_lines": ["[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.", "[init] Using Kubernetes version: v1.8.2-beta.0", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks"], "warnings": []}
to retry, use: --limit @/root/k8s-pi/kubernetes.retry

rhuss (Collaborator) commented Oct 23, 2017

Sorry, the current check for whether Kubernetes is running is a bit limited. It uses kubectl get nodes, and if that fails with exit code 1 it is assumed that no cluster is running, so kubeadm init is called again.

I think this should be made more robust. Any ideas? (Maybe we should run kubeadm upgrade plan or so ...)
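The detection described above boils down to treating a nonzero exit code from kubectl get nodes as "no cluster here". A minimal sketch of that decision logic, with the exit status mocked as a parameter so the fragile part is visible (the function name and structure are illustrative, not taken from the playbook):

```shell
#!/bin/sh
# Sketch of the current detection: a failing "kubectl get nodes" is taken
# to mean "no cluster", so kubeadm init is run again.
# kubectl_rc stands in for the real command's exit status.
needs_init() {
  kubectl_rc=$1
  [ "$kubectl_rc" -ne 0 ]
}

# Exit status 1 (e.g. API server down on a half-configured node) still
# triggers init, which is exactly the failure mode reported in this issue:
# kubeadm init then hits occupied ports and non-empty directories.
if needs_init 1; then
  echo "would run: kubeadm init --config /etc/kubernetes/kubeadm.yml"
fi
```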

dklueh79 (Author) commented

Is there any solution for completing the Kubernetes setup?

rhuss (Collaborator) commented Oct 30, 2017

@dklueh79 What do you mean? Actually, the current detection works when Kubernetes has been properly installed and the nodes are running; in that case the kubeadm init step is skipped. However, when the initial setup didn't work and you are left in a half-baked state, kubectl get nodes fails, but kubeadm init fails as well. One should probably do a full reset then.

So you should try a full reset when this error occurs, before trying again:

ansible-playbook -i hosts kubernetes-full-reset.yml
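Putting the suggestion together, the recovery sequence might look like this (assuming the playbooks from this repository and an inventory file named hosts, as used elsewhere in the thread):

```shell
# Wipe the half-initialized state flagged by the preflight checks
# (ports 10250/10251/10252/2379, /etc/kubernetes/manifests, /var/lib/etcd),
# then rerun the setup playbook from scratch.
ansible-playbook -i hosts kubernetes-full-reset.yml
ansible-playbook -i hosts kubernetes.yml
```

These commands need the actual cluster hosts reachable over SSH, so they are a transcript of the intended workflow rather than something runnable in isolation.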
