What happened?
Hi Team,
I have updated the variable kube_config_dir: /srv1/etc/kubernetes, but during kubeadm init the kubeconfig path is still /etc/kubernetes: admin.conf and all the manifests are created under /etc/kubernetes instead of /srv1/etc/kubernetes.
kubeadm init logs:
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
What did you expect to happen?
The kubeconfig files and manifests should be created in the directory specified by the kube_config_dir variable.
How can we reproduce it (as minimally and precisely as possible)?
Update the kube_config_dir variable in inventory/sample/group_vars/k8s_cluster/k8s_cluster.yml and execute the cluster.yml playbook. All the configuration should be created and initialized in the new kube_config_dir instead of /etc/kubernetes; see the snippet below.
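For reference, the override would look like this in inventory/sample/group_vars/k8s_cluster/k8s_cluster.yml (the /srv1 prefix is just the example path used in this report):

# inventory/sample/group_vars/k8s_cluster/k8s_cluster.yml
kube_config_dir: /srv1/etc/kubernetes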
OS
Rocky Linux 9
Version of Ansible
2.14
Version of Python
3.6.8
Version of Kubespray (commit)
1.28
Network plugin used
calico
Full inventory with variables
ansible-playbook -i /inventory/sample/hosts.yaml -b -v cluster.yml --become-user=root -e ansible_python_interpreter=/bin/python3 -e container_manager=containerd -e kube_config_dir=/srv/etc/kubernetes
Command used to invoke ansible
ansible-playbook -i /inventory/sample/hosts.yaml -b -v cluster.yml --become-user=root -e ansible_python_interpreter=/bin/python3 -e container_manager=containerd -e kube_config_dir=/srv/etc/kubernetes
Output of ansible run
kubeadm init logs:
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
Anything else we need to know
No response
Not a bug: the /etc/kubernetes path is hardcoded in kubeadm by design.
The only way to override it is the kubeadm --rootfs flag, which performs a chroot; see the sketch below.
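A minimal sketch of that workaround (assuming /srv1 contains a complete root filesystem layout, since --rootfs chroots into it; kubeadm marks the flag as experimental):

# [EXPERIMENTAL] chroot into /srv1 before running; kubeadm still writes to
# etc/kubernetes, but relative to the new root, so on the host the files
# land under /srv1/etc/kubernetes
sudo kubeadm init --rootfs /srv1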