
Modifying kube_config_dir to a non-default directory doesn't change the default location during kubeadm init #11064

Open
nx2804 opened this issue Apr 8, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

nx2804 commented Apr 8, 2024

What happened?

Hi Team,

I have updated the variable kube_config_dir to /srv1/etc/kubernetes, but during kubeadm init the kubeconfig path is still directed to /etc/kubernetes, and admin.conf and all manifests are created under /etc/kubernetes instead of /srv1/etc/kubernetes.

kubeinit logs

[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"

What did you expect to happen?

The kubeconfig files and manifests should be created in the directory specified by the kube_config_dir variable.

How can we reproduce it (as minimally and precisely as possible)?

Update the kube_config_dir variable in inventory/sample/group_vars/k8s_cluster/k8s_cluster.yml and execute the cluster.yml playbook. All configuration should be created and initialized in the new kube_config_dir instead of /etc/kubernetes, as in the sketch below.
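
A minimal sketch of the override, assuming the stock sample inventory layout and using the path from this report:

# inventory/sample/group_vars/k8s_cluster/k8s_cluster.yml
kube_config_dir: /srv1/etc/kubernetes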

OS

Rocky Linux 9

Version of Ansible

2.14

Version of Python

3.6.8

Version of Kubespray (commit)

1.28

Network plugin used

calico

Full inventory with variables

ansible-playbook -i /inventory/sample/hosts.yaml -b -v cluster.yml --become-user=root -e ansible_python_interpreter=/bin/python3 -e container_manager=containerd -e kube_config_dir=/srv/etc/kubernetes

Command used to invoke ansible

ansible-playbook -i /inventory/sample/hosts.yaml -b -v cluster.yml --become-user=root -e ansible_python_interpreter=/bin/python3 -e container_manager=containerd -e kube_config_dir=/srv/etc/kubernetes

Output of ansible run

kubeinit logs

[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"

Anything else we need to know

No response

nx2804 added the kind/bug label on Apr 8, 2024
neolit123 (Member) commented:

I have updated the variable kube_config_dir: /srv1/etc/kubernetes but during kubeadm init the kubeconfig path is still directed to /etc/kubernetes. and all admin.conf, manifests are getting created under /etc/kubernetes instead of /srv1/etc/kubernetes.

Not a bug: the /etc/kubernetes path is hardcoded in kubeadm by design.
The only way to override it is to use the kubeadm --rootfs flag, which performs a chroot.
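
For illustration only, a hedged sketch of how the experimental --rootfs flag could be invoked so that kubeadm writes under the prefix used in this report; Kubespray does not wire this flag up for you, and any remaining kubeadm arguments are omitted here:

# [EXPERIMENTAL] treat /srv1 as the host root filesystem, so kubeadm writes its
# kubeconfig files and static Pod manifests under /srv1/etc/kubernetes
kubeadm init --rootfs /srv1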
