This issue was moved to a discussion. You can continue the conversation there.

IPv6 first Dualstack bootstrap #1200

Closed
abasitt opened this issue Jan 19, 2024 · 1 comment

Comments


abasitt commented Jan 19, 2024

Really happy to see the dual-stack support. It bootstrapped my cluster like a charm when I tried the {ipv4},{ipv6} combo, or in other words what I call an IPv4-first cluster.

But if I try {ipv6},{ipv4}, an IPv6-first cluster, it fails at the bootstrap templates for Cilium and CoreDNS. Somehow it is looking for 127.0.0.1 as the kube-api address.

For example, below is the error from Cilium:
Error: Kubernetes cluster unreachable: Get "https://127.0.0.1:6444/version": dial tcp 127.0.0.1:6444: connect: connection refused

k3s itself seems to be fine and up.
k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2024-01-19 20:03:58 +08; 2min 1s ago
Docs: https://k3s.io
Main PID: 3904 (k3s-server)
Tasks: 50
Memory: 899.1M
CPU: 30.004s
CGroup: /system.slice/k3s.service

 ----

Final message of the failure:
TASK [Coredns | Wait for Coredns to rollout] **************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible_collections.kubernetes.core.plugins.module_utils.k8s.exceptions.CoreException: Failed to gather information about Job(s) even after waiting for 360 seconds
fatal: [k3s-m1]: FAILED! => {"changed": false, "msg": "Failed to gather information about Job(s) even after waiting for 360 seconds"}

NO MORE HOSTS LEFT ****************************************************************************************************************************************************************************************************

PLAY RECAP ************************************************************************************************************************************************************************************************************
k3s-m1 : ok=78 changed=16 unreachable=0 failed=1 skipped=59 rescued=0 ignored=0 .

Any tips to make the template work?

Owner

onedr0p commented Jan 19, 2024

There is definitely more work to be done on IPv6 as laid out in #1148. One major blocker is that Cilium does not support L2 announcements with IPv6.

Since I have no way to test IPv6 (single-stack, dual-stack, or otherwise), I need to lean on the community and people who use the template to help implement and test this functionality (PRs are accepted). Reading the k3s docs on networking is a good start. However, since I do not use flannel in the template, some of those docs can be omitted.

A few things stood out to me in that doc, especially this:

When defining cluster-cidr and service-cidr with IPv6 as the primary family, the node-ip of all cluster members should be explicitly set, placing node's desired IPv6 address as the first address. By default, the kubelet always uses IPv4 as the primary address family.
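The guidance quoted above could be sketched as a k3s server config along these lines. This is only an illustration, not from the thread: the CIDR ranges and node addresses (2001:db8::10, 10.0.0.10, and the documentation-prefix pod/service ranges) are placeholder values.

```yaml
# /etc/rancher/k3s/config.yaml -- hypothetical IPv6-first example
# List the IPv6 range first in each CIDR so IPv6 becomes the primary family.
cluster-cidr: "2001:db8:42::/56,10.42.0.0/16"
service-cidr: "2001:db8:43::/112,10.43.0.0/16"
# Per the quoted k3s docs, node-ip should be set explicitly on every
# cluster member, with the node's desired IPv6 address first; otherwise
# the kubelet defaults to IPv4 as the primary address family.
node-ip: "2001:db8::10,10.0.0.10"
```

On an IPv4-first cluster the same keys would simply list the IPv4 ranges and address first, which may explain why that ordering works out of the box.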

Repository owner locked and limited conversation to collaborators Jan 19, 2024
@onedr0p onedr0p converted this issue into discussion #1202 Jan 19, 2024

