IPv6 quirks and other notes #986

Open · vosdev opened this issue Jan 19, 2025 · 4 comments

vosdev commented Jan 19, 2025

Heyhey,

So I have just deployed a cluster with dual-stack IPv4 and IPv6, supplying only an IPv6 CIDR for the built-in load balancer, as my network is primarily IPv6 and keeps IPv4 for legacy reasons only.

Cilium Ingress

Unfortunately, the built-in Cilium ingress service is set to SingleStack IPv4.

I thought you might want to put a note about this in the docs, either on the IPv6-only page or on the ingress page.

Unfortunately, setting it to PreferDualStack with IPv4 first in the list will not assign an IPv6 address, because it fails to assign an IPv4 address first. I am also unable to change the IP family to IPv6, because that gets rejected. I'm effectively forced to add IPv4 to my load balancer to get the ingress to work.
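
For anyone hitting the same thing, this is roughly the manual edit I mean. A sketch only: the service name cilium-ingress and the kube-system namespace are my assumptions about where the built-in ingress service lives, and the snap may revert the change if it re-applies its manifests.

# assumed service name/namespace; verify with: sudo k8s kubectl get svc -A | grep ingress
sudo k8s kubectl -n kube-system patch service cilium-ingress \
  --type merge \
  -p '{"spec":{"ipFamilyPolicy":"PreferDualStack","ipFamilies":["IPv4","IPv6"]}}'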

Other than that, I am super happy with the whole bootstrap process. Getting IPv6 to work on microk8s was messy; this is a huge improvement :)

Dualstack, primary IPv6

This works:

pod-cidr: 172.20.16.0/20,fd01::/108
service-cidr: 172.20.0.0/24,fd98::/108

This results in a failed bootstrap with a kube-apiserver error:

pod-cidr: fd01::/108,172.20.16.0/20
service-cidr: fd98::/108,172.20.0.0/24

Supplying the IPv6 CIDRs first is how you make IPv6 the primary family on a Kubernetes cluster. Is this not supported?

The error was: failed to POST /k8sd/cluster: failed to bootstrap new cluster: Failed to run post-bootstrap actions: kube-apiserver did not become ready in time: kubernetes endpoints not ready yet: context deadline exceeded
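
For reference, this is roughly how I supplied those CIDRs at bootstrap. A sketch: the file name is arbitrary, I'm assuming the --file flag here, and the keys are the ones shown above.

# bootstrap-config.yaml (IPv6-first ordering, the one that fails)
pod-cidr: fd01::/108,172.20.16.0/20
service-cidr: fd98::/108,172.20.0.0/24

sudo k8s bootstrap --file bootstrap-config.yaml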

Documentation

I assume that the bootstrap config documentation is still a work in progress? There are no explanations for any of the config options and their possible values.

Cilium

Finally, I think it would be a good idea to bundle the Cilium CLI, so that instead of the suggested:

sudo k8s kubectl exec -it cilium-97vcw -n kube-system -c cilium-agent \
  -- cilium status

we could do k8s cilium status and have the k8s tool select the proper pod and container for us.
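
In the meantime, something like this picks the agent pod automatically. A sketch: I'm assuming the agent pods carry the standard k8s-app=cilium label.

# assumes the standard k8s-app=cilium label on the Cilium agent pods
POD=$(sudo k8s kubectl -n kube-system get pods -l k8s-app=cilium \
  -o jsonpath='{.items[0].metadata.name}')
sudo k8s kubectl exec -it "$POD" -n kube-system -c cilium-agent -- cilium status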

Cilium loadbalancer

Seeing how you use Cilium for CNI and ingress (I didn't even know that was a possibility), why not also use it for the load balancer instead of MetalLB?

https://docs.cilium.io/en/stable/network/lb-ipam/
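
With LB-IPAM an address pool is just a custom resource, something like the sketch below (based on the linked docs; recent Cilium releases use spec.blocks, older ones spec.cidrs):

apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: ipv6-pool
spec:
  blocks:
    - cidr: fd98:1::/112   # example IPv6 range handed out to LoadBalancer services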

Gateway

Nowhere in the documentation does it say what this feature is, or at least I could not find it in the ten minutes I searched. I had to go through this repo and read the code to see what it does. It seems to enable the Cilium Gateway API.
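
From the code it looks like it is toggled like the other built-in features; this is my assumption based on the repo, not the docs:

# assumed invocation, mirroring the other "k8s enable <feature>" commands
sudo k8s enable gateway

After that, Gateway and HTTPRoute resources should be handled by Cilium.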

NGINX vs Cilium ingress

Another quirk I discovered is that with NGINX ingress the following config sufficed in my Ingress resources:

  tls:
    - secretName: letsencrypt-wildcard

For Cilium ingress this does not work. You have to explicitly specify the hostnames, even though my certificate is a wildcard:

  tls:
    - secretName: letsencrypt-wildcard
      hosts:
      - sub.example.com

This is more Cilium-related than Canonical k8s, but as I am moving from microk8s to k8s I was surprised to see that my Ingress resources did not work, and I tracked it down to this specific part of the config.
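
For completeness, this is the shape that works for me with Cilium ingress (hostnames and the backend service are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: cilium
  tls:
    - secretName: letsencrypt-wildcard
      hosts:
        - sub.example.com        # must be listed even with a wildcard certificate
  rules:
    - host: sub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-backend   # placeholder backend
                port:
                  number: 80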

bschimke95 commented Jan 20, 2025

Hey @vosdev,

Thank you so much for your detailed feedback! It’s incredibly valuable and helps us improve the k8s-snap experience. We truly appreciate your input.

Regarding your comments:

bschimke95 self-assigned this Jan 20, 2025

vosdev commented Jan 22, 2025

> Cilium Load Balancer: We initially attempted to use Cilium’s built-in load balancer but faced IPv6-related issues. For now, we’ve opted to use MetalLB instead.

Ah, that's a shame. When Cilium still shipped MetalLB bundled with it, I was facing IPv6-related issues with the MetalLB implementation. I had hoped the new Cilium BGP control plane would work better.

Unfortunately around 50% of the issues I report on GitHub are IPv6 related...

> Cilium Ingress: IPv6 support for Cilium ingress should be working (we have an integration test for it), but our documentation may need clarification. I’ll review the docs and see how we can make this clearer.

I wonder how; I'll keep an eye on the docs. Until then I will keep manually editing the service to make it dual-stack instead of single-stack IPv4. Do you test Cilium ingress with single-stack IPv4 and IPv6, or with dual-stack IPv4/IPv6? The only IPv6-related test I could find was a separate NGINX service with RequireDualStack, nothing related to the Cilium ingress.

TIL about the Gateway API, I'll look into it!

Thanks for the detailed response :)

berkayoz (Member) commented:

Hey @vosdev,

After checking, I can confirm we don't have a test for Cilium ingress with dual-stack yet. Regarding the ipFamilyPolicy issue, it seems this setting is missing from the upstream manifest.

We will look into contributing the options/fields upstream and, in the meantime, add a mention of manual patching to our docs.

I've tested the ingress with PreferDualStack and got an IPv6 address on the service. We'll also be adding a test for the ingress on dual-stack. To make sure we're not missing something: do you want your ingress service to be IPv6-only, or is the main goal a dual-stack service?

Thanks!


vosdev commented Jan 27, 2025

My desire was to be IPv6-only, which currently does not work because the service is SingleStack IPv4. Updating IPv4 -> IPv6 gets rejected :( I haven't had time to look into it for the past few days.
