Avoid subnets that don't have available IP Addresses #5234

Open
ellistarn opened this issue Dec 5, 2023 · 11 comments · May be fixed by #7310
Labels
feature New feature or request

Comments

@ellistarn
Contributor

ellistarn commented Dec 5, 2023

Description

What problem are you trying to solve?

When a subnet is almost out of IPs, Karpenter will continue to launch nodes in it, leading to the VPC CNI failing to become ready, and the node becoming unready as well. In many cases, there's nothing we can do, but if another subnet has IP addresses, and the workload does not have scheduling constraints that prevent it from running in those zones, we should launch in the subnets with available IPs.
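The proposed behavior can be sketched as a small selection function. This is a hypothetical illustration, not Karpenter's implementation; `available_ips` stands in for the `AvailableIpAddressCount` field that EC2 reports per subnet, and the `min_free` threshold is an assumed tunable:

```python
# Hypothetical sketch of the proposed behavior: skip subnets whose free-IP
# count is below a threshold, and among the remaining candidates launch into
# the one with the most available IPs. "available_ips" stands in for EC2's
# AvailableIpAddressCount.

def pick_subnet(subnets, min_free=8):
    usable = [s for s in subnets if s["available_ips"] >= min_free]
    # Fall back to any subnet that still has *some* IPs rather than failing outright.
    candidates = usable or [s for s in subnets if s["available_ips"] > 0]
    if not candidates:
        return None  # every subnet is exhausted; nothing Karpenter could do
    return max(candidates, key=lambda s: s["available_ips"])
```

If the workload's scheduling constraints pin it to a zone whose subnets are all exhausted, the fallback still returns the least-bad option rather than refusing to launch.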

How important is this feature to you?

Managing IPv4 space is hard, and anything we can do to alleviate these pains would help customers on their path to IPv6.

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@ellistarn added the feature and needs-triage labels and removed the needs-triage label on Dec 5, 2023
@martinsmatthews

@ellistarn if we have multiple subnets available for a zone, will Karpenter choose the least-full subnet, as described here: https://karpenter.sh/v0.32/concepts/nodeclasses/#specsubnetselectorterms ?

@ellistarn
Contributor Author

ellistarn commented Dec 5, 2023

Correct -- this is a good point. We don't do this for subnets in different zones.

@martinsmatthews

It would be really good if we could do this, as it would also naturally balance instances across zones, which would be a nice HA feature. We have seen that unless there is a topology spread constraint in our deployments, multiple instances get spun up in the same zone.

@sthapa-ping

This is one of the burning issues that we are currently facing with Karpenter; we discussed this at re:Invent. Checking available IPs across the subnets via the AWS endpoint and scheduling the next instances to the subnet with the most IPs seems straightforward. Is there an approximate release timeline for when we can expect this feature to be added?
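For reference, the AWS endpoint in question is presumably `ec2:DescribeSubnets`, whose response includes an `AvailableIpAddressCount` per subnet. A self-contained sketch of ranking a DescribeSubnets-shaped response (the sample data below stands in for a live API call; this is not Karpenter code):

```python
# Rank subnets from a DescribeSubnets-shaped response by AvailableIpAddressCount,
# most free IPs first. The sample list mimics the "Subnets" key of the response;
# in practice it would come from a live ec2:DescribeSubnets call.

def rank_by_free_ips(subnets):
    return sorted(subnets, key=lambda s: s["AvailableIpAddressCount"], reverse=True)

sample = [
    {"SubnetId": "subnet-1a", "AvailabilityZone": "eu-west-1a", "AvailableIpAddressCount": 3},
    {"SubnetId": "subnet-1b", "AvailabilityZone": "eu-west-1b", "AvailableIpAddressCount": 250},
]
best = rank_by_free_ips(sample)[0]["SubnetId"]  # subnet with the most headroom
```

One caveat with this approach: `AvailableIpAddressCount` is a point-in-time value, so a burst of simultaneous launches could still overcommit a subnet between the query and the launch.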

@ellistarn
Contributor Author

Are you finding that there are 0 remaining IPs and the launch fails, or just a few IPs? Can you provide logs from when this happens? Is the failure at the node level or the pod level?

@martinsmatthews

martinsmatthews commented Feb 12, 2024

Are you finding that there are 0 remaining IPs and the launch fails, or just a few IPs?

No, we were seeing that without pod anti-affinity to force them to be spread across zones, we'd often end up with multiple nodes in one zone and none in the other two. All the pods would then try to spin up there, exhaust the subnet, and get stuck in Pending because the CNI couldn't assign them an IP. Note that this was with lots of small (CPU/memory) pods, relatively small subnets (/26), and pod security groups. We weren't seeing node launches fail.

We're no longer seeing this issue, as we moved to larger subnets and added the anti-affinity, which means the nodes are spread across the zones/subnets more evenly.

I'm happy to recreate this and send some logs if that would be helpful, @ellistarn?

@ellistarn
Contributor Author

@martinsmatthews , have you completely exhausted the ipv4 space? Is it possible to add another subnet with more IPs? Why are you so constrained?

I'm working on an idea for EKS networking and I'd love to chat more over slack.

@martinsmatthews

Hi @ellistarn, sorry, this was just an example to highlight the issue we're discussing; we don't have this problem any more, as we gave this nodepool three /25s and all is fine now.

FWIW, we run a number of different nodepools per cluster, some with quite small numbers of pods. For example, the one we were seeing this issue in runs no more than 40-60 pods at any one time, so 3x /26 was enough IPs. We are not exactly resource constrained, but at the same time we need to think about how much internal IP space we have and allocate it sensibly, as it is a finite resource.

Definitely happy to chat on Slack - will ping you.

@martinsmatthews

Coming back to the use of topologySpreadConstraints to solve this issue and stripe pods over AZs (and thus over subnets): this doesn't work perfectly. I spun up 10 deployments, each with 3 replicas and a topologySpreadConstraints block like the following (plus a large enough resource request to end up with only one pod per node), against a nodepool that had 3 subnets defined, one in each of 3 AZs:

      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: topology-test-{{ count }}

and this gave me 30 nodes, 10 in each AZ - bingo. But then I dropped the replica count to 2, spun up 15 deployments, and got a very uneven spread of nodes:

  • us-west-2a: 19
  • us-west-2b: 7
  • us-west-2c: 14

Obviously this is artificial, but it does again highlight the need for an option to balance nodes across AZs, even if that is not the default. And not just for subnet usage reasons: there is an HA risk for when we lose a zone. It doesn't happen often, but if it did while a production cluster was skewed like this, it wouldn't be pretty.
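One knob worth noting about the test above (an observation on the Kubernetes API, not a guaranteed fix for the skew): `whenUnsatisfiable: ScheduleAnyway` makes the spread best-effort, so the scheduler is free to violate it; `DoNotSchedule` makes it a hard constraint, at the cost of pods staying Pending when a zone can't be used. The same block with the hard setting would look like:

      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule   # hard constraint instead of best-effort
          labelSelector:
            matchLabels:
              app: topology-test-1

Even with the hard constraint, topology spread only counts pods of the matching labelSelector, so it balances each deployment independently rather than balancing node count across AZs overall.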

@Shadowssong

Has anyone come up with a solution to work around this? We have twice run into a situation where Karpenter overloads a single AZ (with two /20 subnets) and both subnets run out of IPs. The skew was quite extreme (200 nodes in one AZ, 50 across the other three), and it seems a burst of spot requests may have put them all in the same AZ, but it still seems odd that Karpenter doesn't attempt any kind of load balancing across AZs. Our current workaround is to define a nodepool per AZ and use the CPU/memory limits to faux-limit the number of nodes and prevent IP exhaustion, but this results in a lot of NodePools and EC2NodeClasses. This only works because we currently use a fairly strict set of instance types, so we know what their max pod count would be. If anyone has come up with a better solution, please share!
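The per-AZ workaround described above can be sketched roughly like this (values and names illustrative, not a tested config): one NodePool per zone, pinned via a zone requirement, with CPU limits sized so the zone's max node count stays under its subnets' IP capacity:

    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: default-us-west-2a        # one NodePool per AZ
    spec:
      limits:
        cpu: "400"                    # sized so max nodes stay under the AZ's subnet IP capacity
      template:
        spec:
          requirements:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["us-west-2a"]
          nodeClassRef:
            group: karpenter.k8s.aws
            kind: EC2NodeClass
            name: default-us-west-2a  # EC2NodeClass selecting only this AZ's subnets

As noted, this only approximates an IP limit: translating a CPU limit into a pod (and therefore IP) ceiling requires a constrained set of instance types with known max pod counts.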

@snieg

snieg commented Sep 25, 2024

  We have twice run into a situation where Karpenter overloads a single AZ (with two /20 subnets) and both subnets run out of IP's.

Same issue here.

After migrating to karpenter, it started favoring one zone (eu-west-1c), despite having over 1000 addresses available in other subnets, which ended up with us running out of addresses in eu-west-1c.

    kubectl get nodes -L worker-type,group,topology.kubernetes.io/zone --sort-by=.metadata.creationTimestamp --no-headers -l group=default | awk '{print $8}' | sort | uniq -c
       7 eu-west-1a
       7 eu-west-1b
      25 eu-west-1c

A different cluster, still running cluster-autoscaler:

      18 eu-west-1a
      17 eu-west-1b
      18 eu-west-1c
