Resource limits forbid running of the syncer #1746

Open
Smidra opened this issue May 7, 2024 · 7 comments

Comments


Smidra commented May 7, 2024

What happened?

  • I downloaded my values.yaml from GitHub.
  • I enabled quotas at "policies: > resourceQuota: > enabled: true" (see the excerpt after this list).
  • I created the vCluster with Helm as follows:
    helm upgrade --install vcluster-r05 vcluster --version 0.20.0-beta.2 --values vcluster.yaml --repo https://charts.loft.sh --namespace vcluster-r05 --create-namespace --repository-config='' --wait --wait-for-jobs
  • The syncer is not created. The StatefulSet has 0/1 pods ready. The events state:
    create Pod vcluster-r05-0 in StatefulSet vcluster-r05 failed error: pods "vcluster-r05-0" is forbidden: failed quota: vc-vcluster-r05: must specify limits.cpu for: syncer
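
For clarity, here is the quota-related excerpt of my vcluster.yaml (a sketch; the other keys are left as downloaded):

    # vcluster.yaml - quota-related excerpt, other keys left as downloaded
    policies:
      resourceQuota:
        enabled: true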

What did you expect to happen?

  • The vCluster is created without a problem.

How can we reproduce it (as minimally and precisely as possible)?

See above.

Anything else we need to know?

It can be a good idea to set CPU requests but not CPU limits. With vCluster, we do that in the syncer resources definition (controlPlane: > statefulSet: > resources: > limits:), as sketched below. The values.yaml should also offer a way to disable CPU limits.

What do you think about making the CPU unlimited by default?
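
A sketch of what I mean, assuming the syncer resources block follows the usual Kubernetes requests/limits shape (the concrete numbers are only illustrative):

    # sketch: keep CPU/memory requests for the syncer, drop only the CPU limit
    controlPlane:
      statefulSet:
        resources:
          requests:
            cpu: 200m       # illustrative values
            memory: 256Mi
          limits:
            memory: 2Gi
            # no limits.cpu here, so the syncer is never CPU-throttled,
            # but (as this issue shows) a ResourceQuota then rejects the pod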

Host cluster Kubernetes version

$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6

Host cluster Kubernetes distribution

default - k8s

vcluster version

$ vcluster --version
# paste output here

Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

Default - k8s

OS and Arch

OS: Ubuntu
Arch: x86
Smidra added the kind/bug label May 7, 2024
deniseschannon added the bug label May 8, 2024 (with Linear)
Contributor

Thanks for reporting this. I will discuss it within the team.

@rohantmp
Contributor

This is a Kubernetes limitation, not a vCluster one:

If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use the LimitRanger admission controller to force defaults for pods that make no compute resource requirements.

( https://kubernetes.io/docs/concepts/policy/resource-quotas/ )
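
For reference, a plain Kubernetes LimitRange in the host namespace is one way to inject such defaults; this is a generic example (the name and numbers are illustrative, not what vCluster generates):

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: vc-defaults          # illustrative name
      namespace: vcluster-r05
    spec:
      limits:
        - type: Container
          default:               # used as limits for containers that set none
            cpu: "1"
            memory: 512Mi
          defaultRequest:        # used as requests for containers that set none
            cpu: 100m
            memory: 128Mi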

@rohantmp
Contributor

rohantmp commented May 10, 2024

Ah, it's an issue that we don't have a default CPU limit, so enabling resource quotas without writing in a cpu limit doesn't work.

@rohantmp
Contributor

Like you said, I don't think it's a great idea to have a default CPU limit, so I think this is better left alone for now, but we will look into automatically adding one if resourceQuota is enabled.

@Smidra
Author

Smidra commented May 10, 2024

If the CPU limit were enabled only when the resource quota is enabled, it would be a wonderful solution. Good idea, @rohantmp

@FabianKramm
Member

Hey @Smidra! Is there a reason you enabled the resource quota but not the limit range via policies.limitRange.enabled? The limit range should set the missing CPU limit automatically.
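
For example, combining the two keys mentioned in this thread (a minimal sketch; everything else stays at the chart defaults):

    policies:
      resourceQuota:
        enabled: true
      limitRange:
        enabled: true    # supplies the default limits.cpu the quota check needs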

The linear bot added the kind/question label and removed the kind/bug label May 14, 2024
@Smidra
Author

Smidra commented May 14, 2024

Hello @FabianKramm, you are correct that enabling LimitRange fixes this problem. 👍

The reason for my complaint is that the limit range is disabled by default and there are no syncer CPU limits by default. If you just enable quotas, then the vCluster will "mysteriously" hang because its StatefulSet gets stuck.

In my opinion this is a strange default behavior. If you decide not to change it, we should at least address it in a comment in the values file or in the documentation.
