consider setting GOMAXPROCS in entrypoint / bootstrap.py #29472

Open
BenTheElder opened this issue May 9, 2023 · 22 comments
Labels
  • area/prow/entrypoint: Issues or PRs related to prow's entrypoint component
  • area/prow: Issues or PRs related to prow
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • sig/testing: Categorizes an issue or PR as relevant to SIG Testing.

Comments

@BenTheElder
Member

What would you like to be added:

We should consider the trade-offs in defaulting GOMAXPROCS in entrypoint. See https://kubernetes.slack.com/archives/CCK68P2Q2/p1683658558671579

Why is this needed:

To deal with golang/go#33803 ahead of Go setting it in the runtime itself.

Downsides: this can hide the fact that the projects/tools themselves are not handling this, and Go is tentatively going to do this in the standard runtime anyhow ...

Projects can currently instead use https://github.com/uber-go/automaxprocs themselves until go is fixed. Kubernetes unit tests do this now.

I kinda lean towards "we should only do this if it's fixing widespread issues" and would instead wait on golang/go#33803 for a standardized approach.

But setting it today might help resolve flakiness, in particular in jobs that have CPU limits enabled. We have evidence of this with Kubernetes unit tests previously, though those are a particularly resource-heavy case.
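
For reference, a minimal sketch of the automaxprocs mitigation mentioned above, assuming a plain Go binary (the `main` package below is purely illustrative):

```go
package main

import (
	"fmt"
	"runtime"

	// The blank import adjusts GOMAXPROCS at init time based on the
	// container's cgroup CPU quota, before any other code runs.
	_ "go.uber.org/automaxprocs"
)

func main() {
	// With a 2-CPU limit on an 8-core node this reports 2, not 8.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```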

@BenTheElder BenTheElder added area/prow Issues or PRs related to prow kind/feature Categorizes issue or PR as related to a new feature. area/prow/entrypoint Issues or PRs related to prow's entrypoint component labels May 9, 2023
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 9, 2023
@dims
Member

dims commented May 9, 2023

+1 to "we should only do this if it's fixing widespread issues"

+1 to "I'd just go ahead and migrate jobs and worry about this if we actually see issues, it's cheap to migrate a job back to the other cluster, since it's just one field in the config" Quote from @BenTheElder at:
https://kubernetes.slack.com/archives/CCK68P2Q2/p1683659191867329?thread_ts=1683658558.671579&cid=CCK68P2Q2

@sftim
Contributor

sftim commented May 9, 2023

For whole-number CPU limits, is CPU Manager static assignment an option?

I presume the job is either CPU bound or close enough to it that static assignment would be OK. The Pod would have to be Guaranteed, though.

@BenTheElder
Member Author

For whole-number CPU limits, is CPU Manager static assignment an option?

Aside from non-whole limits, it is possible to enable this on the GKE clusters at least.

I presume the job is either CPU bound or close enough to it that static assignment would be OK. The Pod would have to be Guaranteed, though.

Prow pods running on k8s infra are guaranteed QOS, enforced by a presubmit.

However, we'd have to lower the assignment, since some CPU is reserved for system overhead and we're requesting that remaining partial core in some cases. (This is also a serious problem with using a very different machine size on the EKS cluster vs the existing GKE clusters, which all have the same machine size.) We push our schedulable bounds pretty hard.

I don't think this problem is actually widespread enough to warrant changes like this, and conversely, if it were, then we'd want go + prow to work well without requiring highly privileged node configuration that may not be exposed in all cluster tools.

@kumiDa

kumiDa commented Jun 3, 2023

/sig testing

@k8s-ci-robot k8s-ci-robot added sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jun 3, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 20, 2024
@BenTheElder
Member Author

/remove-lifecycle rotten

This still hasn't changed in the go runtime and is still worth considering.
We could probably cut down on a lot of flakes across CI.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 21, 2024
@BenTheElder
Member Author

NOTE: golang/go#33803 hasn't moved.

cc @pohly @aojea

@pohly
Contributor

pohly commented Apr 16, 2024

No strong opinion either way from me here.

@aojea
Member

aojea commented Apr 18, 2024

magic settings always backfire :/

We could probably cut down on a lot of flakes across CI.

or we don't and create new ones

@BenTheElder
Member Author

BenTheElder commented Apr 18, 2024

magic settings always backfire :/

Eh? We also control things like GOPROXY / GOSUMDB, the user ID, and lots of other ambient values / environment settings.

or we don't and create new ones

How do you think this will introduce flakes?

The fundamental problem right now is that the Go runtime is over-provisioned versus the CPU assigned to the workload (in this case the e2e tests). A well-known mitigation, which many projects use, is to either manually set GOMAXPROCS to match your environment configuration or use the uber library to automatically read cgroups and set it.

We're already relying on this fix ourselves in multiple places, including kubernetes/kubernetes unit tests / builds (the makefiles set it with the uber library) and registry.k8s.io (we manually set it to match the Cloud Run environment).
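
A minimal sketch of that manual variant, assuming a fixed, known CPU allocation (the value 2 and the `allocatedCPUs` name are illustrative, not the actual registry.k8s.io configuration):

```go
package main

import (
	"os"
	"runtime"
)

// allocatedCPUs is a hypothetical constant matching the CPUs the deployment
// actually grants this workload.
const allocatedCPUs = 2

func main() {
	// The Go runtime already honors a GOMAXPROCS environment variable, so
	// only pin the value when the operator hasn't set one explicitly.
	if os.Getenv("GOMAXPROCS") == "" {
		runtime.GOMAXPROCS(allocatedCPUs)
	}
	// ... rest of the program
}
```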

@aojea
Member

aojea commented Apr 18, 2024

GOMAXPROCS sets the maximum number of CPUs that can be executing simultaneously and returns the previous setting

magic, as in changing the physical environment perceived by the runtime :)

How do you think this will introduce flakes?

I don't know, but my experience with this kind of change at this low level is that you always have to expect the unexpected. If we do it in specific places like the integration tests, those seem like a controlled environment, and if something fails you can always trace it back ... enabling something like this globally can make it harder to pinpoint if it causes unexpected problems.

@BenTheElder
Member Author

magic as changing the physical environment perceived by the runtime :)

It's not "magical": we're telling Go how many cores it should assume for scheduling; that's why the environment variable exists. Users can also override this.

I don't know, but my experience with this kind of change at this low level is that you always have to expect the unexpected. If we do it in specific places like the integration tests, those seem like a controlled environment, and if something fails you can always trace it back ... enabling something like this globally can make it harder to pinpoint if it causes unexpected problems.

Anything using Go is subject to flakiness due to Go assuming it can schedule onto all N physical cores of the host when the container we're running in is limited to M < N cores; this is why we already set it for k/k.

However, we don't set it for other subprojects or for other CI tests that are not using the k/k shell in any way.


So, to clarify, we're saying that prow should, when running CI jobs in Kubernetes pods, ensure that GOMAXPROCS is <= the CPU limits on the test container.

All of the environment details are logged in the job artifacts.

Without setting this knob you will not reproduce these flakes locally, because they happen when Go detects the full CPU count of the host while the job has been constrained, which causes thrashing in Go's userspace scheduler. See golang/go#33803 and https://github.com/uber-go/automaxprocs

So Go sees "I'm on an 8 core VM" while we run the job with CPU limit = 2 cores. It doesn't detect the 2 core CPU throttling limit and makes a bad assumption about scheduling. This impacts any test: integration, unit, whatever.

Then you run the test locally and you have 32 cores and no CPU limits set, so you can't reproduce this flakiness. You would have to run in a container AND actually set CPU limits similar to Kubernetes AND be on a similarly sized host.
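
To make the mismatch concrete, here's a hedged diagnostic sketch (assuming cgroup v2 mounted at /sys/fs/cgroup; this is not something prow runs today). In a 2-CPU-limited pod on an 8-core node it reports GOMAXPROCS = 8 against a 2.00-core limit; on an unconstrained laptop there is no limit to disagree with:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

func main() {
	fmt.Println("NumCPU (host cores visible):", runtime.NumCPU())
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// cgroup v2 cpu.max holds "<quota> <period>", or "max <period>" when unlimited.
	data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
	if err != nil {
		fmt.Println("no cgroup v2 cpu.max readable:", err)
		return
	}
	fields := strings.Fields(string(data))
	if len(fields) == 2 && fields[0] != "max" {
		quota, _ := strconv.ParseFloat(fields[0], 64)
		period, _ := strconv.ParseFloat(fields[1], 64)
		fmt.Printf("cgroup CPU limit: %.2f cores\n", quota/period)
	} else {
		fmt.Println("no CPU limit applied")
	}
}
```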

Whereas if we do set GOMAXPROCS in prow: the number of cores available will never be identical to your local machine, but it CAN be the case that Go uses the correct number of cores both in local testing and in CI, and you should get comparable behavior.

Again: for the jobs you typically debug we already do this, but for some other random subproject their tests are probably flaky without this being set, and that is worsened by SIG K8s Infra adopting more clusters with even larger hosts but the same or lower CPU limits on the jobs.
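
A minimal sketch of what that could look like if it lands in entrypoint, which wraps the test command: default GOMAXPROCS in the child process's environment to the container's CPU limit, but never override a value the job already sets. `limitCPUs` and `wrapTestCommand` are hypothetical names; the limit would come from the pod spec or cgroup quota (for example as derived above):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// wrapTestCommand builds the test process with GOMAXPROCS defaulted to the
// container's CPU limit. Jobs that already export GOMAXPROCS win.
func wrapTestCommand(limitCPUs int, name string, args ...string) *exec.Cmd {
	cmd := exec.Command(name, args...)
	cmd.Env = os.Environ()
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if limitCPUs > 0 && os.Getenv("GOMAXPROCS") == "" {
		cmd.Env = append(cmd.Env, fmt.Sprintf("GOMAXPROCS=%d", limitCPUs))
	}
	return cmd
}

func main() {
	// Hypothetical usage: run the job's test command with a 2-CPU default.
	cmd := wrapTestCommand(2, "go", "test", "./...")
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```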

@aojea
Member

aojea commented Apr 18, 2024

OK, you convinced me. How do we track the rollout of this change to make sure we don't hit some weird edge case?

@BenTheElder
Member Author

If we do this in entrypoint, it will be picked up in the next prow bump and we should watch that and announce in #sig-testing / #release-ci-signal / #testing-ops at minimum.

@aojea
Member

aojea commented Apr 18, 2024

ok, then it is better to do it early in the release cycle or after the development cycle, so we have better signal

@BenTheElder
Member Author

/assign

This is low-priority but something we should do I think.
It should not affect k/k CI, since the relevant jobs are already running the in-tree auto-max-procs, but we'll see.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 4, 2024
@BenTheElder BenTheElder removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 6, 2024
@BenTheElder
Member Author

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 18, 2024