
Alertmanager should be cgroup aware #3735

Open
uhthomas opened this issue Feb 25, 2024 · 0 comments · May be fixed by #3736

uhthomas commented Feb 25, 2024

What did you do?

I recently changed the CPU in my server from an Intel 13600K with 20 threads to an AMD EPYC 7763 with 128 threads. I run Alertmanager in Kubernetes with a CPU limit of 100m.

What did you expect to see?

No change in performance.

What did you see instead? Under which circumstances?

Go is not cgroup aware, so GOMAXPROCS defaults to the host's thread count rather than the container's CPU quota. This means Alertmanager is being throttled in containerised environments with CPU limits.

The first half of the time series is with the 13600k, and the second half is with the EPYC 7763.

[Screenshot: CPU throttling time series, showing a sharp increase after the CPU change]

The automaxprocs library can set GOMAXPROCS from the cgroup CPU quota automatically.
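
For reference, this is a minimal sketch of how a Go binary can opt in to go.uber.org/automaxprocs; the logger wiring and log lines here are illustrative, not copied from Alertmanager's main().

```go
package main

import (
	"log"
	"runtime"

	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// maxprocs.Set inspects the container's cgroup CPU quota (e.g. a
	// Kubernetes limit of 100m) and lowers GOMAXPROCS to match it,
	// instead of leaving it at the host's thread count.
	undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
	if err != nil {
		log.Printf("failed to set GOMAXPROCS from cgroup quota: %v", err)
	}
	defer undo()

	log.Printf("GOMAXPROCS=%d", runtime.GOMAXPROCS(0))
}
```

With a 100m limit, automaxprocs never goes below 1, so GOMAXPROCS=1 rather than 128 on the EPYC 7763.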

Environment

  • System information:
Linux 6.1.69-talos x86_64
  • Alertmanager version:
alertmanager, version 0.26.0 (branch: main, revision: 7a3c189315da7b8ed2d5e05f122c4cf4873c1379)
  build user:       root@9178099891ea
  build date:       20240123-14:29:39
  go version:       go1.21.6
  platform:         linux/amd64
  tags:             netgo
  • Prometheus version:

N/A

  • Alertmanager configuration file:

N/A

  • Prometheus configuration file:

N/A

  • Logs:

N/A

uhthomas added a commit to uhthomas/alertmanager that referenced this issue Feb 25, 2024
Go is not cgroup aware and by default will set GOMAXPROCS to the number
of available threads, regardless of whether it is within the allocated
quota. This behaviour causes a high amount of CPU throttling and
degrades application performance.

Fixes: prometheus#3735
uhthomas linked a pull request Feb 25, 2024 that will close this issue
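
To illustrate what "cgroup aware" means on cgroup v2, the sketch below reads the CPU quota and converts it into a GOMAXPROCS value. It assumes the unified hierarchy is mounted at /sys/fs/cgroup; the linked pull request relies on automaxprocs rather than this code.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// quotaProcs reads /sys/fs/cgroup/cpu.max ("<quota> <period>" or
// "max <period>") and converts the CPU quota into a GOMAXPROCS value,
// flooring the result and never going below 1.
func quotaProcs() (int, error) {
	b, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
	if err != nil {
		return 0, err
	}
	fields := strings.Fields(string(b))
	if len(fields) != 2 || fields[0] == "max" {
		return 0, fmt.Errorf("no CPU quota set")
	}
	quota, err := strconv.ParseFloat(fields[0], 64)
	if err != nil {
		return 0, err
	}
	period, err := strconv.ParseFloat(fields[1], 64)
	if err != nil {
		return 0, err
	}
	procs := int(quota / period) // a 100m limit yields 0.1, floored below
	if procs < 1 {
		procs = 1
	}
	return procs, nil
}

func main() {
	procs, err := quotaProcs()
	if err != nil {
		fmt.Println("no quota detected:", err)
		return
	}
	fmt.Println("quota-derived GOMAXPROCS:", procs)
}
```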