Alpine (musl) based haproxy ingress images performance issue #541
I tested it on my Raspberry Pi but did not encounter such a huge performance difference. What TLS ciphers were used in the graph above?
In both cases the same haproxy config was used with TLS options:
Both HAProxy itself and the upstream (a single one in the test above) use 4096-bit TLS certificates (the annotation haproxy.org/server-ssl: "true" is configured in the ingress). K8s nodes: KVM VMs (Ubuntu 20.04.4 LTS, 5.4.0-109-generic, k8s version v1.23.4). PODs:
Haproxy: nbthread: "8". IMHO TLS handshakes should not matter much here, since keepalive connections are used on both ends: client<->haproxy AND haproxy<->upstream.
@amorozkin I am reasonably sure this is not related to Alpine MUSL at all, but related to OpenSSL 3.0/3.1 mutex contention issues. I suspect your Glibc-based distribution is using OpenSSL 1.1.1, isn't it?
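One way to settle which hypothesis applies is to check which OpenSSL each image actually links. A command sketch (the image names/tags are illustrative, not the exact ones under test; `haproxy -vv` does report the OpenSSL build version):

```shell
# Print build info, including the linked OpenSSL version, for each image.
# Adjust image names/tags to the glibc- and musl-based images being compared.
docker run --rm --entrypoint haproxy haproxytech/kubernetes-ingress:latest -vv | grep -i openssl
```

If the glibc image reports OpenSSL 1.1.1 and the Alpine image reports 3.x, the lock-contention explanation becomes the prime suspect rather than musl itself.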
Could you please consider adding an option to use non-alpine based haproxy ingress images?
Alpine's pthread implementation has a drastic CPU overhead (internals/details can be found here: https://stackoverflow.com/questions/73807754/how-one-pthread-waits-for-another-to-finish-via-futex-in-linux/73813907#73813907 ).
Here are two strace statistics samples for the same load profile (25K RPS via 3 haproxy ingress pods) over equal periods of time (about 1 minute):
1. GLIBC-based haproxy:
2. MUSL-based haproxy:
As you can see, the latter (the MUSL-based one) spends 60+% of its time in futex (FUTEX_WAKE_PRIVATE, to be exact) system calls.
As a result, CPU utilisation is more than twice as high for the same load profile, accompanied by spikes in the upstream's session counts:
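For anyone wanting to reproduce the comparison, the summaries above can be collected with strace's counter mode. A sketch, assuming a running haproxy worker and ptrace permission in the pod (PID selection and the 60-second window are illustrative):

```shell
# Attach to a haproxy worker for ~60 s and summarise syscall time.
# -c: per-syscall counters/time, -f: follow all threads (nbthread > 1).
PID=$(pidof haproxy | awk '{print $1}')
timeout 60 strace -c -f -p "$PID" 2> strace-summary.txt
# The "% time" column shows the fraction spent in futex calls.
grep -E '% time|futex' strace-summary.txt
```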