memory leak when vcenter not reachable #282
Comments
@RyanW8 What version?
I had a lot of issues where the exporter would hit race conditions: scrape_timeout would cause the exporter to time out, but the connections seemed to linger, causing memory to rise. Changing the thread count for the Twisted reactor made it much more stable in our environment, and the exporter now rarely exceeds the scrape_timeout (118s). I'm running the exporter in k8s, if that makes any difference. I changed suggestedThreadPoolSize from 25 to 1, and that actually lowered the scrape duration (see the sketch below).
Our environment with approx. 200 hosts and ~2,000 VMs takes around 30s to scrape (including datastores).
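A minimal sketch of what limiting the reactor thread pool looks like; this is not the exporter's actual startup code, and the `HealthzResource` handler and port 9272 are stand-ins for illustration only:

```python
# Sketch: cap the Twisted reactor thread pool before starting the web server,
# analogous to lowering suggestedThreadPoolSize from 25 to 1.
from twisted.internet import reactor
from twisted.web import resource, server


class HealthzResource(resource.Resource):
    """Trivial resource standing in for the exporter's /metrics handler."""
    isLeaf = True

    def render_GET(self, request):
        return b"ok\n"


if __name__ == "__main__":
    # Fewer worker threads means fewer concurrent vCenter sessions can pile
    # up when scrapes time out; a size of 1 serializes the blocking work.
    reactor.suggestThreadPoolSize(1)
    reactor.listenTCP(9272, server.Site(HealthzResource()))
    reactor.run()
```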
@pryorda should we return an empty page when vCenter is not reachable?
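An illustrative sketch of that idea, not the exporter's actual handler; `collect_vmware_metrics()` is a hypothetical placeholder for the real vCenter collection:

```python
from twisted.web import resource


def collect_vmware_metrics():
    """Hypothetical placeholder for the real collection path; here it always
    fails, which is the 'vCenter unreachable' case under discussion."""
    raise ConnectionError("vCenter unreachable")


class MetricsResource(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        request.setHeader(b"Content-Type", b"text/plain; version=0.0.4")
        try:
            body = collect_vmware_metrics()
        except ConnectionError:
            # Answer with 503 and an empty body so Prometheus marks the
            # scrape as failed (up == 0) instead of the request hanging
            # and holding connection state inside the exporter.
            request.setResponseCode(503)
            return b""
        return body
```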
Thanks to the exporter monitoring feature introduced in #128, I noticed a memory leak when the exporter can't reach the vCenter. The leaked memory was recovered once the connection resumed (garbage collection).
This occurred while experimenting with vmware_exporter against an updated version of the Prometheus client library (version 0.11.0).
The leak still has to be confirmed with the current setup (vmware_exporter with client_python 0.0.19).
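For context on how such a leak shows up, a small sketch (not tied to the exporter's own code): on Linux the Prometheus Python client's default registry includes a process collector, so an exporter built on it exposes `process_resident_memory_bytes`; watching that series while vCenter is unreachable, and after the connection resumes, is enough to see the growth and the recovery.

```python
# Print the exporter's own memory metric exactly as Prometheus would scrape it.
from prometheus_client import REGISTRY, generate_latest

for line in generate_latest(REGISTRY).decode().splitlines():
    if line.startswith("process_resident_memory_bytes"):
        print(line)
```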