Very high memory consumption #133
Comments
That's doing only one collection at a time?
-------- Original Message --------
On Aug 13, 2019, 9:28 AM, karlism wrote:
Hello,
vmware_exporter seems to be consuming a lot of RAM on our systems. Here's an example of memory consumption on the VM that runs vmware_exporter; the sharp drops after the spikes are when I restart vmware_exporter manually or systemd does it for me:
[image](https://user-images.githubusercontent.com/19568035/62953993-e4342700-bdee-11e9-96a0-382227da7149.png)
vmware exporter is installed from pypi:
# pip3 list | grep vmware-exporter
vmware-exporter (0.9.8)
Running on the latest CentOS version:
# uname -a
Linux hostname 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# uname -r
3.10.0-957.21.3.el7.x86_64
Metrics are collected from a single vCenter instance that manages 26 ESXi hosts with about 550 virtual machines.
Please let me know if I can provide any additional information.
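Memory growth like this is easiest to confirm by sampling the exporter's resident set size over time. A minimal sketch, assuming a Linux host where `/proc` is available (the function name is mine, not part of vmware_exporter):

```python
import re

def rss_kib(pid):
    """Return a process's resident set size in KiB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        match = re.search(r"^VmRSS:\s+(\d+)\s+kB", f.read(), re.MULTILINE)
    return int(match.group(1)) if match else None
```

Polling this for the exporter's PID every minute and plotting the result gives the same sawtooth seen in the graph above, without needing node-level metrics.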
@pryorda, metrics are scraped every 90 seconds from two Prometheus servers.
I've just noticed that we had a very old version of Python
@karlism Thanks for the update. I was going to ask if you could run this in a container and let me know.
Hello, Unfortunately it seems that the problem has gone away on its own, despite the fact that nothing has been changed on the server: So I'd say that it is safe to remove the dependency on a specific prometheus client library version here: As for the original memory consumption issue, let's leave this ticket open for another week and see if the problem returns. I will report as soon as I have any updates on this.
Are you running in a container, or installed at the OS level?
@pryorda, not a container; it's a VM running CentOS 7.
The issue has recurred:
A couple of weeks ago the ESXi hosts and vCenter were updated to 6.7U3, to rule out the possibility that an old software version on that side was causing the issue.
I've also noticed errors in the vmware exporter logs. This is after a service restart, and apart from these errors in the logs, the exporter seems to be working fine:
That error normally appears when the exporter tries to write to a socket that has already been closed: usually vmware was slow to respond, so Prometheus timed out its connection to the exporter. A longer timeout on the Prometheus side will help with this, but you might still get them if the vmware API has a slow blip.
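The failure mode described above is easy to reproduce in isolation: once the scraping side closes its connection, further writes from the serving side fail. A minimal sketch with plain sockets (no exporter code involved), standing in for Prometheus and the exporter:

```python
import socket
import time

# "Prometheus" connects for a scrape, then gives up (scrape timeout).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
prometheus = socket.create_connection(server.getsockname())
exporter_side, _ = server.accept()

prometheus.close()   # scrape timeout: Prometheus closes its end
time.sleep(0.2)      # let the close propagate

# The "exporter", finally done talking to vCenter, now tries to send its response.
saw_error = None
try:
    for _ in range(20):
        exporter_side.sendall(b"x" * 65536)
        time.sleep(0.05)
except (BrokenPipeError, ConnectionResetError) as exc:
    saw_error = exc   # this is the error that lands in the exporter's logs
```

The first write after the close may even succeed (it only fills the kernel buffer); the error surfaces on a subsequent write, which is why the log lines appear some time after the scrape actually timed out.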
@Jc2k, you're right; the scrape timeout for that job is set to 30 seconds, and scrape duration averages around 20 seconds for that environment:
Did some additional testing regarding the errors in the log files.
Also disabled
While running the curl query loop, whenever the scrape time reached about 10 seconds, we would get errors in
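A curl loop like the one described can be replicated in Python to time each scrape. A small sketch; the URL in the usage comment is an assumption (vmware_exporter listens on port 9272 by default, but adjust to your setup):

```python
import time
import urllib.request

def time_scrape(url, timeout=30):
    """Fetch a metrics endpoint once; return (elapsed seconds, bytes read)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
    return time.monotonic() - start, len(body)

# Hypothetical usage against a local exporter:
# for _ in range(10):
#     elapsed, size = time_scrape("http://localhost:9272/metrics")
#     print(f"{elapsed:.2f}s for {size} bytes")
```

Running this alongside the real Prometheus scrapes makes it easy to see at what duration the timeout errors start appearing in the logs.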
To make sure I understand the issue:
If this is the case, we might be able to look at setting higher timeouts in the connection. However, how many VMs/hosts are you scraping? We average about 1-3 s per scrape with 30 hosts and 800 VMs. I'm also wondering if your vCenter instance is undersized for your current environment.
@pryorda, there are currently two issues:
As for the vCenter, it currently has 26 ESXi hosts across 6 different sites, with 587 VMs running across them. One thing to note is that some of the datacenters are located quite far away, with latency reaching 180 ms for some of them. We also have a lab vCenter instance, which is also scraped by
Can you please point me to where I can set higher timeouts in
Thanks!
I think we will have to do a code fix to set the timeout.
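One blunt way such a fix can be sketched is a process-wide socket timeout; this is a hypothetical illustration, not the actual change made in vmware_exporter, which may plumb a timeout through its connection setup instead:

```python
import socket

# Hypothetical sketch: cap every blocking socket operation in the process.
# pyVmomi's SOAP calls run over ordinary sockets, so this bounds how long a
# slow or distant vCenter can stall a single collection.
socket.setdefaulttimeout(60.0)
```

A per-connection timeout is usually preferable to a global default, since the latter also affects unrelated sockets (such as the exporter's own HTTP listener).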