Memory leak in simpleclient_vertx4 #861
Interesting, thanks for reporting. We recently had a similar performance issue. Which Java version are you using?
Thanks for your response @fstab. We are using Java 8. The byte[] and char[] allocations are taking all the memory. The issue is in the following line, which I think converts each metric to a char array before sending the response:

```java
public void write(char[] cbuf, int off, int len) throws IOException {
```
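For context, the buffering pattern in question looks roughly like the sketch below (a paraphrase for illustration, not the library's actual source): a Writer implementation copies every chunk it receives into an in-memory Vert.x Buffer, so the whole exposition exists as char[] chunks plus a byte[] copy before a single byte reaches the client.

```java
import io.vertx.core.buffer.Buffer;
import java.io.Writer;

// Paraphrased sketch of the buffering approach; names are illustrative.
class BufferWriter extends Writer {
    final Buffer buffer = Buffer.buffer();

    @Override
    public void write(char[] cbuf, int off, int len) {
        // new String(...) copies the chars; appendString encodes them to UTF-8 bytes,
        // so each scrape allocates the full payload at least twice
        buffer.appendString(new String(cbuf, off, len));
    }

    @Override public void flush() { }
    @Override public void close() { }
}
```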
@jyotipatel How big is your metrics response in bytes? How many simultaneous scrapes are occurring?
I just checked the response size: it's around 8 TB, and scrapes occur every 5 seconds. The size can be reduced to 4 TB, as I just found an issue in the metrics generation logic. But do we know whether at least 4 TB can be supported?
@jyotipatel I feel this isn't a memory leak, but a side effect of the fact that...
@fstab thoughts?
I can try reducing the size further for point 1 and also increase the scrape interval for point 3. What about the 2nd point? Any way to handle this? @dhoard
For point 2, you could write your own handler (see the sketch below). Note: the response is buffered to prevent a partial response from being returned if there is an exception during the scrape/response generation.
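A minimal streaming sketch, assuming you accept the trade-off that an exception mid-scrape can no longer be turned into a clean 500 (the class name and structure here are mine, not from the library):

```java
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.common.TextFormat;
import io.vertx.core.Handler;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.ext.web.RoutingContext;

import java.io.IOException;
import java.io.Writer;

// Streams the text exposition to the response in chunks instead of
// buffering the entire payload in memory first.
public class StreamingMetricsHandler implements Handler<RoutingContext> {
    @Override
    public void handle(RoutingContext ctx) {
        HttpServerResponse response = ctx.response();
        response.setChunked(true); // no Content-Length, so chunks go out as produced
        response.putHeader("Content-Type", TextFormat.CONTENT_TYPE_004);
        Writer writer = new Writer() {
            @Override
            public void write(char[] cbuf, int off, int len) {
                // forward each chunk straight to the socket instead of accumulating it
                response.write(new String(cbuf, off, len));
            }
            @Override public void flush() { }
            @Override public void close() { }
        };
        try {
            TextFormat.write004(writer,
                    CollectorRegistry.defaultRegistry.metricFamilySamples());
        } catch (IOException e) {
            // headers (200) are already sent; aborting the connection is all we can do
            ctx.request().connection().close();
            return;
        }
        response.end();
    }
}
```

It registers exactly like the stock handler, e.g. `proxyRouter.route("/metrics").handler(new StreamingMetricsHandler());`. In practice you might also wrap the Writer in a `java.io.BufferedWriter` with a small fixed buffer so writes are batched without ever holding the whole payload.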
I revisited the size: it is not in TB, it's 10 MB of metrics. Sorry for the confusion.
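For a rough sense of scale (my arithmetic, assuming one char[] copy in UTF-16 plus one byte[] copy in UTF-8): a 10 MB exposition means roughly 20 MB of char[] and 10 MB of byte[] allocated per scrape, i.e. on the order of 30 MB of short-lived garbage every 5 seconds per scraper, before simultaneous scrapes multiply it.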
I have a similar observation. Details:
The problem is not reliably reproducible and usually recovers on application restart. The metric size is no more than a few MB. We recently migrated from Prometheus client 0.2.0 to 0.16.0. @fstab / @dhoard any thoughts on what could be causing this behaviour? @jyotipatel were you able to find any workaround?
Hey, thank you all for the detailed report. Just FYI: we are currently busy preparing the 1.0.0 release, which is due next week. I might put this on hold for two weeks or so until 1.0.0 is done, but I'll get back to this as soon as the initial 1.0.0 release is out.
@fstab Thank you for the response. I fully understand that you are occupied preparing the 1.0.0 release. However, I would appreciate it if you could take a quick look and suggest a potential workaround, or recommendations for reproducing the issue on demand, that we could try in the meantime.
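One way to try to reproduce the memory pressure on demand (my suggestion, not something proposed in the thread; assumes Java 11+ for java.net.http, and the URL is a placeholder): fire many simultaneous scrapes, so that each in-flight request holds its own fully buffered copy of the exposition.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.stream.IntStream;

// Hypothetical load generator: 50 concurrent scrapes in a loop while you watch
// young-generation allocation (e.g. with -verbose:gc or a profiler).
public class ScrapeLoad {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/metrics")) // placeholder endpoint
                .build();
        while (true) {
            CompletableFuture<?>[] inFlight = IntStream.range(0, 50)
                    .mapToObj(i -> client.sendAsync(request, HttpResponse.BodyHandlers.discarding()))
                    .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(inFlight).join();
        }
    }
}
```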
We are observing a continuous increase in heap memory due to MetricsHandler.java. This causes frequent GC triggers as the number of metrics grows. Attaching the heap allocation screenshot.
Following is the code snippet registering the handler on the Vert.x router:

```java
proxyRouter.route("/metrics").handler(new MetricsHandler());
```
There are around 23,000 metrics in this API response. The heap size allocated to the young generation is 5 GB.