unit:1.34.1-php8.4 consumes more and more memory over time (possibly due to one unitd process) #1576
Comments
I think the process in question is unit: router.
Hello Team, I would sincerely appreciate some guidance if you can offer it. I have made progress and went through all the PHP scripts to make sure they are not leaking. Finally, adding a setting in the application section of config.json seems to make a difference, as it keeps restarting the PHP processes and brings memory usage down. Though, when running overnight, it still keeps climbing, albeit at a much smaller rate. Is there anything else I can try in this regard?

Additionally, I am getting many of the following events in the logs:

phpApp | 2025/03/27 18:28:46 [info] 14#22 *2596 writev(43, 2) failed (32: Broken pipe)

Interestingly, I do not see them when running in local Docker; this only happens when testing in Docker on AWS EC2. Looking into the EB logs, I can see the following.

/var/log/docker:
Mar 27 17:55:13 ip-172-31-68-85 docker[1965]: 2025/03/27 17:55:13 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)

/var/log/eb-docker/containers/eb-current-app/eb-stdouterr.log:
phpApp | 2025/03/26 23:48:18 [notice] 7#7 process 15 exited with code 0

Thanks again!
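(The exact snippet is not shown above; for illustration, a per-application process limit in Unit's config.json might look roughly like the following, with the application name and values being assumptions rather than the actual settings used.)

```json
{
    "applications": {
        "phpApp": {
            "type": "php",
            "root": "/var/www/html",
            "processes": {
                "max": 10,
                "spare": 2,
                "idle_timeout": 20
            }
        }
    }
}
```

With a processes object like this, Unit terminates application processes that sit idle beyond idle_timeout once more than spare of them are idle, which would explain PHP processes being restarted periodically.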
OK, so it's the PHP application processes that are seemingly leaking memory... Probably not much we can do unless you are able to provide a reproducer. You could try isolating certain parts of your scripts, e.g. don't make database connections, don't parse file data, etc., and see if you can isolate what is causing the issue...
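One hypothetical way to do that isolation (the environment flags and stubbed sections here are made up for illustration, not taken from the original scripts) is to gate each suspect part of the script behind a switch and compare memory growth between runs:

```php
<?php
// Hypothetical bisection sketch: the flag names and sections are illustrative.
// Disable one suspect area at a time and compare memory growth between runs.
$skipDb    = getenv('SKIP_DB') === '1';
$skipFiles = getenv('SKIP_FILES') === '1';

if (!$skipDb) {
    // database work under test, e.g. $pdo = new PDO($dsn, $user, $pass);
}

if (!$skipFiles) {
    // file parsing under test, e.g. reading and decoding uploaded data
}

echo "OK\n";
```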
@ac000 Thank you for the response. I have been working on this for the past week, and it appears to me that the issue is possibly caused by PHP sessions. Maybe I am wrong, but let me share my findings.

config.json
Dockerfile
docker-compose.yml
supervisord.conf
And finally, test.php
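The attached test.php is not reproduced here; a minimal sketch along these lines (contents assumed, not the actual script) would exercise the same behaviour, with the two session lines being the ones commented out for the no-session runs:

```php
<?php
// Illustrative sketch only -- not the actual attached test.php.
// Comment out the two session lines below for the "no sessions" variant.
session_start();                            // session line 1
$_SESSION['blob'] = str_repeat('x', 4096);  // session line 2

echo "OK\n";
```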
With this setup, when you start the container, it looks like this:

Tests
Thanks for looking into this, good job on narrowing it down, I'll have a looksee...
@ac000 thanks!! It didn't strike me until I sent my previous message to do a test with Apache, just out of curiosity. It looks like the memory usage in Apache also shows a similar climbing behaviour, though at a slower rate. This could be because Unit handles more requests than Apache, I'm not sure. Maybe this is normal behaviour? Just wanted to let you know of this finding. If you do come across anything that looks odd, it would be great to know. Otherwise, it is still good to know that it is sessions causing the swell, and to address memory limits accordingly.
OK, so here's what I'm seeing... Starting Unit, the PHP language module application starts out at around 5-6MB, looking at the process's Resident Set Size (RES) in top(1), e.g.
With your script with the two commented out lines, hitting it with 1 request results in
Hitting it another 10 times results in
and another 100 times
Let's go for another 1000 times...
and another 100,000 times
With the session stuff enabled... At start
1 hit
+10 hits
+100 hits
+1000 hits
+10,000 hits
100,000 took a long time, but even with just 10,000 hits you can see we are getting similar behaviour with and without the session stuff. It does look like we may be leaking something... Let's try an empty PHP script. At start
1 hit
+10 hits
+100 hits
+1000 hits
+10,000 hits
So, similar behaviour again... I'll dig a little deeper.
Interesting!! Here is some info from a test I did this morning that goes along with your observation.
Right, /tmp is usually on tmpfs, which is backed by memory. Do you see the same issue if you have the sessions stored on a real fs, e.g. /var/tmp (assuming it's not just a symlink to /tmp)? Having played a little with ASAN (Address Sanitizer), it's not showing any run-time leaks (we do seem to leak some memory at startup that isn't freed at shutdown), yet the PHP application process does seemingly grow over time... Next is to see what valgrind(1) finds.
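For illustration, one way to try that from the script itself (the directory path is an assumption; it must exist and be writable by the application processes):

```php
<?php
// Sketch only: store sessions on a real filesystem instead of tmpfs-backed /tmp.
// The path is an assumption; it must exist and be writable by the PHP processes.
session_save_path('/var/tmp/php-sessions');
session_start();

$_SESSION['blob'] = str_repeat('x', 4096);
echo "OK\n";
```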
I had done something similar already and had similar results, but for confirmation, I did it again. I made the following changes to
config.json
Hope this is what you were looking for? I then ran the same tests.
At this point,
3a. While I was in another window for a bit and came back a minute or so later, there was a small jump in memory.
And phpApp at this point
While finishing this write-up, I checked back on the container, and interestingly, it is back to
This final jump back to the starting point made me redo the test. Same general behaviour, but this time I set a timer and kept watching the docker stats.
And container goes to
I think it would be better to look at the Unit processes' memory usage rather than the container as a whole. The kernel will aggressively use free memory (free memory is wasted memory; better to use it for caching). I think we do leak some memory; valgrind(1) certainly thinks so and testing seems to indicate so. However, I don't really see anything particular with sessions...
You are right, sessions may not have anything to do with this. The only reason I picked it is that, after so many days of going back and forth with inconsistent findings, it was the one thing I could use to repeatably reproduce the behaviour. As you mentioned, I looked into the processes, and my findings over the weekend may confirm your observation. I disabled most of the session-creating scripts and let it run, exposed to a copy of live traffic. The memory swells, but not by much in the sessions folder. As you suggested, I looked at the processes. I have other containers like redis/mysql etc., but they are running stable, so I am omitting them for brevity.
At this point, if I kill unit: router
Obviously, the memory difference does not correspond to the session files, and killing the router brings memory use back down, but not to where it started. Again, this may not mean anything; just putting it out there in case it helps ask better questions. Let me know if you want me to do any more tests.
Bug Overview
Hi,
I have a project using unit:1.34.1-php8.4. I am testing it in Docker, along with Redis and MySQL. Everything is working fine, except that the PHP container's memory usage keeps increasing.
Docker-compose.yml
Dockerfile
docker stats at start looks like below.
When it is exposed to traffic, the memory for phpApp slowly starts to rise. Say, it goes from 120MB at start to ~400MB in 12 hours or so, and it keeps going to the point where the container gets restarted.
This is with traffic, after some time
Trying to narrow down what might be causing this, I have identified a process in the process list that shows notable CPU usage (3-4%, while the others are in the 1.x% range). In the screenshot below, PID 15 is the one I am referring to.
Looking into this process, I could see a lot of entries trying to access the host machine, with permission denied. Adding
privileged: true
to the docker-compose file for the phpApp got rid of this error, and I could see a bunch of entries like below
Note that even when I was seeing all those permission-denied entries under PID 15, the app was running fine, so it is not affecting the app. If I kill this process, the memory usage comes down instantly.
But another process spawns and starts doing the same thing.
Expected Behavior
Memory usage should remain more or less the same for a constant load, but it keeps increasing.
Steps to Reproduce the Bug
I use the docker-compose up command to start the containers on AWS EC2.
Environment Details
Additional Context
Any help is much appreciated. I can provide any additional info (config.json, supervisord.conf, etc.) if needed.