Support max-requests and max-requests-jitter config parameters #34
This will require quite an effort, as the main process – which right now is quite simple – should behave like a real arbiter and effectively manage child processes continuously. I can probably take a look at what gunicorn does, but this will certainly require some time.
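A minimal sketch of what such an arbiter loop could look like, assuming a hypothetical `serve_forever` worker entry point and illustrative `MAX_REQUESTS` / `MAX_REQUESTS_JITTER` values; this is not Granian's or gunicorn's actual code, just the general shape of the idea:

```python
# Sketch of an arbiter: the parent watches its children and respawns any worker
# that exits after serving its (jittered) request budget.
import multiprocessing as mp
import random
import time

MAX_REQUESTS = 1000          # recycle a worker after roughly this many requests
MAX_REQUESTS_JITTER = 100    # randomize the limit so workers don't all restart at once


def serve_forever(limit: int) -> None:
    """Placeholder worker loop: handle requests until the budget is exhausted."""
    handled = 0
    while handled < limit:
        time.sleep(0.001)    # stand-in for accepting and handling one request
        handled += 1
    # Returning makes the process exit cleanly; the arbiter respawns it below.


def arbiter(num_workers: int) -> None:
    workers: dict[int, mp.Process] = {}
    while True:
        # Spawn missing workers, each with a jittered request budget.
        while len(workers) < num_workers:
            limit = MAX_REQUESTS + random.randint(0, MAX_REQUESTS_JITTER)
            p = mp.Process(target=serve_forever, args=(limit,))
            p.start()
            workers[p.pid] = p
        # Reap any worker that exited (e.g. hit its budget) and loop to respawn it.
        for pid, p in list(workers.items()):
            p.join(timeout=1.0)
            if not p.is_alive():
                del workers[pid]


if __name__ == "__main__":
    arbiter(num_workers=2)
```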
A similar/related feature is a request timeout after which to kill a process if it did not respond.
I'm not sure Granian should handle all of these complexities.
This also makes sense, but I'm not aware of the trade-offs. I created this ticket with the feature in mind that made me choose gunicorn over alternatives like Daphne. In any reasonably complex web app (enterprise-scale, for instance) memory leaks seem to be a big issue, so this felt like a must-have. That's obviously not true for every project, and perhaps Granian works perfectly fine for other use cases.

Further stream of thought: as @snuderl pointed out, a request timeout matters too, because as the team grows devs tend to add all kinds of things like network requests without timeouts. At that point stray endpoints making those requests will simply hang, and you're forced to either rely on a global timeout or hunt down each network request and hack/patch your way into adding one. At work (a team of ~100 devs) these are the flags that keep the app running smoothly without too many problems.
The timeout of 28 seconds is just to keep it lower than 30 (which was the Heroku hard limit), so we can still receive Python traceback errors. The initial idea was to progressively lower it and force all endpoints to "improve" performance over time until reaching a sane value.
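For reference, the gunicorn settings being discussed look roughly like this in a `gunicorn.conf.py`; `timeout`, `max_requests`, and `max_requests_jitter` are real gunicorn settings, while the worker count and request limits here are placeholder values:

```python
# gunicorn.conf.py -- the kind of configuration described above.
workers = 4
timeout = 28                 # kill a worker that hasn't responded in 28s (< Heroku's 30s cutoff)
max_requests = 1000          # recycle a worker after ~1000 requests to cap leaked memory
max_requests_jitter = 100    # stagger restarts so workers don't all recycle at once
```

The same settings are also exposed as CLI flags (`--timeout`, `--max-requests`, `--max-requests-jitter`).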
Timeouts and backpressure (max requests) sound like features for Emmett, not for Granian.
This also prevents autoscalers from scaling down when the peak is over.
🤔 Can you elaborate on this? Because I can't see any correlation between an autoscaler's behaviour and how a process manages its own child processes.
If workers don't release the memory they took, and the autoscaler spawned more pods because of memory consumption, the number of pods will never go back down.
One of the common problems with Django apps is memory leaks. The most common suggestion is to allow the webserver to cycle processes after serving some fixed number of requests. This sidesteps memory issues by simply returning the used-up memory and starting fresh.
Perhaps you can consider supporting something similar to gunicorn early in the design.
This is also supported in uwsgi, one of the popular alternatives to gunicorn.
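To make the failure mode concrete, here is a deliberately contrived WSGI app with a slow leak (a module-level cache that only ever grows); worker recycling caps how large such a leak can get before the process is replaced. This is purely illustrative and not taken from any real project:

```python
# A toy WSGI app with an accidental "leak": every request adds an entry to a
# module-level cache that is never evicted, so the worker's memory only grows.
_cache = {}

def application(environ, start_response):
    # Store ~1 KiB per request under a unique key -- never freed.
    _cache[f"{environ.get('PATH_INFO', '/')}:{len(_cache)}"] = bytearray(1024)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```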
Similar issues are solved in Celery via worker_max_tasks_per_child, since I think it's fairly hard to track down and fix memory issues in larger Python projects with hundreds of dependencies.
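For comparison, the Celery setting mentioned above is a one-liner in the app configuration; the app name, broker URL, and limit of 500 below are placeholder values:

```python
# Celery analogue of max-requests: replace a worker process after it has
# executed a fixed number of tasks. `worker_max_tasks_per_child` is a real
# Celery setting; everything else here is a placeholder.
from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")
app.conf.worker_max_tasks_per_child = 500
```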