Workers are selected sequentially and request processing may hang while free workers are available #100
Thank you for reporting. Woo takes a round-robin approach to assigning jobs to workers because it's simple to implement, easy to understand, and has little performance impact in most cases.
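To make the failure mode concrete, here is a toy model of round-robin assignment (a hypothetical sketch in Python, not Woo's actual code): each incoming job goes to the next worker's private queue regardless of whether that worker is busy.

```python
from collections import deque

class RoundRobinDispatcher:
    """Toy round-robin dispatcher: jobs are assigned to workers
    in a fixed rotation, even when other workers sit idle."""
    def __init__(self, num_workers):
        self.queues = [deque() for _ in range(num_workers)]
        self.next = 0

    def assign(self, job):
        self.queues[self.next].append(job)
        self.next = (self.next + 1) % len(self.queues)

d = RoundRobinDispatcher(4)
for job in ["sleep", "a", "b", "c", "d"]:
    d.assign(job)

# "sleep" and "d" both land on worker 0, so if "sleep" takes 15 s,
# "d" waits behind it even though workers 1-3 are idle.
print([list(q) for q in d.queues])  # [['sleep', 'd'], ['a'], ['b'], ['c']]
```

This is exactly the situation described in the issue: the fifth request hangs behind the slow one while three workers do nothing.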
Are there other scheduling methods I can try switching to?

Nothing yet.
I believe the problem can be caused not only by heavy jobs inside an application request handler, but also by malicious clients performing a Slow HTTP DoS attack. I haven't checked this yet, but I'm fairly sure the problem will be the same.
Alright.

I have no idea what kind of statistics could help here.
It is for the situation that you mentioned in the previous comment, since you don't seem to be sure what is going on.
I think that during a Slowloris DoS attack, Woo will give up sooner than other web servers with more advanced worker schedulers. Imagine you have 100 workers. Usually a Slowloris attack requires 100 slow connections, whereas with Woo a single such connection would be enough.
I'll try to find some time to test Woo and Hunchentoot against this kind of attack.
Slowloris test results

Today I tested how Woo and Hunchentoot behave under load during a Slowloris DoS attack. I found that Woo is vulnerable to a lesser degree than Hunchentoot, but it is still possible to bring the server down. At least in my server configuration it required about 1000 simultaneous slow connections, whereas Hunchentoot went down after 100 (because its thread pool is limited to 100 workers by default).

Conclusion: Hunchentoot and Woo are both vulnerable, but Woo requires 10 times more connections from the attacker (this is not much of an obstacle, because Slowloris does not require much bandwidth). Some techniques can probably be applied to protect Woo against this kind of attack; the Wikipedia article lists a few of them.

Slow workers test results

Also, I realized that the Slowloris attack is not the problem I started this issue for. I started it because Woo hangs when processing a request takes a significant amount of time. Here it still behaves worse than Hunchentoot.

To test this problem, I created an app which sleeps 1 second before each response. Woo was started with

In this configuration the server is able to serve only 10 requests per second. This means that with concurrency 20 some clients will wait, and the average response time should be about 2 seconds instead of 1. The test shows the expected performance:

When

Conclusion

For production we can protect ourselves from a large variance in execution time by applying a timeout. Also, a reasonable number of threads should be given as

Notes

For the Slowloris attack I used this tool running in a Docker container:

This is the code I used to start the server:

Here is the full video record of my investigation:
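As a sanity check on the capacity math from the slow-workers test above (10 workers, a 1-second handler, concurrency 20), a back-of-the-envelope calculation in Python:

```python
# Capacity arithmetic for the slow-workers test described above.
workers = 10
service_time = 1.0                        # seconds each request sleeps
max_throughput = workers / service_time   # requests per second at saturation
concurrency = 20

# 20 concurrent clients share 10 workers, so on average each request
# queues behind one other request of the same length: ~2 s response
# time instead of 1 s.
avg_response = concurrency / max_throughput

print(max_throughput)  # 10.0
print(avg_response)    # 2.0
```

This matches the reported result: roughly 10 requests per second served, with the average latency doubling once concurrency exceeds the worker count.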
@fukamachi I think this issue should be closed, because it is a scheduling issue and can be solved by placing a timeout on request processing. But you'll probably want to mitigate Slowloris attacks, and a new issue should be created for that instead.
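The timeout mitigation suggested above can be sketched as follows. This is an illustrative Python model, not Woo's API: `handle_with_timeout` and both handlers are hypothetical names, and the idea is simply that a request handler is abandoned after a deadline so one slow request cannot hold a worker forever.

```python
import concurrent.futures
import time

def handle_with_timeout(handler, request, timeout=1.0):
    """Run a request handler, giving up after `timeout` seconds.
    (Illustrative sketch; names and shapes are assumptions.)"""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(handler, request)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            return "504 Gateway Timeout"

fast = lambda req: "200 OK"
slow = lambda req: (time.sleep(0.2), "200 OK")[1]  # simulates a heavy handler

print(handle_with_timeout(fast, {}, timeout=1.0))   # 200 OK
print(handle_with_timeout(slow, {}, timeout=0.05))  # 504 Gateway Timeout
```

Note that a timeout caps the damage from one slow request but does not change the scheduler: under round-robin, requests can still queue behind a busy worker for up to the timeout duration.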
This leads to a situation where we finally encounter a busy worker and wait for it while other free workers are available.
How to reproduce

1. Create an app which does `(sleep 15)` on the `/sleep` URL and responds immediately on `/`.
2. Start Woo with `:worker-num 4`.
3. Make a request: `curl localhost:8080/sleep`.
4. Make four requests: `curl localhost:8080/`. The first three attempts will return immediately, but the fourth will wait for the worker processing the request from the third step.

Expected behaviour

Free workers are reused; busy ones are not.
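The expected behaviour amounts to a shared work queue: idle workers pull the next job themselves, so a long-running job occupies exactly one worker and never delays the others. A minimal Python sketch of that design (an assumed model for illustration, not Woo's implementation):

```python
import queue
import threading
import time

jobs = queue.Queue()   # one shared queue instead of per-worker rotation
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # shutdown sentinel
            return
        duration, name = job
        time.sleep(duration)     # simulate request processing
        results.append(name)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

jobs.put((0.5, "slow"))          # analogue of GET /sleep
for name in ("a", "b", "c"):     # analogues of GET /
    jobs.put((0.0, name))
for _ in threads:                # one sentinel per worker
    jobs.put(None)
for t in threads:
    t.join()

# The three fast jobs finish while "slow" is still sleeping,
# so "slow" completes last instead of blocking anyone.
print(results)
```

With this scheme the fourth fast request from the reproduction steps would be served immediately by any idle worker, rather than queuing behind the worker stuck in `(sleep 15)`.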
Probably this issue is somehow related to this old discussion about performance and queues: Better worker mechanism.