
Limit HTTP server's concurrency using semaphore. #1347

Closed

Conversation

juliusmh commented Oct 16, 2024

Signed-off-by: Julius Hinze [email protected]

Closes: #1346

dougbtv (Member) commented Oct 24, 2024

Overall I'm in favor of it, and I like that it's parameterized, but I'd like to see that pods do eventually come up when there are more pods than the concurrency limit; I believe those pods would crash-loop?

I think we should have some testing for it; best case, e2e tests that capture both the error and the success paths.

dougbtv (Member) commented Oct 24, 2024

Tomo also suggested taking a look at the limiters on https://pkg.go.dev/golang.org/x/net/netutil

juliusmh (Author) commented Nov 8, 2024

> Overall I'm in favor of it, and I like that it's parameterized, but I'd like to see that pods do eventually come up when there are more pods than the concurrency limit; I believe those pods would crash-loop?
>
> I think we should have some testing for it; best case, e2e tests that capture both the error and the success paths.

These pods would wait while the semaphore blocks, until the shim times out the requests. In that case the pod fails and kubelet retries later. Note that the overall CRI timeout is 4 minutes; we could adjust the shim's timeout to match that.

So, we should decide whether we want to

  • fail the HTTP request when concurrency is too high, and retry from the shim?
  • or block while the concurrency limit is exceeded?
  • block using a semaphore at the listener level (LimitListener)?
  • or block inside the handler (this PR)?

juliusmh force-pushed the jmh/limit_concurrency branch from 4a4f556 to 685bb6b on November 12, 2024
juliusmh (Author) commented Nov 12, 2024

@dougbtv Fixed the unit test and rebased. An implementation using the LimitListener you suggested is here: #1356

s1061123 (Member) commented Dec 5, 2024

I suppose that with LimitListener we don't need to introduce a semaphore, because LimitListener does the same thing. Can we close this PR and proceed with #1356?

coveralls commented

Coverage Status: coverage 56.391% (-0.06%) from 56.454% when pulling 685bb6b on juliusmh:jmh/limit_concurrency into 781ecda on k8snetworkplumbingwg:master.

juliusmh (Author) commented Dec 9, 2024

Closing in favor of #1356

juliusmh closed this Dec 9, 2024
Successfully merging this pull request may close these issues: [OOMKilled] High memory consumption