
Feature: http retry #2333

Open · szuecs wants to merge 2 commits into master from feature/http-retry
Conversation

@szuecs (Member) commented May 15, 2023

Feature: http retry in httpclient

fixes #1473


@szuecs szuecs force-pushed the feature/http-retry branch from 6b74ba9 to 5608ee7 Compare March 6, 2024 17:38
return c.Do(req)
}

if rc, err := req.GetBody(); err == nil {
@szuecs (Member Author) commented on this diff:

This seems to work in all cases I tried, but as far as I understand it only works for the three types of body for which net/http sets GetBody automatically, see https://github.com/golang/go/blob/master/src/net/http/request.go#L877-L881.
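
For context: the standard library sets Request.GetBody only when the body is a *bytes.Buffer, *bytes.Reader, or *strings.Reader (the three cases at the linked lines); other callers must set GetBody themselves. Below is a minimal hypothetical sketch of the rewind-and-retry idea, not the PR's implementation:

package main

import (
    "bytes"
    "net/http"
)

// retryDo is a hypothetical helper: it sends the request once and, if the
// attempt fails, rewinds the body via GetBody and sends it again.
func retryDo(c *http.Client, req *http.Request) (*http.Response, error) {
    rsp, err := c.Do(req)
    if err == nil {
        return rsp, nil
    }
    // GetBody is set automatically only for bodies created from
    // *bytes.Buffer, *bytes.Reader or *strings.Reader (the three cases in
    // the linked request.go lines); other callers must set it themselves.
    if req.GetBody != nil {
        if body, berr := req.GetBody(); berr == nil {
            req.Body = body
            return c.Do(req)
        }
    }
    return rsp, err
}

func main() {
    req, _ := http.NewRequest("POST", "http://example.invalid/", bytes.NewBufferString("payload"))
    retryDo(http.DefaultClient, req)
}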

@szuecs (Member Author) commented on this diff:

GetBody() has a problem, though: the next line panics in TestClientRetryBodyHalfReader.

@szuecs szuecs marked this pull request as ready for review March 13, 2024 23:18
@szuecs szuecs added the major label (moderate risk, for example new API, small filter changes that have no risk like refactoring or logs) Mar 13, 2024
@szuecs szuecs force-pushed the feature/http-retry branch from a834d90 to ea54f7e Compare March 14, 2024 22:26
@bjhaid commented Mar 19, 2024

From reading the code it is unclear if it tries a new backend. If it doesn't, is it desirable for the new retry filter to attempt a different backend from the previously failed one?

@szuecs (Member Author) commented Mar 19, 2024

@bjhaid yes, in case of a loadbalancer backend it would retry by running the algorithm again, so for roundRobin it would pick the current "next" endpoint. I think running the same algorithm again is fine; consider a single-backend failure, for example a broken node that leaves a backend instance in a half-broken state where it still responds to "/ready" but no longer handles most requests. I would guess a "network" backend target would behave exactly the same way: it runs some lb algorithm in whatever load balancer sits in front of the application.
The retry executes https://github.com/zalando/skipper/pull/2333/files#diff-1d74870266949f1927a500e5f0002617239d3e3cd9bcceee6be014738e5e0f2bR1272 makeBackendRequest again, which contains the call to mapRequest(), which executes selectEndpoint, which runs the lb algorithm.

Do you see an issue with this behavior?
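
To illustrate the behavior described above, here is a heavily simplified, hypothetical sketch (not skipper's code): selectEndpoint stands in for the proxy's lb algorithm and doBackendRequest for makeBackendRequest; every attempt re-runs endpoint selection, so a retry can land on a different endpoint than the one that just failed.

package main

import (
    "fmt"
    "net/http"
    "sync/atomic"
)

var rrCounter uint64

// selectEndpoint runs a trivial round-robin over the endpoint list.
func selectEndpoint(endpoints []string) string {
    n := atomic.AddUint64(&rrCounter, 1)
    return endpoints[int(n)%len(endpoints)]
}

// doBackendRequest sends the request to the chosen endpoint.
func doBackendRequest(endpoint string, req *http.Request) (*http.Response, error) {
    req.URL.Host = endpoint
    return http.DefaultTransport.RoundTrip(req)
}

// retryingBackendCall re-runs endpoint selection on every attempt, so a retry
// is not pinned to the endpoint that just failed.
func retryingBackendCall(req *http.Request, endpoints []string, attempts int) (*http.Response, error) {
    var (
        rsp *http.Response
        err error
    )
    for i := 0; i < attempts; i++ {
        endpoint := selectEndpoint(endpoints)
        rsp, err = doBackendRequest(endpoint, req)
        if err == nil {
            return rsp, nil
        }
    }
    return rsp, err
}

func main() {
    req, _ := http.NewRequest("GET", "http://placeholder.invalid/", nil)
    rsp, err := retryingBackendCall(req, []string{"10.0.0.1:8080", "10.0.0.2:8080"}, 3)
    fmt.Println(rsp, err)
}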

@bjhaid commented Mar 19, 2024

> The retry executes https://github.com/zalando/skipper/pull/2333/files#diff-1d74870266949f1927a500e5f0002617239d3e3cd9bcceee6be014738e5e0f2bR1272 makeBackendRequest again, which contains the call to mapRequest(), which executes selectEndpoint, which runs the lb algorithm.
>
> Do you see an issue with this behavior?

Thanks for the clarification! It wasn't obvious to me from reading the changeset. This behavior makes sense to me. I am also interested in seeing this ship, thanks for reviving it ❤️

@@ -125,9 +132,41 @@ func (c *Client) Do(req *http.Request) (*http.Response, error) {
            req.Header.Set("Authorization", "Bearer "+string(b))
        }
    }
    if req.Body != nil && req.Body != http.NoBody && req.ContentLength > 0 {
        retryBuffer := skpio.NewCopyBodyStream(int(req.ContentLength), &bytes.Buffer{}, req.Body)
        c.retryBuffers.Store(req, retryBuffer)
A reviewer (Member) commented on this diff:

Hm, I do not see a corresponding Delete. Looks like this will leak buffers.

A reviewer (Member) commented on this diff:

Do we need this in the client? Why not let users use CopyBodyStream like we do in proxy?

@szuecs (Member Author) replied:

The proxy code handles a single request, but here we have a client that could handle hundreds of requests, and we need to get the right body for each retry. If we want to support retries with a body, we need to store it somewhere.

@szuecs (Member Author) replied:

I saw that we could use LoadAndDelete() instead of Load(), but that would make only a single retry possible; maybe that's good enough for now.
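
For reference, a minimal hypothetical sketch of the LoadAndDelete variant (not the PR's code, and using a plain byte slice instead of skpio.CopyBodyStream): the body copy is stored before the first attempt and removed when a retry picks it up, so the map cannot leak, at the cost of allowing only one retry to restore the body.

package main

import (
    "bytes"
    "io"
    "net/http"
    "sync"
)

// retryBuffers keys buffered request bodies by *http.Request, similar in
// spirit to c.retryBuffers in the diff above.
var retryBuffers sync.Map

// storeBody copies the request body so it can be replayed later and replaces
// req.Body with a reader over that copy.
func storeBody(req *http.Request) error {
    if req.Body == nil || req.Body == http.NoBody || req.ContentLength <= 0 {
        return nil
    }
    buf := &bytes.Buffer{}
    if _, err := buf.ReadFrom(req.Body); err != nil {
        return err
    }
    retryBuffers.Store(req, buf.Bytes())
    req.Body = io.NopCloser(bytes.NewReader(buf.Bytes()))
    return nil
}

// restoreBodyOnce resets req.Body from the stored copy. LoadAndDelete removes
// the entry, so nothing is left behind, but only one retry gets the body.
func restoreBodyOnce(req *http.Request) bool {
    v, ok := retryBuffers.LoadAndDelete(req)
    if !ok {
        return false
    }
    req.Body = io.NopCloser(bytes.NewReader(v.([]byte)))
    return true
}

func main() {
    req, _ := http.NewRequest("POST", "http://example.invalid/", bytes.NewBufferString("payload"))
    _ = storeBody(req)
    _ = restoreBodyOnce(req)
}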

@szuecs (Member Author) replied:

> Do we need this in the client? Why not let users use CopyBodyStream like we do in proxy?

I thought it would make sense to provide it in the client, for the user's convenience.

@szuecs szuecs force-pushed the feature/http-retry branch 3 times, most recently from 96e1c7d to 5aa5388 Compare March 26, 2024 15:15
@bjhaid commented Aug 12, 2024

@szuecs is there anything I can do to help make progress on this PR? I would love to see this ship.

@szuecs (Member Author) commented Aug 13, 2024

@bjhaid sorry, I was on vacation, and I just came back to a backlog of changes and emails and got a bit slurped into zone awareness. 😀
Let me see what is missing to make it happen.

A great way to help would be to tell us what you expect from retry() and what kind of configuration you would like to pass to this filter.
Would you like a single retry, exponential backoff, or maybe a token-bucket style like Marc showed in https://brooker.co.za/blog/2022/02/28/retries.html ?
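
To make those options concrete, here is a hypothetical sketch (not part of this PR) combining jittered exponential backoff with a crude token-bucket retry budget in the spirit of the linked post: each retry costs a token, and retries stop once the budget is drained.

package main

import (
    "math/rand"
    "net/http"
    "sync/atomic"
    "time"
)

// retryBudget is a crude token bucket: each retry costs one token, and tokens
// are refilled slowly so retries cannot amplify load during an outage.
type retryBudget struct{ tokens int64 }

func (b *retryBudget) allow() bool {
    return atomic.AddInt64(&b.tokens, -1) >= 0
}

func (b *retryBudget) refill(n int64, every time.Duration) {
    for range time.Tick(every) {
        atomic.AddInt64(&b.tokens, n)
    }
}

// doWithRetry performs up to maxRetries retries with jittered exponential
// backoff, each retry gated by the budget.
func doWithRetry(c *http.Client, req *http.Request, maxRetries int, budget *retryBudget) (*http.Response, error) {
    rsp, err := c.Do(req)
    backoff := 50 * time.Millisecond
    for i := 0; i < maxRetries && err != nil && budget.allow(); i++ {
        // sleep between backoff/2 and 1.5*backoff to spread out retries
        time.Sleep(backoff/2 + time.Duration(rand.Int63n(int64(backoff))))
        rsp, err = c.Do(req)
        backoff *= 2
    }
    return rsp, err
}

func main() {
    budget := &retryBudget{tokens: 10}
    go budget.refill(1, time.Second)
    req, _ := http.NewRequest("GET", "http://example.invalid/", nil)
    doWithRetry(http.DefaultClient, req, 3, budget)
}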

@bjhaid commented Aug 13, 2024

> A great way to help would be to tell us what you expect from retry() and what kind of configuration you would like to pass to this filter.

Something like retry(int N), where N is the number of times the request is retried (capped at some reasonable value), with a default like 3.
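
If the filter ends up with this retry(N) shape, the argument handling could look roughly like the hypothetical sketch below (not skipper's actual filter code); it assumes untyped filter arguments where numbers commonly arrive as float64, applies the default of 3, and caps N at an assumed maximum of 5.

package main

import "fmt"

// Hypothetical argument handling for a retry(N) filter: default 3, capped at
// "some reasonable value" (5 is an assumption for this sketch).
const (
    defaultRetries = 3
    maxRetries     = 5
)

func parseRetries(args []interface{}) int {
    if len(args) == 0 {
        return defaultRetries
    }
    // filter arguments arrive untyped; numbers commonly come in as float64
    n, ok := args[0].(float64)
    if !ok || n < 1 {
        return defaultRetries
    }
    if n > maxRetries {
        return maxRetries
    }
    return int(n)
}

func main() {
    fmt.Println(parseRetries(nil), parseRetries([]interface{}{7.0}))
}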

@szuecs szuecs force-pushed the feature/http-retry branch from 5aa5388 to 46a8ce4 Compare August 14, 2024 09:45
szuecs added 2 commits August 19, 2024 11:23
feature: retry() filter

Signed-off-by: Sandor Szücs <[email protected]>
Signed-off-by: Sandor Szücs <[email protected]>
@szuecs szuecs force-pushed the feature/http-retry branch from 46a8ce4 to 0a27cce Compare August 19, 2024 09:23
Labels: major (moderate risk, for example new API, small filter changes that have no risk like refactoring or logs)
Successfully merging this pull request may close these issues: retry filter
3 participants