Add handling of 503 Service Unavailable retries #1713
Conversation
* fixes call to issue_type_by_name with missing project id
* and made the existing unit test generic
for more information, see https://pre-commit.ci
suggested_delay = (
    -1
)  # Controls return value AND whether we delay or not, Not-recoverable by default
Right now this comment doesn't quite make sense; renaming this var to `recovery_delay` would make it much more readable.
What would you change the comment to? (It makes sense to me, but I'm OK to add more meaning to it.)
I kind of agree with `recovery_delay`, but then it won't always end up being the actual recovery delay, because of the max limit.
Then you go with -1, right? Anyhow, this was just a minor suggestion, don't worry about it.
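For context, here is a minimal sketch of the sentinel pattern being discussed, where -1 means "not recoverable, do not retry". The names `recovery_delay`, `MAX_RECOVERY_DELAY` and `get_recovery_delay` are illustrative only and are not identifiers from this PR:

```python
# Illustrative sketch only; names and the cap value are hypothetical.
MAX_RECOVERY_DELAY = 10.0  # hypothetical cap (seconds) on how long we sleep


def get_recovery_delay(response) -> float:
    """Return a delay in seconds, or -1 if the response is not recoverable."""
    recovery_delay = -1.0  # -1 means "do not retry" by default
    retry_after = response.headers.get("Retry-After")
    if response.status_code in (429, 503) and retry_after is not None:
        # Because of the cap, the returned value may be smaller than the delay
        # the server actually asked for -- the reviewer's point above.
        recovery_delay = min(float(retry_after), MAX_RECOVERY_DELAY)
    return recovery_delay
```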
if response.status_code in recoverable_error_codes:
    retry_after = response.headers.get("Retry-After")
    if retry_after:
This looks like a regression: 429 should still always be retried.
429s are retried, just with a different delay as per previous behaviour -- see line 322
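One possible reading of that reply, as a hedged sketch rather than the code actually merged in this PR: 429 keeps its previous retry behaviour in all cases, while 503 relies on the `Retry-After` header. The helper name `choose_delay` and the default value are assumptions:

```python
# Sketch only: choose_delay and default_429_delay are hypothetical names, and
# the branching is an illustration of the comment above, not the PR's code.
def choose_delay(response, default_429_delay: float = 10.0) -> float:
    retry_after = response.headers.get("Retry-After")
    if response.status_code == 429:
        # 429 is always retried: use Retry-After if present, otherwise fall
        # back to the previously existing default delay.
        return float(retry_after) if retry_after else default_429_delay
    if response.status_code == 503 and retry_after:
        # 503 is retried when the server indicates when to come back.
        return float(retry_after)
    return -1.0  # anything else: not recoverable, do not retry
```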
Thank you for this high-quality contribution.
Hi,
This is a follow-up of #925 and the associated PR #1364.
As mentioned in https://developer.atlassian.com/cloud/jira/platform/rate-limiting/, retries should not be limited to 429 responses. 503 responses are slightly different but should follow the same retry logic.
This PR aims at implementing the pseudo-code Atlassian provides on that page while keeping the existing behaviour for 429s. Unit tests have been updated accordingly. See the sketch below for the general retry pattern that page describes.
Note: 503 responses were in fact received from Jira Cloud (08-2023) and the library wasn't able to handle them, hence this PR.
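For readers unfamiliar with the Atlassian page, here is a rough, self-contained sketch of the retry pattern it describes: honour `Retry-After` when present, otherwise back off exponentially with jitter, and treat both 429 and 503 as recoverable. All names and constants below are illustrative assumptions and are not taken from this PR:

```python
# Standalone illustration of the general retry pattern; not the jira library's API.
import random
import time

import requests

RECOVERABLE_CODES = {429, 503}  # illustrative; the PR defines its own set
MAX_RETRIES = 4
BASE_DELAY = 2.0  # seconds
MAX_DELAY = 60.0  # cap on any single sleep


def request_with_retries(method: str, url: str, **kwargs) -> requests.Response:
    """Retry 429/503 responses, honouring Retry-After, else backing off with jitter."""
    for attempt in range(MAX_RETRIES + 1):
        response = requests.request(method, url, **kwargs)
        if response.status_code not in RECOVERABLE_CODES or attempt == MAX_RETRIES:
            return response
        retry_after = response.headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)
        else:
            # Exponential backoff with full jitter when no Retry-After is given.
            delay = random.uniform(0, BASE_DELAY * 2**attempt)
        time.sleep(min(delay, MAX_DELAY))
    return response
```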