
remember and avoid URLs that failed for non-retryable reasons #85

Open
rafaelcr opened this issue Jan 26, 2023 · 1 comment
Comments

@rafaelcr
Collaborator

rafaelcr commented Jan 26, 2023

Some NFT collections specify their metadata URIs incorrectly, e.g. by returning a URL that points to an index of the JSON files for every token (for example https://freepunks.xyz/json/) instead of a URL for the specific token's file.

This causes the service to correctly respond with errors like MetadataSizeExceededError. However, since a collection can have thousands of tokens, the same URL is fetched thousands of times, always with the same result.

We could find a way to remember these faulty URLs and automatically mark token jobs as failed, without fetching anything, whenever a job references one of these incorrect URLs again.
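A minimal sketch of what that short-circuit could look like, using an in-memory cache keyed by URL; every name here (FailureReason, markJobFailed, processTokenJob) is a hypothetical placeholder, not part of the service:

```typescript
// Remember URLs that failed for non-retryable reasons and skip fetching them again.
type FailureReason = 'MetadataSizeExceededError' | 'MetadataParseError';

const faultyUrls = new Map<string, FailureReason>();

// Record a URL whose fetch failed for a reason we never expect to recover from.
function rememberFaultyUrl(url: string, reason: FailureReason): void {
  faultyUrls.set(url, reason);
}

// Return the stored failure reason if the URL is known to be faulty, otherwise undefined.
function knownFailure(url: string): FailureReason | undefined {
  return faultyUrls.get(url);
}

async function processTokenJob(metadataUrl: string): Promise<void> {
  const reason = knownFailure(metadataUrl);
  if (reason !== undefined) {
    // Mark the job as failed immediately instead of re-fetching the same bad URL.
    markJobFailed(metadataUrl, reason); // hypothetical job-state helper
    return;
  }
  // ...fetch and process metadata as usual; on a non-retryable error, call rememberFaultyUrl()...
}

declare function markJobFailed(url: string, reason: FailureReason): void;
```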

@zone117x
Member

zone117x commented Feb 2, 2023

In theory, errors like MetadataSizeExceededError could be transient / retryable. Maybe something like exponential backoff plus a maximum number of retries? It sounds like a new table would be responsible for storing this, e.g. table_name(url: text, last_try: date, attempts: int), and then job parameters could be augmented with a URL join against this table.
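A rough sketch of that suggestion, assuming a Postgres-style table and a doubling backoff; the table name, column types, and constants below are placeholders, not an agreed design:

```typescript
// DDL for a hypothetical tracking table, following the shape suggested above.
const CREATE_FAILED_URLS_TABLE = `
  CREATE TABLE IF NOT EXISTS failed_metadata_urls (
    url TEXT PRIMARY KEY,
    last_try TIMESTAMPTZ NOT NULL,
    attempts INT NOT NULL DEFAULT 1
  )
`;

const MAX_ATTEMPTS = 5;       // give up on the URL entirely after this many failures
const BASE_DELAY_MS = 60_000; // wait 1 minute before the first retry

// Exponential backoff: the delay doubles with each attempt (1m, 2m, 4m, ...),
// and the URL is never retried once MAX_ATTEMPTS is reached.
function shouldRetry(lastTry: Date, attempts: number, now: Date = new Date()): boolean {
  if (attempts >= MAX_ATTEMPTS) return false;
  const delayMs = BASE_DELAY_MS * 2 ** (attempts - 1);
  return now.getTime() - lastTry.getTime() >= delayMs;
}
```

Jobs whose URL has already reached MAX_ATTEMPTS could then be marked as failed up front, without any fetch, which matches the behavior described in the original comment.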

@github-project-automation github-project-automation bot moved this to 🆕 New in API Board Jul 19, 2023
@smcclellan smcclellan moved this from 🆕 New to 📋 Backlog in API Board Jul 21, 2023