Some NFT collections specify their metadata URIs incorrectly, e.g. by returning a URL that points to an index of all the JSON files for every token (for example https://freepunks.xyz/json/) instead of a URL to the specific token's file.
This causes the service to correctly respond with errors like MetadataSizeExceededError. However, since a collection can have thousands of tokens, the same URL is fetched thousands of times, always with the same result.
We could remember these faulty URLs and automatically mark token jobs as failed, without fetching anything, whenever they reference one of these incorrect URLs again.
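A minimal sketch of that idea, using hypothetical names (`knownBadUrls`, `fetchTokenMetadata`, `MAX_METADATA_BYTES`) rather than the service's actual code: keep a record of URLs that already produced a permanent error and short-circuit any later job that points at the same URL.

```typescript
// Sketch only: hypothetical store of metadata URLs that previously failed
// with a permanent error such as an oversized response.
const knownBadUrls = new Set<string>();

class MetadataSizeExceededError extends Error {}

// Assumed size limit for a single token's metadata JSON.
const MAX_METADATA_BYTES = 1_000_000;

// Hypothetical fetch wrapper used by token metadata jobs.
async function fetchTokenMetadata(url: string): Promise<string> {
  if (knownBadUrls.has(url)) {
    // Short-circuit: this URL already failed once, so fail the job
    // immediately instead of fetching the same oversized index again.
    throw new MetadataSizeExceededError(`known bad metadata URL: ${url}`);
  }
  const response = await fetch(url);
  const body = await response.text();
  if (new TextEncoder().encode(body).byteLength > MAX_METADATA_BYTES) {
    knownBadUrls.add(url);
    throw new MetadataSizeExceededError(`metadata too large at ${url}`);
  }
  return body;
}
```

In practice the set would need to be persisted (e.g. in the database) so the short-circuit survives restarts and is shared across workers.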
In theory, errors like MetadataSizeExceededError could be transient/retryable. Maybe something like exponential backoff plus a maximum number of retries? Sounds like a new table would be responsible for storing this, e.g. table_name(url: text, last_try: date, attempts: int), and we'd then augment the job parameters with a URL join against this table?
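A rough sketch of how the retry gate could work against such a table; the table, column, and function names here are assumptions for illustration, not the service's actual schema:

```typescript
// Assumed table: metadata_fetch_failures(url text primary key, last_try timestamptz, attempts int)
interface FailureRecord {
  url: string;
  lastTry: Date;
  attempts: number;
}

const MAX_ATTEMPTS = 5;       // assumed cap before the URL is treated as permanently bad
const BASE_DELAY_MS = 60_000; // assumed base backoff of 1 minute

// Exponential backoff: 1m, 2m, 4m, 8m, ... until MAX_ATTEMPTS is reached.
function retryAllowed(record: FailureRecord | undefined, now: Date = new Date()): boolean {
  if (!record) return true;                          // URL has never failed
  if (record.attempts >= MAX_ATTEMPTS) return false; // give up: mark jobs as failed without fetching
  const delayMs = BASE_DELAY_MS * 2 ** (record.attempts - 1);
  return now.getTime() - record.lastTry.getTime() >= delayMs;
}
```

A token job would then look up its metadata URL in the failure table (e.g. via a LEFT JOIN when the job is enqueued or picked up) and call something like retryAllowed() before attempting the fetch, incrementing attempts and last_try on each failure.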