Currently we use up quite a lot of API requests every time we scrape the GitHub data. This means that if we update the site too many times, we'll hit our rate limit and have to wait a day. We should find a way to scrape this data with fewer API calls — for example, by storing the data in a persistent cloud bucket and incrementally updating it rather than scraping from scratch each time. A sketch of one such approach is included below.
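One way this could work (a minimal sketch, not a finished design): the GitHub API supports conditional requests, and responses answered with `304 Not Modified` don't count against the primary rate limit. So we could keep the scraped data plus each endpoint's `ETag` in persistent storage and only pay for requests whose data actually changed. The `cache.json` file here is a hypothetical stand-in for the cloud bucket, and the example org URL is illustrative only.

```python
# Sketch: conditional GitHub API requests backed by a persistent cache.
# cache.json stands in for the cloud bucket; a 304 response means the
# cached copy is still current and the request was rate-limit free.
import json
import requests

CACHE_PATH = "cache.json"  # hypothetical; replace with a bucket object


def load_cache():
    try:
        with open(CACHE_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}


def fetch(url, cache):
    """Fetch `url`, reusing the cached body when GitHub reports no change."""
    headers = {"Accept": "application/vnd.github+json"}
    entry = cache.get(url)
    if entry and entry.get("etag"):
        headers["If-None-Match"] = entry["etag"]  # conditional request
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:  # unchanged: no rate-limit cost
        return entry["body"]
    resp.raise_for_status()
    cache[url] = {"etag": resp.headers.get("ETag"), "body": resp.json()}
    return cache[url]["body"]


if __name__ == "__main__":
    cache = load_cache()
    repos = fetch("https://api.github.com/orgs/github/repos", cache)  # example URL
    with open(CACHE_PATH, "w") as f:
        json.dump(cache, f)
```

On a typical update run, most endpoints would come back `304`, so the scrape would only spend API quota on repos that actually changed since the last run.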