Is your feature request related to a problem? Please describe.
Every time you request data that you have already downloaded, it is re-downloaded from scratch.
Describe the solution you'd like
A locally stored database cache. Ideally this would be allowed to have a maximum size, so we would benefit from an ORM that lets us easily do bulk removes, updates, and inserts. The data model should be based only on what's available via the bulk CSV downloads; the JSON service queries are an extra goodie which could be mapped across later. SQLAlchemy would therefore be an additional dependency, and SQLite would be the obvious choice for the cache itself.
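A minimal sketch of what the cache could look like, using only the stdlib `sqlite3` module (the real implementation would define equivalent tables via SQLAlchemy's ORM). The table name, columns, and size-based eviction policy here are assumptions, loosely modelled on the bulk CSV downloads:

```python
import sqlite3

def open_cache(path=":memory:"):
    """Open (or create) the local SQLite cache. Schema is a sketch only."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS water_levels (
               well_id    TEXT,
               obs_date   TEXT,
               dtw        REAL,   -- depth to water (hypothetical column)
               fetched_at REAL    -- unix time the row was cached
           )"""
    )
    return conn

def evict_oldest(conn, max_rows=100_000):
    """Bulk-remove the oldest cached rows once the table exceeds max_rows.

    This is the kind of bulk delete an ORM would make convenient; raw SQL
    is shown here only to keep the sketch dependency-free.
    """
    (n,) = conn.execute("SELECT COUNT(*) FROM water_levels").fetchone()
    if n > max_rows:
        conn.execute(
            """DELETE FROM water_levels WHERE rowid IN (
                   SELECT rowid FROM water_levels
                   ORDER BY fetched_at ASC LIMIT ?)""",
            (n - max_rows,),
        )
        conn.commit()
```

With SQLAlchemy, `evict_oldest` would become a `delete()` statement against a mapped `WaterLevel` class, and inserts from the bulk CSVs could go through `Session.execute(insert(WaterLevel), rows)`.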
The default behaviour, whether to rely solely on the cached results or to update from Groundwater Data, should be configurable. An easy rule of thumb: for queries like water_levels and salinities, an update should be run if the query is made on a new day; others like well_summary should probably be refreshed more like once a month. In any case: add a keyword argument to the query methods: update_cache=False/True/"auto".
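The "auto" rule above could be sketched as a small helper that each query method consults before hitting the network. The per-query freshness windows are assumptions taken from the rule of thumb (daily for water_levels/salinities, roughly monthly for well_summary), and `should_refresh` is a hypothetical name:

```python
from datetime import datetime, timedelta

# Assumed freshness windows per query type (sketch only).
FRESHNESS = {
    "water_levels": timedelta(days=1),
    "salinities": timedelta(days=1),
    "well_summary": timedelta(days=30),
}

def should_refresh(query_name, last_fetched, update_cache="auto", now=None):
    """Decide whether to update from Groundwater Data or rely on the cache.

    update_cache=False -> always use the cache; True -> always refresh;
    "auto" -> refresh only when the cached copy is older than the
    per-query freshness window.
    """
    if update_cache is True:
        return True
    if update_cache is False:
        return False
    if last_fetched is None:
        return True  # nothing cached yet for this query
    now = now or datetime.now()
    window = FRESHNESS.get(query_name, timedelta(days=1))
    return now - last_fetched > window
```

A query method would then call `should_refresh("water_levels", last_fetched, update_cache=update_cache)` and either return cached rows or re-download and upsert.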