Review notebook cacheing and execution packages #3
Comments
>>> from tinydb import TinyDB
>>> from tinydb.storages import JSONStorage
>>> from tinydb.middlewares import CachingMiddleware
>>> db = TinyDB('/path/to/db.json', storage=CachingMiddleware(JSONStorage))
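If I understand the TinyDB docs correctly, CachingMiddleware buffers writes in memory and only flushes them to the JSON file once a write threshold is hit or the database is closed. A rough sketch of how that could look for recording notebook execution results (the path, table name, and fields below are made up for illustration):

```python
from tinydb import TinyDB, Query
from tinydb.storages import JSONStorage
from tinydb.middlewares import CachingMiddleware

# CachingMiddleware wraps the real storage and buffers writes in memory.
db = TinyDB('/path/to/db.json', storage=CachingMiddleware(JSONStorage))

# Hypothetical: record per-notebook execution results keyed by a content hash.
executions = db.table('executions')
executions.insert({'notebook': 'lecture1.ipynb', 'hash': 'abc123', 'runtime_s': 4.2})

# Reads go through the cache; nothing is necessarily on disk yet.
Execution = Query()
print(executions.search(Execution.notebook == 'lecture1.ipynb'))

# Closing the database flushes the cached writes to the JSON file.
db.close()
```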
scrapbook contains (in-memory only) classes to represent a collection of notebooks, Scrapbook, and a single notebook, Notebook. Of note is that these have methods for returning notebook/cell execution metrics (like time taken), which they presumably store during notebook execution. They also provide methods to access 'scraps', which are outputs stored with name identifiers (see ExecutableBookProject/myst_parser#46).
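For reference, the basic scrapbook workflow, as I understand it from the nteract docs (the notebook paths and scrap name below are made up), looks roughly like this:

```python
import scrapbook as sb

# Inside the notebook being executed: 'glue' a named value into the cell output metadata.
sb.glue("accuracy", 0.95)

# Later, from outside: read the executed notebook back and pull out its scraps.
nb = sb.read_notebook("results/lecture1.ipynb")  # hypothetical path
print(nb.scraps["accuracy"].data)

# A whole directory of executed notebooks can be read as a Scrapbook collection.
book = sb.read_notebooks("results/")
print(book.notebooks)
```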
This is a link to the caching currently implemented by @mmcky and @AakashGfude: https://github.com/QuantEcon/sphinxcontrib-jupyter/blob/b5d9b2e77fdc571c4c718e67847020625d096d6d/sphinxcontrib/jupyter/builders/jupyter_code.py#L119
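Without having dug into that code, my mental model of this kind of execution cache is roughly: hash the notebook's code cells and only re-execute when the hash changes. A minimal sketch of that idea (not a description of the linked implementation; the cache directory is a placeholder, and I'm assuming nbclient for execution):

```python
import hashlib
import json
from pathlib import Path

import nbformat
from nbclient import NotebookClient

CACHE_DIR = Path(".notebook_cache")  # hypothetical cache location

def source_hash(nb) -> str:
    """Hash only the code-cell sources, so markdown edits don't trigger re-execution."""
    sources = [cell.source for cell in nb.cells if cell.cell_type == "code"]
    return hashlib.sha256(json.dumps(sources).encode()).hexdigest()

def execute_with_cache(path: str):
    nb = nbformat.read(path, as_version=4)
    key = source_hash(nb)
    cached = CACHE_DIR / f"{key}.ipynb"
    if cached.exists():
        # Cache hit: reuse the previously executed notebook.
        return nbformat.read(str(cached), as_version=4)
    # Cache miss: execute and store the result under the content hash.
    NotebookClient(nb).execute()
    CACHE_DIR.mkdir(exist_ok=True)
    nbformat.write(nb, str(cached))
    return nb
```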
Another thought I had is to look at:
|
I think this is the kinda thing that some more bespoke notebook UIs do. E.g., I believe that Gigantum.IO (a proprietary cloud interface for notebooks) commits notebooks to a git repository on-the-fly, and then gives you the option to go back in history if needed. I don't believe they do any execution caching, just content caching.
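As a rough illustration of that "commit on the fly" idea (just a sketch of the general approach, not how Gigantum actually does it; the repo path and notebook name are placeholders):

```python
import subprocess
from pathlib import Path

def snapshot_notebook(repo_dir: str, notebook: str, message: str) -> None:
    """Commit the current state of a notebook so earlier versions can be recovered from git history."""
    repo = str(Path(repo_dir))
    subprocess.run(["git", "-C", repo, "add", notebook], check=True)
    # `git commit` exits non-zero when there is nothing to commit, so don't treat that as an error.
    subprocess.run(["git", "-C", repo, "commit", "-m", message], check=False)

# Hypothetical usage, e.g. after each notebook execution:
# snapshot_notebook("lectures", "lecture1.ipynb", "auto-snapshot after execution")
```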
Thank you for creating this helpful resource! As I am searching for similar tools myself, here is another pointer (which I still need to explore): dask.cache and cachey.
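For anyone else looking at this: dask's opportunistic cache is backed by cachey, and if I remember the API correctly it can be switched on for a whole session with something like:

```python
import dask.array as da
from dask.cache import Cache  # backed by cachey

# Keep up to ~2 GB of intermediate results; repeated computations can reuse them.
cache = Cache(2e9)
cache.register()  # applies to all subsequent dask computations in this session

x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
y = (x + x.T).mean(axis=0)
y.compute()  # first run populates the cache
y.compute()  # second run can reuse cached intermediate chunks
```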
A place to discover and list other tools that do some form of notebook caching / execution / storage abstraction.