Document storage explanation #25
base: main
Conversation
Limited to Postgres and SQLite, or only demonstrated on these two? Are any capabilities used in the interface unique to these two and not possible with other SQL servers, such as MySQL, MariaDB, or Oracle?
We use the SQLAlchemy library, which abstracts over a variety of SQL dialects. Those two (PG and SQLite) are the only ones we test against. They were chosen because at present they are generally considered the most robust in their respective domains of client-server and embedded relational databases. Other SQL dialects could in principle be supported if they have sufficient support for JSON, particularly indexing on keys in JSON columns.
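To make the JSON-indexing point concrete, here is a minimal SQLAlchemy sketch (the table and column names are hypothetical, not the project's actual schema) of an expression index on a key inside a JSON column, which both PostgreSQL and SQLite can build:

```python
import sqlalchemy as sa

metadata = sa.MetaData()

# Hypothetical table: one row per Bluesky document, stored as JSON.
documents = sa.Table(
    "documents",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("doc", sa.JSON, nullable=False),
)

# Expression index on a key inside the JSON column. SQLAlchemy renders this
# with the dialect's JSON accessors (e.g. json_extract on SQLite).
sa.Index("ix_documents_doc_uid", documents.c.doc["uid"].as_string())

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

# Lookups by a JSON key can then be served by the index above.
stmt = sa.select(documents).where(documents.c.doc["uid"].as_string() == "abc123")
```

Whether another dialect (MySQL, MariaDB, Oracle) can build and use an equivalent index efficiently is essentially the question asked above.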
Great stuff! I can't wait to try this back home.
For the first time in the ten-year history of the Bluesky project, the Bluesky
core developers will soon recommend a change in how data and metadata from
Bluesky documents should be stored.
I'd prefer an absolute time, like "in the first quarter of 2025", over "will soon recommend".
Or maybe refer to the last paragraph of this document.
non-optimal for _batch reads_ and _random access_. These are critical
shortcomings in a data store.

In order to access a portion data from MongoDB as a table or an array, we
Suggested change:
- In order to access a portion data from MongoDB as a table or an array, we
+ In order to access a portion of data from MongoDB as a table or an array, we
effectively take a "transpose" of the Event documents to build a columnar
representation of the data. The implementation is fairly complex, and thus
expensive to debug and maintain. And the operation imposes a performance cost
that becomes noticeable beyond ~100 events.
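For readers unfamiliar with that step, here is a rough sketch (not the actual implementation) of the transpose being described: per-Event rows of readings are rearranged into per-key columns, one document at a time.

```python
from collections import defaultdict

# Toy Event documents; each one carries a single row of readings.
events = [
    {"seq_num": 1, "data": {"motor": 1.0, "det": 10.2}},
    {"seq_num": 2, "data": {"motor": 2.0, "det": 11.7}},
    {"seq_num": 3, "data": {"motor": 3.0, "det": 9.8}},
]

# "Transpose" row-oriented documents into column-oriented arrays.
columns = defaultdict(list)
for event in events:
    for key, value in event["data"].items():
        columns[key].append(value)

# columns == {"motor": [1.0, 2.0, 3.0], "det": [10.2, 11.7, 9.8]}
# Doing this document-by-document is the per-event cost that becomes
# noticeable beyond ~100 events.
```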
A benchmark demonstrating the benefits of the SQL database vs. MongoDB for data retrieval would be nice. I assume it's not too hard to compare the two with an identical dataset written to both backends and fetched from each. External factors need to be considered, though: comparing a local MongoDB against a remote/central PostgreSQL would be unfair due to latency.
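Something along these lines might do for a first comparison (the catalog objects and the `.read()` fetch call are assumptions, not a fixed API), provided both backends are reached over comparable network paths:

```python
import time

def time_fetch(catalog, uid, repeats=10):
    """Time fetching one run's data from a catalog, best of N repeats."""
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        catalog[uid].read()  # assumed "fetch one run as a table" call
        durations.append(time.perf_counter() - start)
    # Best-of-N reduces noise from transient latency spikes.
    return min(durations)

# Usage (hypothetical catalog objects backed by each storage):
# print(time_fetch(mongo_backed_catalog, uid))
# print(time_fetch(sql_backed_catalog, uid))
```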
2. Ingest them into the new storage, just as if they were a live experiment.
3. Stream documents from the new storage and validate semantic fidelity.

Of course, it possible to test this offline and evaluate the performance and
Suggested change:
- Of course, it possible to test this offline and evaluate the performance and
+ Of course, it will be possible to test this offline and evaluate the performance and
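For what it's worth, the offline test of steps 2-3 quoted above could start as a simple round-trip comparison. A hedged sketch, where `old_documents` and `new_storage` are hypothetical stand-ins for the real export and ingest interfaces:

```python
def validate_round_trip(old_documents, new_storage):
    """Ingest exported documents into the new storage, stream them back,
    and check that nothing was semantically altered."""
    originals = list(old_documents())  # (name, doc) pairs exported from the old store

    # Step 2: ingest as if this were a live experiment.
    for name, doc in originals:
        new_storage.insert(name, doc)

    # Step 3: stream back out and compare.
    replayed = list(new_storage.stream())
    assert len(replayed) == len(originals)
    for (name_a, doc_a), (name_b, doc_b) in zip(originals, replayed):
        # "Semantic fidelity": same document types and same contents,
        # even though the on-disk layout is completely different.
        assert name_a == name_b
        assert doc_a == doc_b
```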
This is a rough draft of a statement from "the project" about the vision for moving Bluesky document storage to a layout better optimized for data access.