Feature planning: Distributed sharding #542

Open
jchristgit opened this issue Apr 19, 2024 · 0 comments
@jchristgit
Collaborator

The building blocks for distributed caching are all there, along with an
Mnesia-based implementation that is more like a finished house frame than a
building block.

It is time for the next step: we need to distribute our sharding. As a start,
we should introduce a way to start and stop shards dynamically. One process per
shard seems the most sane and manageable approach, so we will want a function
that brings up a shard, a function that brings down a shard, and a way to see
all shards across all nodes in the system. Potentially, we can use Erlang's
:global or :pg for this.
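As a rough sketch of the :pg-based approach (all module and function names
below are hypothetical and not part of any existing nostrum API), each shard
could run under a DynamicSupervisor and join a process group, so any node can
enumerate running shards:

```elixir
# Hypothetical sketch, assuming :pg.start_link/0 has been called on each node
# (OTP 23+) and that a Shard child module exists.
defmodule ShardSupervisor do
  @group :gateway_shards

  # Bring up one shard as a supervised process and register it cluster-wide.
  def start_shard(shard_id, shard_count) do
    {:ok, pid} =
      DynamicSupervisor.start_child(__MODULE__, {Shard, {shard_id, shard_count}})

    :pg.join(@group, pid)
    {:ok, pid}
  end

  # Bring down a shard and remove it from the group.
  def stop_shard(pid) do
    :pg.leave(@group, pid)
    DynamicSupervisor.terminate_child(__MODULE__, pid)
  end

  # All shard processes across every connected node.
  def all_shards do
    :pg.get_members(@group)
  end
end
```

Unlike :global, :pg does not enforce unique names, so "exactly one process per
shard id" would still need to be guaranteed elsewhere; that trade-off is worth
deciding in a follow-up ticket.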

Once we have functions to bring shards up and down, we need to figure out
whether we can (or should) automatically redistribute shards as nodes join and
leave the cluster. If we decide to implement this, we should use the seq
parameter to resume sessions without losing events. We will need to track the
last-seen seq somewhere.
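One way to track the last-seen seq would be a replicated Mnesia table, in line
with the existing Mnesia-based caching work. A minimal sketch, with hypothetical
module and table names:

```elixir
# Hypothetical sketch: persist per-shard resume state (session id and last
# sequence number) in a RAM-replicated Mnesia table, so whichever node takes
# over a shard can send a RESUME instead of re-identifying.
defmodule ShardState do
  @table :shard_resume_state

  # Create the table replicated across the given nodes.
  def create_table(nodes) do
    :mnesia.create_table(@table,
      attributes: [:shard_id, :session_id, :seq],
      ram_copies: nodes
    )
  end

  # Called for every dispatched gateway event to record the latest seq.
  def record_seq(shard_id, session_id, seq) do
    :mnesia.dirty_write({@table, shard_id, session_id, seq})
  end

  # Read by the node resuming the shard.
  def resume_info(shard_id) do
    case :mnesia.dirty_read(@table, shard_id) do
      [{@table, ^shard_id, session_id, seq}] -> {:ok, session_id, seq}
      [] -> :error
    end
  end
end
```

Dirty writes are used here because seq updates are frequent and per-shard
single-writer; whether that consistency level is acceptable is part of what
this planning issue should settle.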

I presume this issue serves best as a "planning" stage ticket, and we should
create separate tickets for each proposed function. Perhaps we could use a
milestone to track all related issues.
