draft: space state dag #19
base: main
Conversation
@@ -0,0 +1,37 @@

# Abstract
The currently deployed w3up protocol implementation maintains the state of the (user) space in DynamoDB, which is at odds with the notion of a user space, as users can not alter state locally without an active connection to our service.
What data items does the state of a space consist of?
I might be forgetting something, but off the top of my head it is:
- List of shards (CARs / blobs) stored.
- List of uploads (uploaded content DAG roots) stored.
- List of delegations (UCANs delegated to specific DIDs).
- List of subscriptions (which account is billed for the usage in this space).

I think this is most of it, but we would also like to move invocations and receipts on the resource there as well.
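A minimal TypeScript sketch of how those records could be shaped, assuming a flat per-space state object; the type names here are illustrative and not taken from the w3up codebase:

```ts
// Illustrative only: these names are not from the w3up codebase.
type CID = string          // content identifier, e.g. a multihash-based CID
type DID = `did:${string}` // decentralized identifier

interface SpaceState {
  /** CAR / blob shards stored in the space, keyed by multihash. */
  shards: Record<CID, { size: number }>
  /** Uploads: content DAG root → the shards that contain its blocks. */
  uploads: Record<CID, { shards: CID[] }>
  /** UCAN delegations issued to specific DIDs. */
  delegations: Record<CID, { audience: DID; capabilities: string[] }>
  /** Subscriptions: which account is billed for usage of this space. */
  subscriptions: { customer: DID; product: string }[]
}
```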
### Content Replication
Representing the space as a [UnixFS] directory of user content or a key-value store like [Pail] may seem like a no-brainer, but it introduces a major challenge in terms of sync, because:
Does a user always have only one [UnixFS] directory, or can they have multiple?
Does this map onto an S3-like representation? Meaning: does a directory behave like an S3 bucket, with blobs like objects in the bucket?
We don't have this, so it's more of an "if we used a UnixFS dir or Pail, these are what the constraints would be". In other words, the design of the state DAG is something we need to figure out, and then choose the right abstraction for it.
My hypothesis is that we could do it with a flat KV.
Represent entries (blob, upload, etc.) via signed service commitments, e.g. every `blob/add` would lead to a `/blob/${multihash}` key mapped to the receipt issued by the service for that invocation. That would allow a client to list all blobs by iterating receipts and deriving their state; some entries may have receipts with effects that are still pending, and some that have service location commitments.
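A sketch of how a client could derive blob state from such a flat KV, assuming keys of the form `/blob/${multihash}` whose values are service-issued receipts; the `Receipt` shape below is a simplified, hypothetical stand-in for real ucanto receipts:

```ts
// Sketch: derive blob state from a flat key/value store where
// `/blob/${multihash}` keys map to service-issued receipts.
interface Receipt {
  ran: string                        // invocation that was executed, e.g. blob/add
  out: { ok?: unknown; error?: unknown }
  fx: string[]                       // follow-up effects that may still be pending
}

type KV = Map<string, Receipt>

function listBlobs(kv: KV) {
  const blobs: { multihash: string; pending: boolean; ok: boolean }[] = []
  for (const [key, receipt] of kv) {
    if (!key.startsWith('/blob/')) continue
    blobs.push({
      multihash: key.slice('/blob/'.length),
      // entries with outstanding effects are still settling,
      // e.g. waiting on a location commitment from the service
      pending: receipt.fx.length > 0,
      ok: receipt.out.ok !== undefined,
    })
  }
  return blobs
}
```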
ℹ️ It is worth calling out that while the user can create arbitrary key-value pairs, they can not create receipts issued by the service, and therefore can not represent content as available without uploading it and getting a receipt, nor can they go over their storage capacity, as the service will not sign a receipt allowing it. This creates a system where the user is completely in charge of updating state as they please, yet the service is only going to recognize and uphold the stored commitments it issued.
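A sketch of the guarantee described in that note, assuming the service only recognizes entries whose values are receipts it signed itself; the receipt shape and `verifySignature` helper are hypothetical stand-ins for real UCAN receipt verification:

```ts
// Sketch: the space owner can write arbitrary keys, but the service only
// upholds entries backed by receipts it signed itself.
interface SignedReceipt {
  issuer: string      // DID of the party that issued the receipt
  payload: Uint8Array // encoded receipt body
  signature: Uint8Array
}

// Hypothetical: stands in for real UCAN/receipt signature verification.
declare function verifySignature(receipt: SignedReceipt): Promise<boolean>

const SERVICE_DID = 'did:web:web3.storage' // example service DID

async function isUpheldByService(receipt: SignedReceipt): Promise<boolean> {
  // A user-forged entry either names a different issuer or fails verification,
  // so it is simply ignored when deriving the state the service will honor.
  return receipt.issuer === SERVICE_DID && (await verifySignature(receipt))
}
```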
Is the Prolly Tree stored in one location, or is it replicated to multiple locations if the user is replicating storage of their data?
I imagine we will have a replica of the prolly tree and clients will have local replicas that they sync with ours. Obviously others could also host a replica and users could sync with them as well. If we end up running our system across many nodes, we may also start syncing across nodes.
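A rough sketch of that topology, assuming a generic replica interface with a `sync` operation; none of these names correspond to an existing API:

```ts
// Rough sketch: the client keeps a local replica of the prolly tree and syncs
// it against the service replica (or any other replica it trusts).
interface Replica {
  /** Root hash of this replica's prolly tree. */
  root(): Promise<string>
  /** Exchange missing nodes with another replica and converge on a shared root. */
  sync(remote: Replica): Promise<void>
}

async function syncOnce(local: Replica, remotes: Replica[]) {
  for (const remote of remotes) {
    await local.sync(remote)
  }
  return local.root()
}
```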