This was discussed in a team meeting, and while #306 is the best path forward, here are some future points of research/thought/investigation:
In #306, we defined the details and interfaces related to computing and validating relay volume as described in the V1 Utility Spec.
This discussion is not meant to be an extensive deep dive into v0 relay volume computation, but aims to be an extension of this thread in #306. Since the exact implementation is not needed immediately, and alternative solutions require more research, thought, and discussion, this thread is being started in parallel.
V0
For a gentle introduction, see this tweet thread from Mike on the claim & proof lifecycle.
V0 uses Merkle Sum Index Trees to generate a Merkle Tree where the ranges of the leaves are guaranteed not to overlap. The upper bound of each leaf's range is computed from the relay proof, and the root is published on-chain via a claim transaction. Only a single pseudo-randomly selected leaf (and its Merkle proof) is published 2 sessions later to probabilistically guarantee that the advertised volume was indeed handled.
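As a rough illustration of the non-overlap guarantee (not the actual v0 implementation; the leaf layout, hash choice, and helper names here are all hypothetical), each leaf's upper bound can be derived from the relay proof's hash, with sorted leaves tiling the key space end-to-end:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// leaf covers the half-open range [lower, upper); upper is derived
// from the relay proof's hash so distinct relays map to distinct
// points, and sorting makes the ranges tile without overlap.
type leaf struct {
	lower, upper uint64
}

// buildLeaves is a hypothetical sketch: hash each relay proof, use the
// first 8 bytes of the hash as the leaf's upper bound, sort the bounds,
// and set each lower bound to the previous leaf's upper bound.
func buildLeaves(relayProofs [][]byte) []leaf {
	uppers := make([]uint64, 0, len(relayProofs))
	for _, p := range relayProofs {
		h := sha256.Sum256(p)
		uppers = append(uppers, binary.BigEndian.Uint64(h[:8]))
	}
	sort.Slice(uppers, func(i, j int) bool { return uppers[i] < uppers[j] })

	leaves := make([]leaf, len(uppers))
	var prev uint64
	for i, u := range uppers {
		leaves[i] = leaf{lower: prev, upper: u}
		prev = u
	}
	return leaves
}

// nonOverlapping checks the invariant described above: every leaf's
// range starts exactly where the previous one ended.
func nonOverlapping(leaves []leaf) bool {
	for i := 1; i < len(leaves); i++ {
		if leaves[i].lower != leaves[i-1].upper {
			return false
		}
	}
	return true
}

func main() {
	proofs := [][]byte{[]byte("relay-1"), []byte("relay-2"), []byte("relay-3")}
	fmt.Println(nonOverlapping(buildLeaves(proofs)))
}
```

By construction, the validator only needs to check one leaf and its Merkle proof against the committed root; the range structure rules out a servicer double-counting the same relay in two leaves.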
V1 - Current Approach
NOTE: The diagrams below were taken from #306
V0 is space inefficient when it comes to submitting many merkle proofs from different servicers in a block, though it is still a candidate if optimized appropriately.
An alternative solution, assuming every relay is stored in a Servicer's local memory, is to use the following query to probabilistically prove that some number of relays was performed, and to extrapolate the total using an appropriate probability distribution.
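A minimal sketch of that idea, under assumed details not specified in the thread (the HashCollision is interpreted here as a hex suffix of the relay hash, and `relaysThatEndWith`/`estimateVolume` are hypothetical helpers): a random hash ends with a given d-hex-digit suffix with probability 16^-d, so the matched count can be scaled up by 16^d.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// relaysThatEndWith mirrors the RelaysThatEndWith query: return the
// relays whose hex-encoded hash ends with the session-derived suffix
// (the "HashCollision").
func relaysThatEndWith(relays [][]byte, suffix string) [][]byte {
	var matches [][]byte
	for _, r := range relays {
		h := sha256.Sum256(r)
		if strings.HasSuffix(hex.EncodeToString(h[:]), suffix) {
			matches = append(matches, r)
		}
	}
	return matches
}

// estimateVolume extrapolates total relay volume from the match count:
// each relay matches a d-digit hex suffix with probability 16^-d, so
// the expected total is matches * 16^d.
func estimateVolume(matchCount, suffixLen int) int {
	scale := 1
	for i := 0; i < suffixLen; i++ {
		scale *= 16
	}
	return matchCount * scale
}

func main() {
	relays := make([][]byte, 10000)
	for i := range relays {
		relays[i] = []byte(fmt.Sprintf("relay-%d", i))
	}
	// A 1-digit suffix matches roughly 1/16 of relays; scaling back up
	// yields an estimate close to the true count of 10000.
	matches := relaysThatEndWith(relays, "a")
	fmt.Println(estimateVolume(len(matches), 1))
}
```

The longer the suffix, the cheaper the query but the higher the variance of the estimate, which is why the distribution used for extrapolation matters.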
```mermaid
sequenceDiagram
    title Steps 4-6
    autonumber
    actor Service Node
    participant Internal State
    participant Internal Storage
    actor Fisherman
    loop Repeats Every Session
        Service Node->>Internal State: GetSecretKey(sessionHeader)
        Internal State->>Service Node: HashCollision = SecretKey(govParams)
        Service Node->>Internal Storage: RelaysThatEndWith(HashCollision)
        Internal Storage->>Service Node: VolumeApplicableRelays
        Service Node->>Fisherman: Send(VolumeApplicableRelays)
    end
```

V1 - Consideration for an alternative approach
- Deterministic solution instead of a probabilistic solution?
- Fisherman will need to unpack and verify the integrity (e.g. the App's signature) of each relay selected in order to avoid an attack where the hashes are precomputed without relays being performed.

V1 - Alternative Approach
NOTE: The idea below is still a WIP
Founders of Chia Network pioneered Proof of Sequential Work, which was used to develop the Verifiable Delay Functions used by protocols like Solana, Harmony, Chia, etc.
Taking notes from the Solana Whitepaper
```mermaid
sequenceDiagram
    autonumber
    title Relay Volume Counter
    actor C as Client
    actor SN as Service Node
    participant IS as Internal Storage (Service Node)
    actor F as Fisherman
    C->>C: Generate SessionKey<br>(i.e. pseudo-random seed)
    C->>C: Initialize N=0
    C->>SN: SessionKey
    loop Repeat N times - once per relay during session duration
        C->>+SN: SignedRelay
        SN->>SN: Validate & Process Relay
        SN->>SN: countHash=hashFunc(append(relayHash, countHash-1))<br>(using VDF)
        alt every M relays (e.g. 1000)
            SN->>IS: store(countHash)
        end
        SN->>-C: (countHash, SignedRelayResponse)
        C->>C: N += 1<br>update countHash
    end
    C->>F: (SessionKey, countHash, N)
    SN->>F: (SessionKey, [countHashes], M)
    F->>F: assert(M == N)<br>Verify(SessionKey, [countHashes])
```

The pros of this solution are:
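To make the countHash chain concrete, here is a minimal sketch under stated assumptions: SHA-256 stands in for the actual VDF (a real implementation would use a sequential function that cannot be parallelized), and `nextCount`, `runChain`, and `verify` are hypothetical names, not part of any existing codebase.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// nextCount folds one relay into the running counter:
// countHash_i = hashFunc(append(relayHash_i, countHash_{i-1})).
func nextCount(relayHash, prevCount []byte) []byte {
	h := sha256.Sum256(append(relayHash, prevCount...))
	return h[:]
}

// runChain processes the relays starting from the session seed and
// records a checkpoint every m relays, mirroring the diagram's
// "store(countHash)" step.
func runChain(seed []byte, relayHashes [][]byte, m int) (final []byte, checkpoints [][]byte) {
	count := seed
	for i, rh := range relayHashes {
		count = nextCount(rh, count)
		if (i+1)%m == 0 {
			checkpoints = append(checkpoints, count)
		}
	}
	return count, checkpoints
}

// verify replays the chain from the seed and checks that the client's
// final countHash and claimed N agree with the servicer's relays,
// standing in for the Fisherman's assert/Verify step.
func verify(seed []byte, relayHashes [][]byte, claimedFinal []byte, claimedN int) bool {
	final, _ := runChain(seed, relayHashes, len(relayHashes)+1)
	return claimedN == len(relayHashes) && bytes.Equal(final, claimedFinal)
}

func main() {
	seed := []byte("session-key")
	relays := [][]byte{[]byte("r1"), []byte("r2"), []byte("r3"), []byte("r4")}
	final, cps := runChain(seed, relays, 2)
	fmt.Println(len(cps), verify(seed, relays, final, 4))
}
```

Because each step depends on the previous countHash, the chain cannot be precomputed without actually serving the relays in order, which is what makes the count deterministic rather than probabilistic.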