Mapeo eagerly connects to discovered peers on the local network. The `auth`, `config` and `blobIndex` hypercores start replicating automatically, which means that hypercore itself manages the sharing of peer state (e.g. which blocks of each hypercore the peer has downloaded locally). The `data` and `blob` hypercores, however, are not replicated until the user starts sync, but we still want the user to know whether there is data that needs to be synced. We share this info via "preHave" bitfields, which are sent via extension messages on the project creator's `auth` core (which is always connected) when the peer first connects (a minimal sketch of this exchange follows the reproduction steps below). However, if a peer receives more data after it has connected - either by creating data such as an observation, or by syncing with a different peer - we have no message for sharing the updated "preHave". This means that the sync state with a given peer can become "stale": it can appear that there is nothing to sync with a peer when, in fact, there is. I have not tested this, but I imagine it will be possible to reproduce by:
1. Connect two peers from the same project and ensure they are synced.
2. Disconnect the peers (e.g. by turning off wifi or force-closing the app).
3. Turn wifi back on so the peers auto-connect.
4. Go to the sync screen and check that the peers have connected. The sync screen should show that there is nothing to sync.
5. On one of the peers, add some observations.
6. Return to the sync screen on the peer that did not add the observations. It should show that there is something to sync, but I suspect it will still show that there is nothing to sync.
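To make the preHave mechanism above concrete, here is a minimal TypeScript sketch of how a bitfield could be built and exchanged over a hypercore extension on the always-replicating auth core. All names here (`PreHaveMessage`, `'mapeo/pre-have'`, `registerPreHave`) are illustrative assumptions rather than the actual Mapeo implementation; only the shape of the extension handlers follows hypercore 10's `registerExtension`.

```ts
// Minimal structural types so the sketch stands alone without hypercore's own typings.
interface Peer { /* a hypercore replication peer */ }
interface Extension<T> {
  send (message: T, peer: Peer): void
  broadcast (message: T): void
}
interface Core {
  length: number
  has (index: number): Promise<boolean>
  registerExtension<T> (name: string, handlers: {
    encoding: 'json' | 'binary' | 'utf-8'
    onmessage (message: T, peer: Peer): void
  }): Extension<T>
}

interface PreHaveMessage {
  coreId: string      // which data/blob core this bitfield describes (hypothetical field)
  bitfield: number[]  // 1 bit per block, packed into bytes (plain array so it survives JSON encoding)
}

// Walk the core's blocks and record which ones are available locally.
async function buildPreHave (core: Core): Promise<number[]> {
  const bits = new Array<number>(Math.ceil(core.length / 8)).fill(0)
  for (let i = 0; i < core.length; i++) {
    if (await core.has(i)) bits[i >> 3] |= 1 << (i & 7)
  }
  return bits
}

// Register the extension on the project creator's auth core (always replicating),
// so preHaves can be sent on connect and re-sent whenever local state changes.
function registerPreHave (
  authCore: Core,
  onPreHave: (msg: PreHaveMessage, peer: Peer) => void
): Extension<PreHaveMessage> {
  return authCore.registerExtension<PreHaveMessage>('mapeo/pre-have', {
    encoding: 'json',
    onmessage: onPreHave
  })
}
```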
Possible solutions
We could re-send the preHave messages every time a peer's data updates, or we could broadcast range messages, as hypercore does internally. Either way, we would probably need some kind of debouncing/batching to avoid sending messages to connected (but non-syncing) peers every time a block is downloaded to any hypercore.
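As an illustration of the debouncing/batching idea, here is a rough sketch. The class name, the 200 ms window and the `broadcast` callback are all assumptions for the example, not existing Mapeo APIs.

```ts
// Collect per-core updates as blocks are appended or downloaded, and flush them
// to connected (non-syncing) peers at most once per window.
class BatchedHaveBroadcaster {
  private pending = new Set<string>()                   // coreIds with updated local state
  private timer: ReturnType<typeof setTimeout> | null = null

  constructor (
    private broadcast: (coreIds: string[]) => void,     // whatever message type is chosen (preHave or range)
    private delayMs = 200                               // arbitrary illustrative debounce window
  ) {}

  // Call whenever a block is appended locally or downloaded during sync.
  onCoreUpdated (coreId: string): void {
    this.pending.add(coreId)
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.delayMs)
    }
  }

  private flush (): void {
    this.timer = null
    if (this.pending.size === 0) return
    const coreIds = [...this.pending]
    this.pending.clear()
    this.broadcast(coreIds)
  }
}
```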
I think the best solution might be:
1. For local appends, send a range message to connected peers.
2. When sync with a peer completes, re-send the preHaves to connected but non-syncing peers.
We could just always send full preHaves for (1) as well; that is slightly less efficient, but in real usage the difference would be minor.
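A rough sketch of the two triggers proposed above, assuming hypothetical `sendRangeMessage` and `sendPreHaves` helpers (not existing Mapeo APIs): (1) a local append broadcasts a small range message covering just the new blocks; (2) completing sync with one peer re-sends full preHaves to the other connected, non-syncing peers so their stale state is refreshed.

```ts
interface ConnectedPeer {
  id: string
  isSyncing: boolean
}

interface SyncStateNotifier {
  // e.g. a cheap per-append message: "core X now has blocks [start, start + length)"
  sendRangeMessage (peer: ConnectedPeer, coreId: string, start: number, length: number): void
  // e.g. the full bitfield exchange from the sketch further up
  sendPreHaves (peer: ConnectedPeer): void
}

// (1) On local append, tell every connected peer about the newly written range.
function onLocalAppend (
  peers: ConnectedPeer[],
  notifier: SyncStateNotifier,
  coreId: string,
  start: number,
  length: number
): void {
  for (const peer of peers) notifier.sendRangeMessage(peer, coreId, start, length)
}

// (2) When sync with one peer completes, refresh the preHaves of peers that are
// connected but not currently syncing, so their view of our state is no longer stale.
function onSyncComplete (peers: ConnectedPeer[], notifier: SyncStateNotifier): void {
  for (const peer of peers) {
    if (!peer.isSyncing) notifier.sendPreHaves(peer)
  }
}
```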