The current Spark implementation assumes that the on-chain PeerID found in the Filecoin.StateMinerInfo
response is the same as the IPNI index provider PeerID; see https://github.com/CheckerNetwork/FIPs/blob/frc-retrieval-checking-requirements/FRCs/frc-retrieval-checking-requirements.md#link-on-chain-minerid-and-ipni-provider-identity
Curio uses a different PeerID for on-chain state and index provider advertisements. To support Spark, Curio is maintaining the MinerID->IndexProviderPeerID mapping in a smart contract. (There is a single smart contract shared by all miners using this new mechanism.)
We need to improve Spark to support both flavours of MinerID->IndexProviderPeerID lookups.
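For reference, the existing flavour boils down to a single JSON-RPC call. A minimal sketch, assuming a Glif endpoint and fetch-based transport (this is illustrative, not Spark's actual code):

```ts
// Sketch of the existing lookup flavour: reading the on-chain PeerID via
// the Filecoin.StateMinerInfo JSON-RPC method.
const GLIF_RPC_URL = 'https://api.node.glif.io/rpc/v1'

async function getOnChainPeerId (minerId: string): Promise<string | null> {
  const res = await fetch(GLIF_RPC_URL, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'Filecoin.StateMinerInfo',
      params: [minerId, null] // null = current chain head
    })
  })
  if (!res.ok) throw new Error(`RPC request failed: ${res.status}`)
  const { result, error } = await res.json()
  if (error) throw new Error(`StateMinerInfo failed: ${error.message}`)
  // `PeerId` may be null when the miner never set it on chain
  return result.PeerId ?? null
}
```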
Resources:
- Smart contract documentation: https://github.com/filecoin-project/curio/blob/main/market/ipni/spark/sol/README.md
- The final contract version: https://filfox.info/en/address/0x14183aD016Ddc83D638425D6328009aa390339Ce?t=3
- Curio PR implementing the smart contract and the Curio code updating its state: filecoin-project/curio#377 (feat: spark contract)
Places where we need to implement this new mechanism:
- spark-deal-observer: feat: add index provider library (spark-deal-observer#143)
- spark-checker
- spark-spot-check
Open questions:
- How do we want to interact with the smart contract: are we going to use Ethers.js as we do elsewhere, or implement something custom? (A possible Ethers.js approach is sketched after this list.)
- If Ethers.js: how do we obtain the ABI JSON file to initialise the Ethers.js smart contract client?
- Are we okay to share the implementation using the current copy-n-paste approach or do we need to figure out how to share miner-peer-id lookup code first?
- How should we check the two flavours: one after the other (and if so, which one first?) or in parallel? (The sketch after this list demonstrates the parallel option.)
- How can we make the RPC API call querying the smart contract state reasonably cheap in terms of Glif Compute Units spent? (Can we use the same mechanism as we use for Filecoin.StateMinerInfo queries?)
- Should we verify the signature of the mapping entry? I think we should; otherwise, a malicious SP can delegate all Spark retrieval checks to a different SP by posting a mapping from their MinerID to somebody else's PeerID. OTOH, this is not critical and can be moved to a follow-up task.
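A minimal sketch of how the Ethers.js route could look, combined with the parallel-lookup option from the questions above. The ABI fragment, the getPeerData function name and return shape, and the precedence of the contract mapping over the on-chain PeerID are all assumptions; the authoritative ABI and semantics are in the Curio README linked under Resources.

```ts
import { ethers } from 'ethers'

// ASSUMPTION: hypothetical ABI fragment. The authoritative ABI lives in the
// Curio repo (market/ipni/spark/sol); the function name and return shape
// here may not match it exactly.
const sparkAbi = [
  'function getPeerData(uint64 minerID) view returns (string peerID, bytes signature)'
]

const provider = new ethers.JsonRpcProvider('https://api.node.glif.io/rpc/v1')
const sparkContract = new ethers.Contract(
  '0x14183aD016Ddc83D638425D6328009aa390339Ce', // final contract (see Resources)
  sparkAbi,
  provider
)

// Query both flavours in parallel; prefer the smart-contract mapping when an
// entry exists, fall back to the on-chain PeerID otherwise. Whether parallel
// is right (vs. sequential, and in which order) is exactly the open question
// above; this only demonstrates the parallel option.
async function lookupIndexProviderPeerId (minerId: string): Promise<string | null> {
  const minerActorId = BigInt(minerId.slice(2)) // e.g. 'f01234' -> 1234n
  const [contractEntry, onChainPeerId] = await Promise.all([
    sparkContract.getPeerData(minerActorId).catch(() => null),
    getOnChainPeerId(minerId).catch(() => null) // sketch shown earlier
  ])
  if (contractEntry && contractEntry.peerID !== '') {
    // TODO: verify contractEntry.signature before trusting the mapping
    // (see the signature question above)
    return contractEntry.peerID
  }
  return onChainPeerId
}
```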
Tasks:
- Write a design proposal answering the questions above.
- Convert the proposal into a list of implementation (sub)tasks.
- Add the tasks to this list and implement them :)
- Set the AbortSignal timeout to 60_000 ms (see the snippet below).
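For the timeout task, a sketch using the standard AbortSignal.timeout() helper (available in Node.js 17.3+ and modern browsers); the endpoint and request body here are illustrative stand-ins for whatever the checker already builds:

```ts
// ASSUMPTION: illustrative values; Spark constructs its RPC requests elsewhere.
const GLIF_RPC_URL = 'https://api.node.glif.io/rpc/v1'
const rpcRequestBody = JSON.stringify({
  jsonrpc: '2.0',
  id: 1,
  method: 'Filecoin.StateMinerInfo',
  params: ['f01234', null]
})

const response = await fetch(GLIF_RPC_URL, {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: rpcRequestBody,
  signal: AbortSignal.timeout(60_000) // abort the request after 60 seconds
})
```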