[Lake] Issue-481: Adding Payouts to the Data Factory #535
Conversation
…f no timestamp is present
… how transform handles datetime
…end and test_load_filtered.
…eng tests that are enabled are passing.
…_df() to verify that it's working as intended. I believe Kraken data is returning null at the moment.
* Towards #232: Refactoring towards ppss.yaml, part 3/3
* Move everything in model_eng/ to data_eng/
* Fix #352: [SW eng] High DRY violation in test_predictoor_agent.py <> test_predictoor_agent3.py
* Deprecate backend-dev.md (long obsolete), macos.md (obsolete due to VPS), and envvars.md (obsolete because of ppss.yaml)
* Rename BaseConfig to web3_pp.py and make it YAML-based
* Move scripts into util/, incorporate them into the pdr CLI, some refactoring
* Revamp READMEs for the CLI; tighten up text for getting OCEAN & ROSE
* Deprecate ADDRESS_FILE and RPC_URL envvars
* Deprecate Predictoor approach 2: a pain to maintain

Co-authored-by: trizin <[email protected]>
* Update check script CI
* Update cron topup
* Workflow dispatch
* Nevermind, revert previous commit
* Run on push to test
* Pass ppss.web3_pp instead of web3_config
* Don't run on push
…il; get linters to pass
* Add main.py back
* Black
* Linter
* Linter
* Remove "switch back to version v0.1.1"
* Black
How fixed: use the previous pytest-asyncio version. Calina: pytest-asyncio has some known issues, per their changelog — namely issues with fixture handling etc., which I believe cause the warnings and test skips in our runs. They recommend using the previous version until these are fixed. It is also why my setup didn't throw any warnings: my pytest-asyncio version was 21.1. https://pytest-asyncio.readthedocs.io/en/latest/reference/changelog.html
* Fix web3_config.rpc_url in test_send_encrypted_tx
* Add conftest.py for system tests
* Add system test for get_traction_info
* Add system test for get_predictions_info
* Add system test for get_predictoors_info
* Add "PDRS" argument to _ArgParser_ST_END_PQDIR_NETWORK_PPSS_PDRS class
* Fix feed.exchange type conversion in publish_assets.py
* Add print statement for payout completion
* Add system-level test for pdr topup
* Add conditional break for testing via env
* Add conditional break for testing via env
* Black
* Add test for pdr rose payout system
* System-level test: pdr check network
* System-level test: pdr claim OCEAN
* System-level test: pdr trueval agent
* Remove unused patches
* Fix wrong import position in conftest.py
* Remove unused imports
* System-level test for pdr dfbuyer
* System-level tests for pdr trader
* System-level tests for publisher
* Rename publisher test file
* Add conditional break in take_step() method
* Update dftool->pdr names in system tests
* Refactor test_trader_agent_system.py
* Add mock fixtures for SubgraphFeed and PredictoorContract
* Add system tests for predictoor
* Black
* Refactor system test files - linter fixes
* Linter fixes
* Black
* Add missing mock
* Add savefig assertion in test_topup
* Update VPS configuration to use development entry
* Patch verify_feed_dependencies
* Refactor test_predictoor_system.py to use a common test function
* Refactor trader approach tests to improve DRY
* Black
* Indent
* Ditch NETWORK_OVERRIDE
* Black
* Remove unused imports
* Add publisher feeds filtering.
Code Climate has analyzed commit 2ffb490 and detected 3 issues on this pull request. Here's the issue category breakdown:
The test coverage on the diff in this pull request is 98.4% (50% is the threshold). This pull request will bring the total coverage in the repository to 95.2% (0.1% change). View more on Code Climate.
```python
    }
    """
    % (start_ts, end_ts, asset_id)
)
```
I'm not sure why this has to be looped... it should already be a list[str] of addresses that can be embedded into the query.
We should be able to use a single where clause for the subgraph, which may reduce the search/query time (and is easier to read). Example:
That generates the following query for multiple assets
```graphql
query {
  predictPayouts(
    first: 10
    skip: 1
    where: {
      timestamp_gte: 1622547000,
      timestamp_lte: 1622548800,
      prediction_contains: ["0x18f54cc21b7a2fdd011bea06bba7801b280e3151", "0x33334cc21b7a2fdd011bea06bba7801b280e3151"]
    }
  ) {
    id
    timestamp
    payout
    prediction {
      user {
        id
      }
      slot {
        id
        predictContract {
          id
          token {
            name
          }
        }
      }
    }
  }
}
```
as defined in this test example
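The single-query approach can be sketched in Python as a small helper (hypothetical, not the actual pdr_backend code) that interpolates the timestamp range and the address list into one `predictPayouts` query instead of looping one query per address:

```python
import json


def build_payouts_query(start_ts: int, end_ts: int, addresses: list) -> str:
    """Build one predictPayouts query with a single where clause.
    (Hypothetical helper; names and query shape are illustrative only.)
    """
    # json.dumps gives double-quoted strings, matching GraphQL syntax
    addrs = json.dumps([a.lower() for a in addresses])
    return """
query {
  predictPayouts(
    first: 10
    skip: 1
    where: {
      timestamp_gte: %s,
      timestamp_lte: %s,
      prediction_contains: %s
    }
  ) {
    id
    timestamp
    payout
  }
}
""" % (start_ts, end_ts, addrs)


q = build_payouts_query(
    1622547000,
    1622548800,
    ["0x18f54cc21b7a2fdd011bea06bba7801b280e3151"],
)
```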
I took another look at your comment, @idiom-bytes. The `asset_ids` argument is a list of strings representing the IDs of the contracts we want to retrieve. Unfortunately, the `predictPayouts` query's `where` clause does not have a contract filter. We could use a nested query, like the one below, but our subgraph system does not allow it:
```graphql
where: {
  prediction_: {
    slot_: {
      predictContract_in: ["0xfeed1", "0xfeed2"]
    }
  }
}
```
However, the `prediction_contains` argument searches for text within prediction IDs, and it only accepts strings. The structure of a prediction ID is as follows:
{contract address}-{slot}-{user}
Therefore, if the contract address is present in the prediction ID, we can retrieve it.
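Since a prediction ID concatenates `{contract address}-{slot}-{user}`, a contract filter can be emulated with plain string handling. A minimal sketch (hypothetical helper names, not pdr_backend code):

```python
def parse_prediction_id(pred_id: str) -> dict:
    """Split a prediction ID of the form {contract address}-{slot}-{user}."""
    contract, slot, user = pred_id.split("-")
    return {"contract": contract, "slot": int(slot), "user": user}


def matches_contract(pred_id: str, contract_addrs: list) -> bool:
    """True if the prediction ID belongs to one of the given contracts."""
    return parse_prediction_id(pred_id)["contract"] in contract_addrs


pid = (
    "0x18f54cc21b7a2fdd011bea06bba7801b280e3151"
    "-1622547000-"
    "0x33334cc21b7a2fdd011bea06bba7801b280e3151"
)
parsed = parse_prediction_id(pid)
```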
I looked into this and I agree with @kdetry: we can't just pass the list of addresses with the way the subgraph is structured right now.
```python
"token": payout["prediction"]["slot"]["predictContract"]["token"][
    "name"
],
"slot": int(payout["id"].split("-")[1]),
```
Please make sure that we're retrieving all required data to generate the insights we want. The Payout event is where a lot of the stake/revenue/prediction values are provided. Example:
- predictedValue
- prediction.slot.revenue
- prediction.stake
- prediction.slot.roundSumStakesUp
- prediction.slot.roundSumStakes
So, we need to fetch this additional data so that we can update our local records. We can then use the data from the payout event to update our lake's pdr_predictions and pdr_slots tables.
```graphql
query {
  predictPayouts(...) {
    prediction {
      slot {
        id
        predictContract {
          id
        }
        slot
        status
        revenue
        roundSumStakesUp
        roundSumStakes
      }
      user {
        id
      }
      stake
    }
    payout
    predictedValue
    trueValue
    timestamp
  }
}
```
I have tried to put some of this in the `lake.html` file, but that's not an exhaustive set of requirements. All data relating to `stake/revenue/prediction` outcome should be retrieved.
To better understand what each contract event provides, you can check the contract for the event, or the subgraph event handler to see what data the subgraph updates/yields as a result of the contract event.
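Assuming a response shaped like the query above, flattening one payout record into a row for the local tables could look like this (a sketch only; field names mirror the query, while the function name and row layout are hypothetical):

```python
def payout_to_row(payout: dict) -> dict:
    """Flatten one predictPayouts record, keeping the stake/revenue/
    prediction fields needed to update local records."""
    pred = payout["prediction"]
    slot = pred["slot"]
    return {
        "ID": payout["id"],
        "timestamp": int(payout["timestamp"]),
        "payout": float(payout["payout"]),
        "predictedValue": payout["predictedValue"],
        "trueValue": payout["trueValue"],
        "stake": float(pred["stake"]),
        "user": pred["user"]["id"],
        "contract": slot["predictContract"]["id"],
        "slot": int(slot["slot"]),
        "revenue": float(slot["revenue"]),
        "roundSumStakesUp": float(slot["roundSumStakesUp"]),
        "roundSumStakes": float(slot["roundSumStakes"]),
    }


# Illustrative record only, mimicking the subgraph response shape
sample = {
    "id": "0xfeed1-1622547000-0xuser1",
    "timestamp": "1622547000",
    "payout": "1.5",
    "predictedValue": True,
    "trueValue": True,
    "prediction": {
        "stake": "10.0",
        "user": {"id": "0xuser1"},
        "slot": {
            "predictContract": {"id": "0xfeed1"},
            "slot": "1622547000",
            "revenue": "0.9",
            "roundSumStakesUp": "12.0",
            "roundSumStakes": "20.0",
        },
    },
}
row = payout_to_row(sample)
```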
#447
I followed the BRONZE_FOR_PAYOUTS table on the epic, so I didn't add these values. I will add them ASAP.
pdr_backend/lake/gql_data_factory.py
```python
@@ -43,6 +47,10 @@ def __init__(self, ppss: PPSS):
        )
        contract_list = [f.lower() for f in contract_list]

        # For debugging
        # t_contract_list = [f.lower() for f in contract_list]
        # contract_list = [t_contract_list[0], t_contract_list[1]]
```
Do we need to keep these commented lines?
I'm closing this PR; it is replaced with the following one:
Fixes #481
Changes proposed in this PR: