
Feat/lp v2 use gateway queue #1943

Merged
cdamian merged 24 commits into main from feat/lp-v2-use-gateway-queue on Aug 5, 2024

Conversation

cdamian (Contributor) commented Aug 1, 2024

Description

Use the new Gateway Queue pallet in LP and LP gateway.

Changes and Descriptions

  • Use pallet-liquidity-pools-gateway-queue for queueing inbound and outbound messages in the LP gateway.
  • Add new handler traits for inbound and outbound messages that are implemented by LP and the LP gateway, as follows (a rough sketch of the traits is shown after the list):

  • MessageQueue trait is implemented by the LP gateway queue and used by the LP gateway to enqueue both inbound and outbound messages.
  • MessageProcessor trait is implemented by the LP gateway and used by the LP gateway queue when processing queued messages.
  • OutboundMessageHandler trait is implemented by the LP gateway and used by LP when sending an outbound message.
  • InboundMessageHandler trait is implemented by LP and used by the LP gateway when processing inbound messages.
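
For orientation, here is a rough Rust sketch of how these four traits could relate to each other. The associated types and exact signatures are illustrative assumptions, not the definitions from this PR.

use frame_support::weights::Weight;
use sp_runtime::DispatchResult;

// Hypothetical shapes of the traits described above.
pub trait MessageQueue {
	type Message;

	// Implemented by the LP gateway queue; called by the LP gateway to
	// enqueue both inbound and outbound messages.
	fn queue(msg: Self::Message) -> DispatchResult;
}

pub trait MessageProcessor {
	type Message;

	// Implemented by the LP gateway; called by the LP gateway queue when a
	// queued message is processed. Also reports the weight consumed.
	fn process(msg: Self::Message) -> (DispatchResult, Weight);
}

pub trait OutboundMessageHandler {
	type Sender;
	type Destination;
	type Message;

	// Implemented by the LP gateway; called by LP when it sends an outbound
	// message to another domain.
	fn handle(sender: Self::Sender, destination: Self::Destination, msg: Self::Message) -> DispatchResult;
}

pub trait InboundMessageHandler {
	type Sender;
	type Message;

	// Implemented by LP; called by the LP gateway when an inbound message is
	// processed.
	fn handle(sender: Self::Sender, msg: Self::Message) -> DispatchResult;
}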

Checklist:

  • I have added Rust doc comments to structs, enums, traits and functions
  • I have made corresponding changes to the documentation
  • I have performed a self-review of my code
  • I have added tests that prove my fix is effective or that my feature works
  • Address the benchmark weight suggestion here
  • Skip the LP gateway queue in as many integration tests as possible.
  • Replace the fudge runtime with the normal runtime in the integration tests, where possible.

cdamian marked this pull request as ready for review August 1, 2024 15:24

lemunozm (Contributor) commented Aug 1, 2024

For historical context, the previous PR with an earlier view of this change can be found at #1937.

@@ -243,32 +204,6 @@ pub mod pallet {
pub type RelayerList<T: Config> =
StorageDoubleMap<_, Blake2_128Concat, Domain, Blake2_128Concat, DomainAddress, ()>;

#[pallet::storage]

Contributor Author (cdamian)

Removed these.

Contributor

Note: Then we need to write a migration to kill those storages. Can be done in a later PR.

Contributor Author (cdamian)

OK, let's do that separately then.

Contributor

Just curious: if the storage is totally empty, do we need to remove it then? I think only OutboundMessageNonceStore needs to be removed.
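
For reference, a storage-kill migration of the kind mentioned above could look roughly like the following sketch. The pallet and storage names (LiquidityPoolsGateway, OutboundMessageNonceStore) and the struct name are illustrative assumptions, not code from this PR.

use core::marker::PhantomData;
use frame_support::{
	traits::{Get, OnRuntimeUpgrade},
	weights::Weight,
};

pub struct KillRemovedGatewayStorage<T>(PhantomData<T>);

impl<T: frame_system::Config> OnRuntimeUpgrade for KillRemovedGatewayStorage<T> {
	fn on_runtime_upgrade() -> Weight {
		// Clear every key under the removed storage item; `None` for the
		// limit and cursor is fine for a small (or empty) storage.
		let _ = frame_support::migration::clear_storage_prefix(
			b"LiquidityPoolsGateway",
			b"OutboundMessageNonceStore",
			b"",
			None,
			None,
		);
		T::DbWeight::get().reads_writes(1, 1)
	}
}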

cdamian requested a review from wischli August 1, 2024 19:42
cdamian force-pushed the feat/lp-v2-use-gateway-queue branch from 9ac3c61 to 967e86c on August 1, 2024 19:43

codecov bot commented Aug 1, 2024

Codecov Report

Attention: Patch coverage is 60.86957% with 36 lines in your changes missing coverage. Please review.

Project coverage is 47.70%. Comparing base (7070f0f) to head (e164791).

Files Patch % Lines
pallets/liquidity-pools-gateway/src/lib.rs 65.00% 14 Missing ⚠️
pallets/liquidity-pools/src/hooks.rs 0.00% 9 Missing ⚠️
pallets/liquidity-pools-gateway/queue/src/lib.rs 53.33% 7 Missing ⚠️
pallets/liquidity-pools-gateway/src/message.rs 0.00% 3 Missing ⚠️
libs/types/src/domain_address.rs 0.00% 2 Missing ⚠️
pallets/liquidity-pools/src/lib.rs 66.66% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1943      +/-   ##
==========================================
- Coverage   47.84%   47.70%   -0.14%     
==========================================
  Files         184      184              
  Lines       13038    12969      -69     
==========================================
- Hits         6238     6187      -51     
+ Misses       6800     6782      -18     


lemunozm (Contributor) left a comment

I really think this is a super great improvement in our database in several aspects. Love it! Thanks for your work on this @cdamian

I left some minor NITs and one major point related to the weights, which IMO should be fixed before merging.

libs/types/src/domain_address.rs (review thread resolved)
@@ -28,6 +28,7 @@ mod benchmarks {
	#[benchmark]
	fn process_message() -> Result<(), BenchmarkError> {
		let caller: T::AccountId = account("acc_0", 0, 0);
		// TODO(cdamian): Add a way of retrieving the "heaviest" message.
Contributor

Reminder noted, but I would leave it as it is in this PR. I need to figure out how this fits with the batch solution; it will need further thought.

Contributor

BTW, being defensive means that we can avoid these benchmarks I think. So there is no longer an issue here. We can even remove the benchmarks & weights from this 🤔

Contributor Author (cdamian)

I'm OK with this, @wischli?

Comment on lines +540 to 543

match T::InboundMessageHandler::handle(domain_address, message) {
	Ok(_) => (Ok(()), weight),
	Err(e) => (Err(e), weight),
}
lemunozm (Contributor) commented Aug 2, 2024

The computation of T::InboundMessageHandler::handle(domain_address, message) is not counted in the weight, and it should be. Note that the inner computation of T::InboundMessageHandler can be a lot, i.e. if the message is something investment-related it can imply changes to foreign investments, opening swap orders, investments, etc.

Nevertheless, there is no easy solution here. I would manually add an extra weight; I would say 200_000_000 and 4096 of PoV can be enough for all cases for now. Interested in your thoughts @wischli
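
As a sketch of that suggestion (the wrapper function and its name are hypothetical, not code from this PR), the defensive estimate could be folded into the weight returned alongside the handler result:

use frame_support::weights::Weight;
use sp_runtime::DispatchResult;

// Hypothetical helper: run the (unbenchmarked) inbound handler and report
// its result together with a weight that adds a flat defensive estimate of
// 200_000_000 ref time and 4096 bytes of PoV on top of the base weight.
fn handle_with_defensive_weight<F>(handle: F, base: Weight) -> (DispatchResult, Weight)
where
	F: FnOnce() -> DispatchResult,
{
	let weight = base.saturating_add(Weight::from_parts(200_000_000, 4096));
	match handle() {
		Ok(()) => (Ok(()), weight),
		Err(e) => (Err(e), weight),
	}
}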

Contributor Author (cdamian)

I would add that extra weight to the 2 extrinsics of the LP gateway queue that can end up processing this message when calling MessageProcessor::process. The only other place where this is called is on_idle but in that case we use the weight that's returned.

Contributor Author (cdamian)

Done.

lemunozm (Contributor) commented Aug 2, 2024

But we're still skipping the weight in on_idle, right?

on_idle uses the weight from T::MessageProcessor::process, which only returns Weight::from_parts(0, T::Message::max_encoded_len() as u64). We're not taking the processed message's weight into account. I think we should add the above weights to process_inbound_message as well, so that on_idle accounts for all weights correctly.
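
A minimal sketch of that combination (the helper name is an assumption; the values are the ones discussed in this thread):

use frame_support::weights::Weight;
use parity_scale_codec::MaxEncodedLen;

// Hypothetical helper: the weight reported back to on_idle for processing
// one inbound message of type M, combining the PoV estimate derived from
// the message's maximum encoded length with the defensive execution
// estimate from above.
fn inbound_processing_weight<M: MaxEncodedLen>() -> Weight {
	Weight::from_parts(0, M::max_encoded_len() as u64)
		.saturating_add(Weight::from_parts(200_000_000, 4096))
}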

Contributor Author (cdamian)

Oh, right, for inbound messages we only return the MaxEncodedLen for the message, I will add the defensive weight in there as well.

Contributor

@wischli, this is the thread about the weights

lemunozm (Contributor) commented Aug 2, 2024

The weight is something we need to discuss; I thought 5_000_000_000 was excessively high for what, in my mind, is the worst case, although 200_000_000 is probably pretty low. Choosing 5_000_000_000 is fine for me, though; I'm just a bit worried about filling the block quickly with batches.

Contributor

@lemunozm IMO being defensive is desired and not an issue for now, since we don't expect LP messages to explode in cardinality for the time being. As long as we don't have a somewhat more realistic overestimate via benchmarks, this value seems feasible to me, especially since it has not caused trouble so far. 5_000_000_000 computational weight equals 1% of the maximum we allow. It does not feel too defensive to me that we can easily fit processing of 50+ messages into, at least, the majority of our blocks. WDYT @lemunozm?
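
For context on the 1% figure, here is a quick back-of-the-envelope check. It assumes the usual parachain budget of 0.5 s of ref time per block; that block limit is an assumption about this runtime, not a number taken from the PR.

// Assumed per-block execution budget of a parachain: 0.5 s of ref time.
const BLOCK_REF_TIME: u64 = 500_000_000_000;
// Defensive per-message estimate discussed in this thread.
const MESSAGE_REF_TIME: u64 = 5_000_000_000;

fn main() {
	// 5_000_000_000 / 500_000_000_000 = 0.01, i.e. ~1% of a block's
	// compute, which is consistent with fitting 50+ messages alongside
	// the rest of the block's content.
	println!("{}", MESSAGE_REF_TIME as f64 / BLOCK_REF_TIME as f64);
}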

Contributor

Though knowing that we could fit 1000-1500 transfers into a block if batched (bottleneck is signature verification if submitted in isolation), I suppose we could lower it by half. Nevertheless, I cast my vote for the safer path.

Contributor

What you say above makes sense! Let's be super defensive for now 👍🏻

lemunozm previously approved these changes Aug 2, 2024
lemunozm (Contributor) left a comment

Cool! Everything LSGTM (looks super great to me!!)

wischli (Contributor) left a comment

Beautiful and clean PR - great job!

Just one tiny thing bothers me: the new hardcoded weight Weight::from_parts(200_000_000, 4096) seems opaque, and IMO we should at least declare it as a const because it is used quite often.
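
A minimal sketch of that suggestion, assuming a constant name (the name and doc comment are illustrative, not from this PR):

use frame_support::weights::Weight;

/// Defensive weight added for processing a single LP message, covering the
/// unbenchmarked handler work: 200_000_000 ref time and 4096 bytes of
/// proof size.
pub const DEFENSIVE_MESSAGE_WEIGHT: Weight = Weight::from_parts(200_000_000, 4096);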

pallets/liquidity-pools-gateway/queue/src/lib.rs (outdated review thread, resolved)
pallets/liquidity-pools-gateway/queue/src/mock.rs (outdated review thread, resolved)

pallets/liquidity-pools-gateway/src/lib.rs (outdated review thread, resolved)
runtime/integration-tests/src/cases/lp/utils.rs (outdated review thread, resolved)
cdamian force-pushed the feat/lp-v2-use-gateway-queue branch from 29e36c3 to 8e399a6 on August 2, 2024 17:48
lemunozm (Contributor) left a comment

Awesome stuff! Love it!

wischli (Contributor) left a comment

LGTM! Let's merge this beauty

cdamian merged commit fc2b451 into main on Aug 5, 2024
12 of 14 checks passed
cdamian deleted the feat/lp-v2-use-gateway-queue branch on August 5, 2024 08:47