Feat/lp v2 use gateway queue #1943
Conversation
To track the history, the previous PR with an earlier view of this change can be found at #1937.
@@ -243,32 +204,6 @@ pub mod pallet {
	pub type RelayerList<T: Config> =
		StorageDoubleMap<_, Blake2_128Concat, Domain, Blake2_128Concat, DomainAddress, ()>;

	#[pallet::storage]
Removed these.
Note: Then we need to write a migration to kill those storages. Can be done in a later PR.
OK, let's do that separately then.
Just curious: if the storage is totally empty, do we need to remove it? I think only OutboundMessageNonceStore needs to be removed.
…d specific inbound tests
Force-pushed from 9ac3c61 to 967e86c
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1943      +/-   ##
==========================================
- Coverage   47.84%   47.70%   -0.14%
==========================================
  Files         184      184
  Lines       13038    12969      -69
==========================================
- Hits         6238     6187      -51
+ Misses       6800     6782      -18

☔ View full report in Codecov by Sentry.
I really think this is a super great improvement to our codebase in several aspects. Love it! Thanks for your work on this @cdamian.
I left some minor NIT comments and one major one related to the weights, which IMO should be fixed before merging.
@@ -28,6 +28,7 @@ mod benchmarks {
	#[benchmark]
	fn process_message() -> Result<(), BenchmarkError> {
		let caller: T::AccountId = account("acc_0", 0, 0);
		// TODO(cdamian): Add a way of retrieving the "heaviest" message.
This is a reminder, but I would leave it as it is in this PR. I need to figure out how to fit this with the batch solution; it will need another thought.
BTW, being defensive means that we can avoid these benchmarks, I think, so there is no longer an issue here. We could even remove the benchmarks & weights from this 🤔
I'm OK with this, @wischli?
match T::InboundMessageHandler::handle(domain_address, message) {
	Ok(_) => (Ok(()), weight),
	Err(e) => (Err(e), weight),
}
The computation of T::InboundMessageHandler::handle(domain_address, message) is not counted in the weight, and it should be. Note that the inner computation of T::InboundMessageHandler can be a lot, i.e. if the message is something investment-related, it can imply changes in foreign investments, opening swap orders, investments, etc. Nevertheless, there is no easy solution here. I would manually add an extra weight; I would say 200_000_000 of ref time and 4096 of PoV can be enough for all cases for now. Interested in your thoughts @wischli.
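A minimal sketch of that suggestion, assuming Substrate's frame_support weight API; benchmarked_weight is a hypothetical stand-in for the measured part, and the numbers are the proposed overestimate, not final values:

```rust
use frame_support::weights::Weight;

// Hypothetical benchmarked weight of the surrounding code path.
fn benchmarked_weight() -> Weight {
	Weight::from_parts(10_000_000, 1_024)
}

// Weight actually charged: the benchmarked part plus a flat overestimate
// (200_000_000 ref time, 4096 bytes of PoV) covering the unbenchmarked
// `T::InboundMessageHandler::handle` call.
fn charged_weight() -> Weight {
	benchmarked_weight().saturating_add(Weight::from_parts(200_000_000, 4096))
}
```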
I would add that extra weight to the 2 extrinsics of the LP gateway queue that can end up processing this message when calling MessageProcessor::process. The only other place where this is called is on_idle, but in that case we use the weight that's returned.
Done.
But we're still skipping the weight on on_idle, right? on_idle uses the weight from T::MessageProcessor::process, which only returns Weight::from_parts(0, T::Message::max_encoded_len() as u64); we're not taking the processed message weight into account. I think we should add the above weights to process_inbound_message as well, so that on_idle computes all weights correctly.
Oh, right, for inbound messages we only return the MaxEncodedLen of the message; I will add the defensive weight in there as well.
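A sketch of that fix, assuming the expression quoted above is what process currently returns for inbound messages; Message stands in for T::Message, and the function name is illustrative:

```rust
use frame_support::weights::Weight;
use parity_scale_codec::MaxEncodedLen;

// Weight reported for processing one inbound message: the encoded-length
// PoV that was already returned, plus the flat defensive processing weight.
fn process_inbound_message_weight<Message: MaxEncodedLen>() -> Weight {
	Weight::from_parts(0, Message::max_encoded_len() as u64)
		.saturating_add(Weight::from_parts(200_000_000, 4096))
}
```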
@wischli, this is the thread about the weights.
The weight is something we need to discuss; I thought 5_000_000_000 was excessively high for what, in my mind, is the worst case, although probably 200_000_000 is pretty low. But choosing 5_000_000_000 is fine for me. I'm just a bit worried about quickly filling the block with batches.
@lemunozm IMO being defensive is desired and not an issue for now, since we don't expect LP messages to explode in cardinality for the time being. As long as we don't have a more realistic overestimate via benchmarks, this value seems feasible to me, especially since it has not caused trouble so far. 5_000_000_000 of computational weight equals 1% of the maximum we allow. It does not feel too defensive to me that we can easily fit processing 50+ messages into, at least, the majority of our blocks. WDYT @lemunozm?
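As a rough sanity check of the 1% figure, the arithmetic works out under the assumption of a maximum block ref time of 500_000_000_000 (0.5 s of compute, a common parachain setting; the actual runtime constant may differ):

```rust
fn main() {
	// Assumed maximum block ref time: 0.5 s of compute.
	const MAX_BLOCK_REF_TIME: u64 = 500_000_000_000;
	// Defensive ref time per processed message, as discussed above.
	const PER_MESSAGE_REF_TIME: u64 = 5_000_000_000;

	// 5_000_000_000 / 500_000_000_000 = 1% of a block per message, so
	// ~50 messages consume about half of a block's compute budget.
	let percent = 100.0 * PER_MESSAGE_REF_TIME as f64 / MAX_BLOCK_REF_TIME as f64;
	println!("{percent}% of the block per message");
}
```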
Though knowing that we could fit 1000-1500 transfers into a block if batched (bottleneck is signature verification if submitted in isolation), I suppose we could lower it by half. Nevertheless, I cast my vote for the safer path.
What you say above makes sense! Let's be super defensive for now 👍🏻
Cool! Everything LSGTM (Looks Super Great To Me!!)
Beautiful and clean PR - great job!
Just one tiny thing bothers me: the new hardcoded weight Weight::from_parts(200_000_000, 4096) seems opaque, and IMO we should at least declare it as a const, because it is used quite often.
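A sketch of the suggested refactor; the constant name is illustrative, not necessarily what ends up in the code:

```rust
use frame_support::weights::Weight;

/// Flat, defensive overestimate for processing a single LP message:
/// 200_000_000 ref time and 4096 bytes of PoV.
pub const DEFENSIVE_WEIGHT: Weight = Weight::from_parts(200_000_000, 4096);
```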
Force-pushed from 29e36c3 to 8e399a6
Awesome stuff! Love it!
LGTM! Let's merge this beauty
Description

Use the new Gateway Queue pallet in LP and the LP gateway.

Changes and Descriptions

- Add pallet-liquidity-pools-gateway-queue for queueing inbound and outbound messages in the LP gateway.
- Wire up the message traits between LP and the LP gateway, as follows (see the sketch after this list):
  - The MessageQueue trait is implemented by the LP gateway queue and used by the LP gateway to enqueue both inbound and outbound messages.
  - The MessageProcessor trait is implemented by the LP gateway and used by the LP gateway queue when processing queued messages.
  - The OutboundMessageHandler trait is implemented by the LP gateway and used by LP when sending an outbound message.
  - The InboundMessageHandler trait is implemented by LP and used by the LP gateway when processing inbound messages.
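A condensed sketch of that wiring; the signatures are simplified stand-ins for the real pallet traits, not the exact API:

```rust
// Simplified stand-ins; message types, origins, and error types are elided.
trait MessageQueue {
	type Message;
	/// Implemented by the LP gateway queue; called by the LP gateway to
	/// enqueue both inbound and outbound messages.
	fn queue(msg: Self::Message) -> Result<(), ()>;
}

trait MessageProcessor {
	type Message;
	/// Implemented by the LP gateway; called by the LP gateway queue when
	/// a queued message is processed.
	fn process(msg: Self::Message) -> Result<(), ()>;
}

trait OutboundMessageHandler {
	type Message;
	/// Implemented by the LP gateway; called by LP when sending an
	/// outbound message.
	fn handle(msg: Self::Message) -> Result<(), ()>;
}

trait InboundMessageHandler {
	type Message;
	/// Implemented by LP; called by the LP gateway when processing
	/// inbound messages.
	fn handle(msg: Self::Message) -> Result<(), ()>;
}
```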
Checklist: