LPv2: Batch message logic (r2) #1949
Conversation
```rust
// TODO: do we really need to return the weights in send() if later we use the
// defensive ones?
let (result, router_weight) = match router.send(sender, serialized) {
    Ok(dispatch_info) => (Ok(()), dispatch_info.actual_weight),
    Err(e) => (Err(e.error), e.post_info.actual_weight),
};
```
Below, we're adding the defensive weights
I would only return defensive weights when there's no way of finding the weight for an operation. In this case, the router does have some weight and I think we should use that, maybe alongside the defensive ones if we really wanna be extra-defensive?
Mmmm... I see. The defensive one seems to already shadow anything returned by the router, so I would either use only the router one or only the defensive one.
But it's not a super strong opinion.
I would also use either the "real" or the defensive weight. If the weights of `send` can be assumed to be representative, then let's go with that. Else, ignore it and fall back to the defensive one for now.
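For illustration, the fallback being discussed boils down to a single expression. A minimal sketch, with weights modeled as plain `u64` ref-time values instead of the runtime `Weight` type; the constant's value here is a placeholder, not the real one:

```rust
// Illustrative placeholder, not the pallet's actual defensive weight.
const LP_DEFENSIVE_WEIGHT_REF_TIME: u64 = 5_000_000_000;

// Use the router's reported weight when it exists; otherwise fall back
// to the defensive constant, as discussed above.
fn effective_weight(router_weight: Option<u64>) -> u64 {
    router_weight.unwrap_or(LP_DEFENSIVE_WEIGHT_REF_TIME)
}
```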
```rust
fn start_pack_messages() -> Weight {
    // TODO: BENCHMARK CORRECTLY
    //
    // NOTE: Reasonable weight taken from `PoolSystem::set_max_reserve`.
    // This one has one read and one write for sure, and possibly one
    // read for `AdminOrigin`.
    Weight::from_parts(30_117_000, 5991)
        .saturating_add(RocksDbWeight::get().reads(1))
        .saturating_add(RocksDbWeight::get().writes(1))
}
```
TechDebt: We should factor out the weights in this file and use some auxiliary methods instead (similar to what William has done in his PR about freezing).
Yup, we have a todo to use real weights for most of the gateway stuff.
Still missing UTs, will add them later, but the PR is ready to be reviewed.
```diff
 #[pallet::call_index(5)]
-pub fn process_msg(
+pub fn receive_message(
```
NOTE: It's true that we can now receive batches, but after @cdamian's changes in the queue, this method will no longer process messages, just queue them, which means it will only take a few more bytes of PoV. The current defensive weights are enough for that slight increment.
```rust
router_weight
    .unwrap_or(Weight::from_parts(LP_DEFENSIVE_WEIGHT_REF_TIME, 0))
    .saturating_add(read_weight)
    .saturating_add(Weight::from_parts(0, serialized_len)),
```
NOTE: we need the serialized len here instead of `max_decoded_len()` because the size used internally by the router is the size of the vector. The router doesn't know about the `Message` type.
```diff
@@ -1188,31 +1094,27 @@ mod message_processor_impl {
     assert_eq!(mock_sender, expected_sender);
     assert_eq!(mock_message, expected_message.serialize());

-    Err(DispatchErrorWithPostInfo {
+    Err(dbg!(DispatchErrorWithPostInfo {
```
Was this a one-time change that you used when debugging or should we sprinkle it around some more? :D
Good catch! It was a one-time change 😅
```rust
#[pallet::weight(T::WeightInfo::start_pack_messages())]
#[pallet::call_index(9)]
pub fn start_pack_messages(origin: OriginFor<T>, destination: Domain) -> DispatchResult {
    let sender = ensure_signed(origin)?;
```
I have a couple of questions to make sure I understand the expected flow.
- Who is expected to call these packing extrinsics? From what I can tell, in the `OutboundMessageHandler` impl below, it would be a `sender` coming from LP. An LP `sender` can be some address that comes from EVM, right? If so, how does that address manage to call these extrinsics?
- How do we handle cases where the supposed caller errors out between start and end?
- Should we maybe consider adding a way to manually end a batch for a particular (sender, destination) pair via root?
Very good questions!

> Who is expected to call these packing extrinsics?

The same as whoever calls the extrinsics from `pallet-liquidity-pools`. It does not need to be an address from EVM, just a normal account.

> How do we handle cases where the supposed caller errors out between start and end?

If some call between start and end fails, the user notices it. Note that each call in between is not yet processed by `on_idle()`, so it fails right at that moment.

As a summary of how all of this works:

```rust
Gateway::start_pack_messages(ALICE)
LiquidityPools::add_pool(ALICE)
LiquidityPools::add_tranche(ALICE) // -> If it fails, ALICE will know and can retry with the correct params before calling end
Gateway::end_pack_messages(ALICE)  // -> Now the batch message is in the queue, to be sent from `on_idle()`
```

The above can be done over several blocks, but you can also batch all of them. If one fails, nothing is applied.
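A toy model of the all-or-nothing behavior described above when everything is submitted in one atomic batch. This is not pallet code; names and the snapshot/rollback mechanics are purely illustrative:

```rust
// Apply calls one by one; if any call fails, restore the snapshot so
// nothing is applied -- the atomic batching behavior described above.
fn run_batch(
    state: &mut Vec<&'static str>,
    calls: &[(&'static str, bool)], // (call name, does it succeed?)
) -> Result<(), &'static str> {
    let snapshot = state.clone();
    for &(name, ok) in calls {
        if ok {
            state.push(name);
        } else {
            *state = snapshot; // rollback: nothing is applied
            return Err(name);
        }
    }
    Ok(())
}
```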
OK, thank you for the explanation.
Awesome explanation, thanks for that @lemunozm !

IMO, in general we can assume that `{start, end}_pack_messages` will only be called locally from a Substrate account, because EVM accounts would call it on their EVM chain of choice such that we receive the LP batch message.

I'm OK with this. If we merge this, I will rebase my multi-router branch, then you can continue with the rest of the stuff here.
@cdamian you will have quite a few conflicts, but I wanted to merge this ASAP to avoid even more 😅 Tell me if I can help you with the rebase 🙏🏻, or with some tricky merge of some part.
```rust
#[cfg(any(test, feature = "std"))]
pub mod test_util {
```
Moved to `gateway/mock.rs`.
```rust
pub enum Message {
    #[default]
    Simple,
    Pack(Vec<Message>),
}
```
We should extend this with the `Proof` variant from your PR @cdamian.
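To make the pack/unpack idea concrete, here is a self-contained sketch of how such a mock `Message` could unpack. Only the `Simple` and `Pack` variants come from the snippet above; the `unpack` signature is an assumption for illustration:

```rust
#[derive(Clone, Debug, PartialEq)]
enum Message {
    Simple,
    Pack(Vec<Message>),
}

impl Message {
    // A batch unpacks into its submessages; any other message
    // unpacks into a one-element list containing itself.
    fn unpack(&self) -> Vec<Message> {
        match self {
            Message::Pack(inner) => inner.clone(),
            other => vec![other.clone()],
        }
    }
}
```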
```rust
let mut count = 0;
for submessage in message.unpack() {
    count += 1;

    if let Err(e) = T::InboundMessageHandler::handle(domain_address.clone(), submessage) {
        return (Err(e), weight.saturating_mul(count));
    }
}

(Ok(()), weight.saturating_mul(count))
```
TODO: Debt to myself for the next PR. I think this could be an issue if we exceed the maximum weight during batch processing. We should check that we can use the whole message weight before processing anything.
Fixed in #1951
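The shape of such an up-front check might look like this. A hedged sketch only, with weights modeled as plain `u64` values; the actual fix is the PR referenced above:

```rust
// Verify before handling any submessage that the whole batch fits in
// the available weight, so we never start a batch we cannot finish.
fn can_process_batch(per_message_weight: u64, message_count: u64, max_weight: u64) -> bool {
    per_message_weight.saturating_mul(message_count) <= max_weight
}
```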
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1949      +/-   ##
==========================================
- Coverage   47.72%   47.69%   -0.04%
==========================================
  Files         184      183       -1
  Lines       12969    12953      -16
==========================================
- Hits         6190     6178      -12
+ Misses       6779     6775       -4
```

☔ View full report in Codecov by Sentry.
LGTM!
Description

This PR overrides the previous #1923, in a simplified version.

Responsibilities

- `pallet-liquidity-pools-gateway` knows how to batch/unbatch but doesn't know what a `Batch` message is. To handle this, the `LPMessage` trait allows pack/unpack methods.
- `pallet-liquidity-pools` will never send or receive a batch. The gateway is responsible for the batching process.

Details

Batching is handled by two extrinsics:

- `Gateway::start_batch_message(sender, destination)`: when calling this, any submitted messages from LP that match the `sender` & `destination` will be packed. NOTE: we also need to index by destination domain because the destination information is not inside the message, so we can never bundle two messages with two different destinations.
- `Gateway::end_batch_message(sender, destination)`: when calling this, the batch is finally submitted. If this is not called, no message is sent. If the batch reaches the size limit, it will dispatch an error.

Later, the user can call `pallet_utility::batch(calls)`, and any message sent between `start_pack` and `end_pack` will be bundled in the batch.

Changes

- Add `Gateway::start_batch_message()` and `Gateway::end_batch_message()` to allow creating batches.
- Rename `process_msg()` to `receive_message()`.
- Remove `try_range` for header message deserialization and use a `BufferReader`.

Docs

See the updated diagrams: https://centrifuge.hackmd.io/i_wfI19XTgSkfWeN7IxXiQ?view
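The (sender, destination) indexing described in the Details section can be modeled with a small self-contained sketch. All names and types here are illustrative stand-ins, not the pallet's actual storage or API:

```rust
use std::collections::HashMap;

type Sender = &'static str;
type Domain = &'static str;

// Pending batches are keyed by (sender, destination), so messages for
// different destinations can never be bundled into one batch.
#[derive(Default)]
struct Gateway {
    pending: HashMap<(Sender, Domain), Vec<&'static str>>,
}

impl Gateway {
    fn start_batch_message(&mut self, sender: Sender, destination: Domain) {
        self.pending.insert((sender, destination), Vec::new());
    }

    // While a batch is open for (sender, destination), submitted
    // messages are appended to it (the immediate-send path is elided).
    fn submit(&mut self, sender: Sender, destination: Domain, msg: &'static str) {
        if let Some(batch) = self.pending.get_mut(&(sender, destination)) {
            batch.push(msg);
        }
    }

    // Closing the batch returns the packed messages for the outbound queue.
    fn end_batch_message(&mut self, sender: Sender, destination: Domain) -> Vec<&'static str> {
        self.pending.remove(&(sender, destination)).unwrap_or_default()
    }
}
```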