New txpool worker to remove lock contention #2725
Conversation
Thanks for the improvements to the source; it's clearer like that.
Do not decrease the reputation of the P2P node if insertion into the TxPool failed
LGTM 👍
Nice one :)
```rust
// Define arguments
```
```diff
@@ -77,11 +77,11 @@ pub struct TxPoolArgs {
     pub tx_size_of_p2p_sync_queue: usize,

     /// Maximum number of pending write requests in the service.
-    #[clap(long = "tx-max-pending-write-requests", default_value = "500", env)]
+    #[clap(long = "tx-max-pending-write-requests", default_value = "10000", env)]
```
Do we have to communicate this to external operators?
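For context, a minimal sketch of how such a clap argument resolves (the struct and field name below are assumptions, not fuel-core's actual code): an explicit CLI flag or env var always takes precedence over the default, so operators who pinned their own value are unaffected by the changed default.

```rust
// Minimal sketch, assuming clap with the `derive` and `env` features.
// With a bare `env`, clap derives the env var name from the field name
// (MAX_PENDING_WRITE_POOL_REQUESTS for the hypothetical field below).
use clap::Parser;

#[derive(Parser, Debug)]
struct TxPoolArgs {
    /// Maximum number of pending write requests in the service.
    /// Precedence: CLI flag, then env var, then the default of 10000.
    #[clap(long = "tx-max-pending-write-requests", default_value = "10000", env)]
    max_pending_write_pool_requests: usize,
}

fn main() {
    let args = TxPoolArgs::parse();
    println!("pending write cap: {}", args.max_pending_write_pool_requests);
}
```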
```rust
.height_expiration_txs
.range(..=new_height)
.map(|(k, _)| *k)
.collect::<Vec<_>>();
```
Do we need this `collect`? The loop below should work without it too?
It will not, because the iterator holds read access to `self.pruner.height_expiration_txs`, while the loop body needs mutable access to remove entries; collecting the keys into a `Vec` first ends that borrow.
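To illustrate, a minimal self-contained sketch of the borrow-checker conflict (the map's key and value types are assumptions based on the snippet above):

```rust
use std::collections::BTreeMap;

struct Pruner {
    // Assumed shape: expiration height -> ids of txs expiring at that height.
    height_expiration_txs: BTreeMap<u64, Vec<u64>>,
}

impl Pruner {
    fn remove_expired(&mut self, new_height: u64) {
        // Collecting the keys ends the immutable borrow of the map here...
        let heights: Vec<u64> = self
            .height_expiration_txs
            .range(..=new_height)
            .map(|(k, _)| *k)
            .collect();

        // ...so the loop is free to take a mutable borrow and remove entries.
        // Removing while the `range` iterator was still alive would be
        // rejected by the borrow checker.
        for height in heights {
            self.height_expiration_txs.remove(&height);
        }
    }
}
```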
Description
Problem
When running benchmarks on the transaction pool, we observed that having every thread contend for the same pool behind a lock was costly.
Solution in this PR
We changed the pool to live in a single worker thread; all communication is managed through channels, where priority can be explicitly defined between tasks.
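As an illustration of the pattern (a minimal sketch assuming tokio; the channel names, message types, and priorities are placeholders, not fuel-core's actual API):

```rust
use tokio::sync::{mpsc, oneshot};

// Placeholder message types for the two kinds of requests.
enum WriteRequest {
    Insert(u64 /* stands in for a transaction */),
}
enum ReadRequest {
    Contains(u64, oneshot::Sender<bool>),
}

// The pool is owned exclusively by the worker task: no lock needed.
struct Pool {
    txs: Vec<u64>,
}

async fn pool_worker(
    mut writes: mpsc::Receiver<WriteRequest>,
    mut reads: mpsc::Receiver<ReadRequest>,
) {
    let mut pool = Pool { txs: Vec::new() };
    loop {
        tokio::select! {
            // `biased` polls the arms top to bottom, which is one way to
            // give writes explicit priority over reads.
            biased;
            Some(req) = writes.recv() => match req {
                WriteRequest::Insert(tx) => pool.txs.push(tx),
            },
            Some(req) = reads.recv() => match req {
                ReadRequest::Contains(tx, reply) => {
                    let _ = reply.send(pool.txs.contains(&tx));
                }
            },
            else => break, // all senders dropped: shut the worker down
        }
    }
}
```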
Side modifications
We now send transactions over P2P only after their full insertion into the pool. Previously this was done after one round of verification, but before the final checks and the pool insertion.
We also no longer run any pool verifications before computing the predicates and the signatures. The pool and input verification step is now done after the predicate and signature verifications, inside the pool worker thread.
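Sketched as Rust pseudocode (every name here is illustrative of the ordering described above, not fuel-core's actual functions):

```rust
struct Transaction;
struct Error;

// Stateless checks, run before the transaction reaches the pool worker.
fn verify_signatures(_tx: &Transaction) -> Result<(), Error> { Ok(()) }
fn verify_predicates(_tx: &Transaction) -> Result<(), Error> { Ok(()) }

// Runs on the pool worker thread: pool and input verifications happen here,
// *after* the expensive stateless checks, then the transaction is inserted.
fn insert_into_pool(tx: Transaction) -> Result<Transaction, Error> {
    Ok(tx)
}

// Broadcast only happens once insertion has fully succeeded.
fn gossip_over_p2p(_tx: &Transaction) {}

fn submit(tx: Transaction) -> Result<(), Error> {
    verify_signatures(&tx)?;
    verify_predicates(&tx)?;
    let inserted = insert_into_pool(tx)?;
    gossip_over_p2p(&inserted);
    Ok(())
}
```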
Checklist
Before requesting review