
New txpool worker to remove lock contention #2725

Merged — 47 commits merged into master from new_tx_pool_architecture on Feb 22, 2025

Conversation

@AurelienFT (Contributor) commented Feb 18, 2025

Description

Problem

When running benchmarks on the transaction pool, we observed that having multiple threads contend for the same pool lock when submitting transactions was costly.

Solution in this PR

We changed the pool to live in a single thread; all communication is managed through channels, where priority can be explicitly defined between tasks.
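The single-owner worker pattern described above can be sketched as follows. This is a minimal sketch with invented names (`PoolRequest`, `spawn_pool_worker`, a `Vec` standing in for the pool), not fuel-core's actual types: the pool is owned by exactly one thread, and every other task submits requests over a channel, so no `Mutex` is needed.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative only: not fuel-core's actual transaction or pool types.
struct Tx(u64);

enum PoolRequest {
    Insert(Tx, mpsc::Sender<bool>),
    Shutdown,
}

// The pool is owned by a single worker thread; other tasks never touch
// it directly, they only send requests over the channel.
fn spawn_pool_worker() -> (mpsc::Sender<PoolRequest>, thread::JoinHandle<usize>) {
    let (sender, receiver) = mpsc::channel::<PoolRequest>();
    let handle = thread::spawn(move || {
        let mut pool: Vec<Tx> = Vec::new();
        while let Ok(request) = receiver.recv() {
            match request {
                PoolRequest::Insert(tx, reply) => {
                    pool.push(tx);
                    let _ = reply.send(true); // acknowledge the insertion
                }
                PoolRequest::Shutdown => break,
            }
        }
        pool.len()
    });
    (sender, handle)
}

fn main() {
    let (sender, handle) = spawn_pool_worker();
    // Many producers can submit concurrently without lock contention.
    for i in 0..4 {
        let (reply_tx, reply_rx) = mpsc::channel();
        sender.send(PoolRequest::Insert(Tx(i), reply_tx)).unwrap();
        assert!(reply_rx.recv().unwrap());
    }
    sender.send(PoolRequest::Shutdown).unwrap();
    println!("inserted {}", handle.join().unwrap());
}
```

A real implementation would also dedicate separate channels (or a priority scheme) to different request kinds, which is what allows the PR to "explicitly define priority between tasks".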

Side modifications

Transactions are now broadcast over P2P only after their full insertion into the pool. Previously this happened after one round of verification, but before the final checks and the pool integration.

Pool verifications are no longer performed before computing the predicates and the signatures. The pool and input verification step now runs after predicate and signature verification, inside the pool worker thread.
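The reordering can be illustrated with a toy pipeline. All names here (`verify_stateless`, `pool_insert`, the boolean flags) are hypothetical, not fuel-core's API: the expensive stateless checks (signatures, predicates) run first and can be parallelized across submitters, while the pool/input checks run last, where the real design executes them inside the single pool worker thread.

```rust
// Illustrative only: flags stand in for real cryptographic and
// pool-state checks.
#[derive(Clone)]
struct Tx {
    signature_ok: bool,
    predicate_ok: bool,
    inputs_ok: bool,
}

#[derive(Debug, PartialEq)]
enum Rejected {
    Signature,
    Predicate,
    Inputs,
}

// Stateless checks: no pool state needed, so they can run in parallel
// on the submitter side, before anything reaches the worker.
fn verify_stateless(tx: &Tx) -> Result<(), Rejected> {
    if !tx.signature_ok {
        return Err(Rejected::Signature);
    }
    if !tx.predicate_ok {
        return Err(Rejected::Predicate);
    }
    Ok(())
}

// Pool/input checks: need pool state, so in the real design they run
// inside the pool worker thread.
fn pool_insert(pool: &mut Vec<Tx>, tx: Tx) -> Result<(), Rejected> {
    if !tx.inputs_ok {
        return Err(Rejected::Inputs);
    }
    pool.push(tx);
    Ok(())
}

fn submit(pool: &mut Vec<Tx>, tx: Tx) -> Result<(), Rejected> {
    verify_stateless(&tx)?; // first: stateless, parallelizable
    pool_insert(pool, tx)   // last: stateful, single-threaded
}

fn main() {
    let mut pool = Vec::new();
    let good = Tx { signature_ok: true, predicate_ok: true, inputs_ok: true };
    let bad = Tx { signature_ok: true, predicate_ok: true, inputs_ok: false };
    assert!(submit(&mut pool, good).is_ok());
    assert_eq!(submit(&mut pool, bad), Err(Rejected::Inputs));
    println!("pool size: {}", pool.len());
}
```

The benefit of this ordering is that the single worker thread only spends time on transactions that have already passed the expensive stateless checks.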

Checklist

  • Breaking changes are clearly marked as such in the PR description and changelog
  • New behavior is reflected in tests
  • The specification matches the implemented behavior (link the update PR if changes are needed)

Before requesting review

  • I have reviewed the code myself
  • I have created follow-up issues caused by this PR and linked them here

@AurelienFT AurelienFT marked this pull request as ready for review February 18, 2025 15:20
@AurelienFT (Contributor, Author) commented:

Thanks for the improvements on the source; it's clearer like that.

Do not decrease the reputation of the P2P node if insertion into the TxPool fails
xgreenx
xgreenx previously approved these changes Feb 19, 2025
@rafal-ch (Contributor) left a comment


LGTM 👍
Nice one :)

@AurelienFT AurelienFT enabled auto-merge (squash) February 20, 2025 17:48
Comment on lines +1 to +2

// Define arguments

(Member) Suggested change: remove the `// Define arguments` comment.
@@ -77,11 +77,11 @@ pub struct TxPoolArgs {
    pub tx_size_of_p2p_sync_queue: usize,

    /// Maximum number of pending write requests in the service.
-   #[clap(long = "tx-max-pending-write-requests", default_value = "500", env)]
+   #[clap(long = "tx-max-pending-write-requests", default_value = "10000", env)]
(Member) commented:

Do we have to communicate this to external operators?

.height_expiration_txs
.range(..=new_height)
.map(|(k, _)| *k)
.collect::<Vec<_>>();
(Member) commented:

Do we need this collect? The loop below should work without it, too?

(Collaborator) replied:

It will not, because the iterator holds a read borrow of self.pruner.height_expiration_txs.
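That borrow constraint can be reproduced in a standalone sketch (the function `prune_up_to` is hypothetical, standing in for the pruner logic): `BTreeMap::range` holds a shared borrow of the map, so calling `remove` while the iterator is still alive would require a conflicting mutable borrow and fail to compile. Collecting the keys first ends the shared borrow before any mutation happens.

```rust
use std::collections::BTreeMap;

// Remove all entries with keys <= new_height, returning how many
// transactions were pruned. The intermediate Vec is what makes the
// borrow checker happy: once collect() runs, the range iterator (and
// its shared borrow of the map) is gone, so remove() may mutate.
fn prune_up_to(map: &mut BTreeMap<u32, Vec<u64>>, new_height: u32) -> usize {
    let expired: Vec<u32> = map
        .range(..=new_height)
        .map(|(k, _)| *k)
        .collect();
    let mut removed = 0;
    for height in expired {
        if let Some(txs) = map.remove(&height) {
            removed += txs.len();
        }
    }
    removed
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(5, vec![1, 2]);
    map.insert(10, vec![3]);
    map.insert(20, vec![4]);
    assert_eq!(prune_up_to(&mut map, 10), 3);
    assert_eq!(map.len(), 1);
    println!("remaining heights: {:?}", map.keys().collect::<Vec<_>>());
}
```

An alternative that avoids the intermediate `Vec` is `BTreeMap::split_off`, which splits the map at a key boundary in one call, though it changes the ownership shape of the code rather than just its iteration order.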

@AurelienFT AurelienFT merged commit 5e8692f into master Feb 22, 2025
33 of 34 checks passed
@AurelienFT AurelienFT deleted the new_tx_pool_architecture branch February 22, 2025 03:28
4 participants