
Conversation

@m-Peter
Collaborator

@m-Peter m-Peter commented Oct 31, 2025

Closes: #907

Description

In some cases, trading bots would submit 6–7 identical transactions, each with a gas limit of around 5.5 million. The batch would then fail with an insufficient computation error, even though one of the transactions could have executed successfully.


For contributor use:

  • Targeted PR against master branch
  • Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
  • Code follows the standards mentioned here.
  • Updated relevant documentation
  • Re-reviewed Files changed in the Github PR explorer
  • Added appropriate labels

Summary by CodeRabbit

  • Bug Fixes

    • Prevent duplicate transaction submissions by tracking transaction hashes per account; duplicate submissions are now rejected with a clear "transaction already in pool" error.
  • Tests

    • Added an end-to-end test that validates duplicate submissions are rejected while valid transactions execute and produce successful receipts.

@coderabbitai
Contributor

coderabbitai bot commented Oct 31, 2025

Walkthrough

Adds txHash tracking to pooled EVM transactions, prevents duplicate submissions by checking txHash presence in the pool and returning ErrDuplicateTransaction, and includes a test that submits identical payloads to assert duplicate rejections and successful receipt handling for accepted transactions.

Changes

Cohort / File(s) Summary
Batch tx pool deduplication
services/requester/batch_tx_pool.go
Adds slices import and errs alias for models/errors; extends pooledEvmTx with txHash; sets txHash (and nonce) when creating pooled entries; checks existing pool entries with slices.Contains and returns ErrDuplicateTransaction on duplicates.
Duplicate submission test
tests/tx_batching_test.go
Adds Test_MultipleTransactionSubmissionsWithDuplicates: submits an initial tx and then multiple identical signed payloads; expects duplicate errors for repeats, collects successful tx hashes, and polls receipts asserting Status == 1.
New error declaration
models/errors/errors.go
Adds public error ErrDuplicateTransaction wrapping ErrInvalid with message "transaction already in pool".

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant BatchPool
  participant Submitter
  participant Gateway

  Client->>BatchPool: Add(signedTx, txHash, nonce)
  BatchPool->>BatchPool: create pooledEvmTx {txHash, nonce, raw}
  BatchPool->>BatchPool: check pool for txHash (slices.Contains)
  alt duplicate found
    BatchPool-->>Client: ErrDuplicateTransaction
  else new tx
    BatchPool-->>Client: accepted
  end
  Note right of BatchPool: periodic flush / build batch
  BatchPool->>BatchPool: sort/compact unique by txHash
  BatchPool->>Submitter: send batch of unique hexTxs
  Submitter->>Gateway: submit batch
  Gateway-->>Submitter: response
  Submitter-->>BatchPool: update/remove submitted txs
Loading

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Pay special attention to services/requester/batch_tx_pool.go Add() paths to ensure txHash is always populated and used consistently.
  • Verify the slices.Contains check operates on the correct slice and compares the intended value (txHash).
  • Review the new test Test_MultipleTransactionSubmissionsWithDuplicates for determinism and correct error assertions.

Possibly related PRs

Suggested labels

EVM

Suggested reviewers

  • zhangchiqing
  • janezpodhostnik
  • peterargue

Poem

🐰 I hopped through pools of hex and hash,

Found repeated hops and gave them no pass,
Nonces marched forward, duplicates kept at bay,
Batches tidy, receipts glowing all day,
A carrot-coded cheer — hop, hop, hooray! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and specifically describes the main change: deduplicating transactions on the BatchTxPool before submission.
Linked Issues check ✅ Passed The PR successfully addresses issue #907 by implementing duplicate transaction prevention in the pool at insertion time and returning ErrDuplicateTransaction when duplicates are detected.
Out of Scope Changes check ✅ Passed All changes are directly scoped to the objective of deduplicating transactions: new error type, pool struct extension, duplicate checking logic, and corresponding test coverage.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7251e99 and 3ffbddd.

📒 Files selected for processing (1)
  • tests/tx_batching_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/tx_batching_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 638fe2e and c62c88b.

📒 Files selected for processing (2)
  • services/requester/batch_tx_pool.go (4 hunks)
  • tests/tx_batching_test.go (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-19T11:36:25.478Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 831
File: services/requester/batch_tx_pool.go:0-0
Timestamp: 2025-06-19T11:36:25.478Z
Learning: In Go, when copying maps that contain slices (like `map[gethCommon.Address][]pooledEvmTx`), perform deep copies by iterating over the map and copying each slice individually using `make()` and `copy()` to avoid shared references that could lead to race conditions and data corruption.

Applied to files:

  • services/requester/batch_tx_pool.go
🪛 GitHub Check: Lint
services/requester/batch_tx_pool.go

[failure] 214-214:
S1008: should use 'return a.txHash == b.txHash' instead of 'if a.txHash == b.txHash { return true }; return false' (staticcheck)


[failure] 213-213:
unusedresult: result of slices.CompactFunc call not used (govet)

🔇 Additional comments (4)
services/requester/batch_tx_pool.go (3)

6-6: LGTM: Import added for deduplication.

The slices package import is correctly added to support the CompactFunc used for filtering duplicate transactions.


28-28: LGTM: Transaction hash field added for deduplication.

The txHash field enables duplicate detection during batch submission, which addresses the issue of identical transactions being submitted multiple times.


148-148: LGTM: Transaction hash correctly captured.

The transaction hash is properly stored when adding transactions to the pool, enabling deduplication logic downstream.

tests/tx_batching_test.go (1)

517-562: LGTM: Test validates duplicate transaction handling.

The test correctly validates the deduplication feature:

  • Sends an initial transaction with nonce=0
  • Sends 5 duplicate transactions with nonce=1 (identical payloads)
  • Verifies all receipts eventually succeed

Since duplicate transactions produce identical hashes, the hashes slice contains repeated values, and the test verifies the same receipts multiple times. This correctly validates that duplicates are handled without causing execution failures.

@m-Peter m-Peter force-pushed the mpeter/deduplicate-batched-txs branch from c62c88b to d023295 on October 31, 2025 10:30

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between c62c88b and d023295.

📒 Files selected for processing (2)
  • services/requester/batch_tx_pool.go (4 hunks)
  • tests/tx_batching_test.go (1 hunks)

@m-Peter m-Peter force-pushed the mpeter/deduplicate-batched-txs branch from d023295 to c2b098a on October 31, 2025 11:04

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
services/requester/batch_tx_pool.go (1)

222-225: LGTM! Payload building is clean and efficient.

Preallocating hexEncodedTxs with the exact length of uniqueTxs and directly iterating over the deduplicated transactions is the correct approach.

Optional: Consider adding observability for duplicate detection.

For monitoring bot behavior and validating the fix, you could log or track metrics on the number of duplicates filtered:

 	seen[tx.txHash] = struct{}{}
 	uniqueTxs = append(uniqueTxs, tx)
 }
+
+if duplicates := len(pooledTxs) - len(uniqueTxs); duplicates > 0 {
+	t.logger.Debug().
+		Int("original", len(pooledTxs)).
+		Int("unique", len(uniqueTxs)).
+		Int("duplicates", duplicates).
+		Msg("filtered duplicate transactions in batch")
+}

This would help confirm the deduplication is working as intended and provide insights into how often duplicates occur.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between d023295 and c2b098a.

📒 Files selected for processing (2)
  • services/requester/batch_tx_pool.go (3 hunks)
  • tests/tx_batching_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/tx_batching_test.go
🔇 Additional comments (3)
services/requester/batch_tx_pool.go (3)

25-29: LGTM! Transaction hash tracking added correctly.

Adding the txHash field to pooledEvmTx enables deduplication by transaction hash, which is the right approach for identifying identical transactions.


147-149: LGTM! Transaction hash captured correctly when pooling.

The txHash is computed from tx.Hash() and stored alongside the payload and nonce. This is the correct location to capture the hash, as it's protected by the mutex and only applies to transactions being pooled for batch submission.


211-220: LGTM! Deduplication correctly handles non-adjacent duplicates.

The map-based deduplication approach correctly addresses the concern from previous reviews about slices.CompactFunc only removing adjacent duplicates. Using a seen set ensures that all duplicate transactions are filtered out regardless of their position in the sorted slice, preventing the "insufficient computation" errors caused by redundant submissions.

The pooledTxs[:0] pattern efficiently reuses the underlying array while safely building the deduplicated slice.
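The batching-time deduplication described in this review (later replaced by rejection at submission time) can be sketched as follows; struct and variable names are simplified stand-ins for those in batch_tx_pool.go:

```go
package main

import (
	"fmt"
	"sort"
)

type pooledEvmTx struct {
	txHash [32]byte
	nonce  uint64
}

// dedupe sorts by nonce, then filters all duplicate hashes with a seen set.
// Unlike slices.CompactFunc, this also removes non-adjacent duplicates.
// unique := txs[:0] reuses the input's backing array without reallocating,
// which is safe because the write index never passes the read index.
func dedupe(txs []pooledEvmTx) []pooledEvmTx {
	sort.Slice(txs, func(i, j int) bool { return txs[i].nonce < txs[j].nonce })
	seen := make(map[[32]byte]struct{}, len(txs))
	unique := txs[:0]
	for _, tx := range txs {
		if _, ok := seen[tx.txHash]; ok {
			continue // duplicate, regardless of its position
		}
		seen[tx.txHash] = struct{}{}
		unique = append(unique, tx)
	}
	return unique
}

func main() {
	txs := []pooledEvmTx{
		{txHash: [32]byte{1}, nonce: 1},
		{txHash: [32]byte{2}, nonce: 2},
		{txHash: [32]byte{1}, nonce: 3}, // non-adjacent duplicate of the first
	}
	fmt.Println(len(dedupe(txs))) // 2
}
```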


@janezpodhostnik janezpodhostnik left a comment


The code looks good, but should we instead prevent adding duplicates to the pool in the first place, rather than deduplicating them later?

@m-Peter m-Peter force-pushed the mpeter/deduplicate-batched-txs branch from c2b098a to d22b265 on November 6, 2025 10:46

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
services/requester/batch_tx_pool.go (1)

149-154: LGTM: Duplicate detection correctly prevents redundant submissions.

The approach of rejecting duplicates at submission time (via slices.Contains) is cleaner than deduplicating during batch processing—it provides immediate feedback and avoids the "adjacent duplicates only" pitfall of slices.CompactFunc. The txMux lock ensures thread-safe duplicate checking.

Note: slices.Contains performs an O(n) scan, but given that the pool is cleared every TxBatchInterval and the expected transaction count per EOA is small, this is acceptable.

If you observe high transaction volumes per EOA in production, consider switching to a map-based lookup for O(1) duplicate detection:

// In BatchTxPool struct, add:
pooledTxHashes map[gethCommon.Address]map[gethCommon.Hash]struct{}

// In Add(), replace slices.Contains with:
if t.pooledTxHashes[from] == nil {
    t.pooledTxHashes[from] = make(map[gethCommon.Hash]struct{})
}
if _, exists := t.pooledTxHashes[from][tx.Hash()]; exists {
    return errs.ErrDuplicateTransaction
}
t.pooledTxHashes[from][tx.Hash()] = struct{}{}
t.pooledTxs[from] = append(t.pooledTxs[from], userTx)

// In processPooledTransactions(), also clear:
t.pooledTxHashes = make(map[gethCommon.Address]map[gethCommon.Hash]struct{})
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between c2b098a and d22b265.

📒 Files selected for processing (3)
  • models/errors/errors.go (1 hunks)
  • services/requester/batch_tx_pool.go (3 hunks)
  • tests/tx_batching_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/tx_batching_test.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-03-07T01:35:09.751Z
Learnt from: peterargue
Repo: onflow/flow-evm-gateway PR: 772
File: services/requester/keystore/key_store.go:50-62
Timestamp: 2025-03-07T01:35:09.751Z
Learning: In the flow-evm-gateway codebase, panics are acceptable in scenarios where immediate detection of critical bugs is desired during development and testing, particularly for invariant violations that should never occur in a correctly functioning system (e.g., when a key is available but locked in the keystore implementation).

Applied to files:

  • services/requester/batch_tx_pool.go
🧬 Code graph analysis (1)
services/requester/batch_tx_pool.go (1)
models/errors/errors.go (1)
  • ErrDuplicateTransaction (33-33)
🔇 Additional comments (4)
services/requester/batch_tx_pool.go (3)

6-6: LGTM: Import additions are appropriate.

The slices standard library import and errs alias are used correctly for duplicate detection and error handling respectively.

Also applies to: 21-21


27-31: LGTM: txHash field addition enables duplicate detection.

The new txHash field correctly supports duplicate transaction detection. Since gethCommon.Hash is a comparable array type [32]byte, struct equality via slices.Contains will work correctly for identifying duplicate transactions.


206-221: LGTM: Batch submission logic correctly processes deduplicated transactions.

Sorting by nonce ensures the correct execution order, and since duplicates are already filtered at submission time in Add(), no additional deduplication is needed here. Clean separation of concerns.

models/errors/errors.go (1)

31-33: LGTM: Error definition is clear and correctly categorized.

ErrDuplicateTransaction appropriately wraps ErrInvalid and provides a clear message for rejected duplicate submissions. The error is used correctly in the batch transaction pool to provide meaningful feedback when duplicate transactions are detected.

@m-Peter
Copy link
Collaborator Author

m-Peter commented Nov 6, 2025

The code looks good, but should we instead prevent adding duplicates to the pool in the first place, rather than deduplicating them later?

@janezpodhostnik That's a good idea. This way we can even be explicit to the user, that they have submitted a tx which is already in the pool. Updated in d22b265 .

@m-Peter m-Peter force-pushed the mpeter/deduplicate-batched-txs branch from d22b265 to 86a8dac on November 12, 2025 10:37

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/tx_batching_test.go (2)

532-534: Consider clarifying the nonce strategy.

The nonce is incremented here to set up the duplicate scenario, where all subsequent transactions in the loop use nonce 1. This is correct but could benefit from a brief comment explaining that the nonce is deliberately not incremented in the loop to create identical transactions.

Example:

 	signed, _, err := evmSign(big.NewInt(10), 21000, eoaKey, nonce, &testAddr, nil)
 	require.NoError(t, err)
+	// Increment nonce for the duplicate test transactions that follow
 	nonce += 1

540-558: Consider documenting the iteration count.

The loop runs 5 times to test multiple duplicate submissions, which is reasonable. However, the number "5" is a magic constant without explanation. Consider adding a brief comment or using a named constant to clarify the intent.

Example:

+	// Submit 5 identical transactions to test duplicate detection:
+	// the first should succeed, the rest should be rejected as duplicates
 	for i := range 5 {
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between d22b265 and 86a8dac.

📒 Files selected for processing (3)
  • models/errors/errors.go (1 hunks)
  • services/requester/batch_tx_pool.go (3 hunks)
  • tests/tx_batching_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • services/requester/batch_tx_pool.go
  • models/errors/errors.go
🔇 Additional comments (1)
tests/tx_batching_test.go (1)

517-570: Well-structured test for duplicate transaction prevention.

The test correctly validates the duplicate detection mechanism by:

  1. Submitting an initial transaction (nonce 0)
  2. Submitting 5 identical transactions (all nonce 1) where only the first succeeds
  3. Verifying the duplicate error message matches expectations
  4. Confirming successful transactions execute correctly

The test follows established patterns in this file and properly uses assertion helpers.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/tx_batching_test.go (1)

539-561: Sign once and replay the identical raw payload

Right now we call evmSign inside the loop, so every duplicate submission is freshly signed. That happens to work because go-ethereum currently produces deterministic signatures, but if the signer ever adds entropy (or we swap libraries), these calls would yield different v/r/s values, the hashes would diverge, and the test would stop exercising actual duplicate detection. To make the scenario robust—and to mirror bots resending the exact same raw payload—sign once, keep the bytes, and reuse them for the remaining submissions.

-	// Submit 5 identical transactions to test duplicate detection:
-	// the first should succeed, the rest should be rejected as duplicates
-	for i := range 5 {
-		// All these transactions are duplicates, since we don't change any
-		// of the payload data. These will end up having the same tx hash
-		// as well.
-		signed, _, err := evmSign(big.NewInt(10), 15_000_000, eoaKey, nonce, &testAddr, nil)
-		require.NoError(t, err)
-
-		// only the first transaction is valid, the rest 4 are duplicates
-		// of the 1st one.
-		if i == 0 {
-			txHash, err := rpcTester.sendRawTx(signed)
-			require.NoError(t, err)
-			hashes = append(hashes, txHash)
-		} else {
-			_, err := rpcTester.sendRawTx(signed)
-			require.Error(t, err)
-			require.ErrorContains(t, err, "invalid: transaction already in pool")
-		}
-	}
+	dupSigned, _, err := evmSign(big.NewInt(10), 15_000_000, eoaKey, nonce, &testAddr, nil)
+	require.NoError(t, err)
+
+	// Submit 5 identical transactions to test duplicate detection:
+	// the first should succeed, the rest should be rejected as duplicates.
+	for i := range 5 {
+		if i == 0 {
+			txHash, err := rpcTester.sendRawTx(dupSigned)
+			require.NoError(t, err)
+			hashes = append(hashes, txHash)
+		} else {
+			_, err := rpcTester.sendRawTx(dupSigned)
+			require.Error(t, err)
+			require.ErrorContains(t, err, "invalid: transaction already in pool")
+		}
+	}
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 86a8dac and 7251e99.

📒 Files selected for processing (1)
  • tests/tx_batching_test.go (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-12T13:21:59.066Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 890
File: tests/integration_test.go:692-707
Timestamp: 2025-11-12T13:21:59.066Z
Learning: In the flow-evm-gateway integration tests (tests/integration_test.go), the gateway's RPC server continues to operate independently even after the context passed to bootstrap.Run() is cancelled. Context cancellation affects upstream dependencies (like the Emulator acting as an Access Node) but does not immediately shut down the RPC server, allowing tests to verify backup failover scenarios.

Applied to files:

  • tests/tx_batching_test.go


@janezpodhostnik janezpodhostnik left a comment


Nice

userTx := pooledEvmTx{txPayload: hexEncodedTx, nonce: tx.Nonce()}
userTx := pooledEvmTx{txPayload: hexEncodedTx, txHash: tx.Hash(), nonce: tx.Nonce()}
// Prevent submission of duplicate transactions, based on their tx hash
if slices.Contains(t.pooledTxs[from], userTx) {
Member


Instead of putting the txHash to the pooledEvmTx, what about maintaining a separate index for the txHash?

So that this existence check can be done in O(1), instead of O(n), n being the number of txs in the pool for a single address. Otherwise, if some bot sends lots of txs, this call would become slow.

For instance:

type BatchTxPool struct {
	*SingleTxPool
	pooledTxs      map[gethCommon.Address][]pooledEvmTx
	pooledTxHashes map[gethCommon.Hash]struct{}
	txMux          sync.Mutex
	eoaActivity    *expirable.LRU[gethCommon.Address, time.Time]
}

Collaborator Author


The pooledTxs get submitted to the network roughly every 2 seconds, so we are talking about a very small number of transactions per address. The largest I have observed is 12 transactions for a single address, which is quite small. Do you think it's worth the complexity of maintaining a separate index for de-duplication? We would also have to remove entries from that index when transactions do get submitted, which requires an additional sync mechanism.


Development

Successfully merging this pull request may close these issues.

De-duplicate transactions on batch tx pool
