Track the latest finalized block header for tx submission #896
Conversation
Walkthrough
Transaction pool constructors now return errors; bootstrap aborts startup if pool creation fails. SingleTxPool caches a Flow block header in an atomic value and refreshes it periodically. BatchTxPool and callers are updated to accept and use a reference block header. Tests use flow.DefaultTransactionExpiry.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Admin as Bootstrap
participant STP as SingleTxPool
participant AN as AccessNode
rect rgb(240,248,255)
note over Admin: Startup -> create pools
Admin->>STP: NewSingleTxPool(ctx, ...)
STP->>AN: GetLatestBlockHeader()
AN-->>STP: BlockHeader
STP-->>Admin: (*SingleTxPool, nil) or error
alt error
Admin-->>Admin: abort startup (wrapped error)
end
end
rect rgb(245,255,250)
note over STP: Background header updater (periodic)
loop every referenceBlockUpdateFrequency
STP->>AN: GetLatestBlockHeader()
AN-->>STP: BlockHeader or error
end
end
rect rgb(255,250,240)
participant Client
note over Client, STP: Transaction submission uses cached header
Client->>STP: Add(tx)
STP->>STP: use cached BlockHeader (getReferenceBlock)
STP->>AN: GetAccount(address)
AN-->>STP: Account
STP-->>Client: enqueue/submit tx
end
sequenceDiagram
autonumber
actor Admin as Bootstrap
participant BTP as BatchTxPool
participant STP as SingleTxPool
Admin->>BTP: NewBatchTxPool(ctx, ...)
BTP->>STP: NewSingleTxPool(ctx, ...)
alt STP error
STP-->>BTP: error
BTP-->>Admin: error
Admin-->>Admin: abort startup
else ok
STP-->>BTP: pool
BTP-->>Admin: (*BatchTxPool, nil)
end
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Force-pushed: da04f2e → dce819d
services/requester/single_tx_pool.go (outdated diff):

    "github.com/onflow/flow-evm-gateway/services/requester/keystore"
)

var blockHeaderUpdateFrequency = time.Second * 5
We can increase the update frequency (like every 2 or 3 seconds), but since we get the latest finalized (not sealed) block header, I think this is a good value and we're safe. Any objections?
I think we could make it longer. How about 15 seconds? It just needs to be refreshed every 600 seconds, so it doesn't need to be too frequent.
Why do we need to track the finalized block with polling?
The services/ingestion/engine subscribes to block events for finalized blocks; can we use that to determine that a block is finalized?
I think we could make it longer. How about 15 seconds? It just needs to be refreshed every 600 seconds, so it doesn't need to be too frequent.

Good point 👍 Updated in 7651bef.
Why do we need to track the finalized block with polling?
The services/ingestion/engine subscribes to block events for finalized blocks; can we use that to determine that a block is finalized?
I thought about that too, but found some drawbacks.
On the main branch we use the Event Streaming API for EVM events; only on the soft-finality branch do we subscribe to finalized blocks. Once optimistic data sync is implemented, we'll no longer subscribe to finalized blocks, but instead to EVM events on finalized data.
Suppose the EVM GW has been down for some time, e.g. 30 minutes, for whatever reason (spork / HCU / network issue or something else). When the node is back up and we start indexing EVM events, if we use the block from the EVM events as the reference block, those transactions will be considered expired by the Flow network. We'd have to wait for the node to catch up before EVM transactions can go through, which is not ideal.
If we track the finalized block with polling and use it as the reference block ID, we don't have this issue. Transaction submission is no longer dependent on the indexing state of the node.
services/requester/single_tx_pool.go (outdated diff):

        )
        continue
    }
    t.latestBlockHeader = blockHeader
Do we need any synchronization mechanism for updating t.latestBlockHeader?
The goroutine which runs trackLatestBlockHeader is the only place that writes to the t.latestBlockHeader field.
There are various places that read it, and only for setting the ReferenceBlockID when preparing a Cadence tx for submission.
Update: I have stored the latestBlockHeader behind an atomic.Value.
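As a rough sketch of that pattern (stand-in names; the SDK's *flow.BlockHeader is the stored type), the single-writer/many-reader usage looks like this:

```go
package main

import (
	"sync/atomic"

	"github.com/onflow/flow-go-sdk"
)

// headerCache is an illustrative stand-in for the pool's field: the
// tracking goroutine is the only writer, tx submissions are the readers.
type headerCache struct {
	latestBlockHeader atomic.Value // always holds a *flow.BlockHeader
}

// store is called only by the background tracking goroutine.
func (c *headerCache) store(h *flow.BlockHeader) {
	c.latestBlockHeader.Store(h)
}

// load may be called concurrently from any goroutine.
func (c *headerCache) load() *flow.BlockHeader {
	v := c.latestBlockHeader.Load()
	if v == nil {
		return nil // nothing stored yet
	}
	return v.(*flow.BlockHeader)
}
```

Note that atomic.Value requires every stored value to have the same concrete type, which holds here since only *flow.BlockHeader is ever stored.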
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
services/requester/batch_tx_pool.go (2)
185-191: Data race: reading latestBlockHeader without synchronization

processPooledTransactions passes t.latestBlockHeader while SingleTxPool updates it concurrently. Use an accessor backed by atomic/mutex (see SingleTxPool fix).

Apply this diff after adding getLatestBlockHeader() to SingleTxPool:

-    t.latestBlockHeader,
+    t.getLatestBlockHeader(),

261-268: Data race: reading latestBlockHeader without synchronization (single submission)

Replace direct field access with synchronized getter.

-    t.latestBlockHeader,
+    t.getLatestBlockHeader(),

services/requester/single_tx_pool.go (1)

114-121: Use synchronized header read when building tx

Avoid direct field access.

-    t.latestBlockHeader,
+    t.getLatestBlockHeader(),
🧹 Nitpick comments (2)
services/requester/single_tx_pool.go (2)
208-211: Build metadata uses passed header; ensure caller passes synchronized value

No change needed here once callers use getLatestBlockHeader(). Just a heads-up to avoid reading the field here directly.

23-24: Make update frequency configurable

Expose blockHeaderUpdateFrequency via config (with a sane default) to tune under different AN latencies.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- bootstrap/bootstrap.go (3 hunks)
- services/requester/batch_tx_pool.go (8 hunks)
- services/requester/single_tx_pool.go (5 hunks)
- tests/helpers.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
services/requester/batch_tx_pool.go (1)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (42-73)

services/requester/single_tx_pool.go (4)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
- config/config.go (1): Config (43-124)
- services/requester/keystore/key_store.go (1): KeyStore (39-52)
- services/requester/tx_pool.go (1): TxPool (16-18)

bootstrap/bootstrap.go (2)
- services/requester/batch_tx_pool.go (1): NewBatchTxPool (52-92)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (42-73)
🔇 Additional comments (2)
tests/helpers.go (1)
93-95: Good: use default transaction expiry

Switching to flow.DefaultTransactionExpiry removes a magic number and aligns tests with protocol defaults.
bootstrap/bootstrap.go (1)
249-275: Error propagation on tx pool creation is correct

Capturing and returning constructor errors here prevents silent startup with a broken pool. LGTM.
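For illustration only, the fail-fast shape being approved looks roughly like this; newTxPool is a stub standing in for requester.NewSingleTxPool, whose real signature also takes the client, config, logger, and keystore:

```go
package main

import (
	"context"
	"fmt"
)

type txPool struct{} // stand-in for the gateway's SingleTxPool

// newTxPool stands in for requester.NewSingleTxPool; the real constructor
// fetches the initial finalized header and fails if it is unavailable.
func newTxPool(ctx context.Context) (*txPool, error) {
	return &txPool{}, nil
}

// setupPool wraps and returns the constructor error so bootstrap aborts
// instead of running with a broken transaction pool.
func setupPool(ctx context.Context) (*txPool, error) {
	pool, err := newTxPool(ctx)
	if err != nil {
		return nil, fmt.Errorf("failed to create transaction pool: %w", err)
	}
	return pool, nil
}
```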
Actionable comments posted: 0
🧹 Nitpick comments (6)
services/requester/batch_tx_pool.go (2)
187-190: Guard against nil latestBlockHeader before batch submission

If the header is ever unavailable (e.g., transient during startup/tests), buildTransaction will dereference nil. Skip this batch tick when the header is nil to avoid noisy errors.

- err := t.batchSubmitTransactionsForSameAddress(
+ hdr := t.getLatestBlockHeader()
+ if hdr == nil {
+     t.logger.Warn().Msg("latest Flow block header unavailable; skipping batch tick")
+     continue
+ }
+ err := t.batchSubmitTransactionsForSameAddress(
      ctx,
-     t.getLatestBlockHeader(),
+     hdr,
      account,
      pooledTxs,
  )
251-266: Early-check latest header in single submission path (optional)

Same nil risk exists here; you can short-circuit early to avoid counting these as "dropped" due to build failure.

  func (t *BatchTxPool) submitSingleTransaction(
  @@
- flowTx, err := t.buildTransaction(
-     t.getLatestBlockHeader(),
+ hdr := t.getLatestBlockHeader()
+ if hdr == nil {
+     return fmt.Errorf("latest Flow block header unavailable")
+ }
+ flowTx, err := t.buildTransaction(
+     hdr,
      account,
      script,
      hexEncodedTx,
      coinbaseAddress,
  )

Also applies to: 263-268
services/requester/single_tx_pool.go (4)
176-188: Add a defensive nil check for latestBlockHeader

Although you store an initial header, a simple guard prevents panics in edge cases (tests, partial construction, future refactors).

  func (t *SingleTxPool) buildTransaction(
      latestBlockHeader *flow.BlockHeader,
  @@
  ) (*flow.Transaction, error) {
+     if latestBlockHeader == nil {
+         return nil, fmt.Errorf("latest Flow block header unavailable")
+     }
  @@
      flowTx := flow.NewTransaction().
          SetScript(script).
          SetReferenceBlockID(latestBlockHeader.ID).

Also applies to: 112-120, 240-245
24-25: Make ticker frequency configurable (derive from expiry window)

Hard-coding 5s works, but exposing it via config (or deriving it from flow.DefaultTransactionExpiry) lets ops tune staleness vs. load.

219-238: Consider jitter/backoff on header update errors

On repeated AN errors, the fixed 5s tick can spam logs. Add small jitter/backoff after failures to smooth retries.

37-40: Optional: use typed atomic.Pointer for stronger typing

atomic.Value is fine; typed atomics (e.g., go.uber.org/atomic.Pointer[*flow.BlockHeader]) give compile-time type safety and a helpful String() for safe logging; a sketch follows below.
Based on learnings
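For illustration, the standard library's sync/atomic.Pointer (Go 1.19+) gives the same type safety as the suggested go.uber.org type; the field and method names below are hypothetical:

```go
package main

import (
	"sync/atomic"

	"github.com/onflow/flow-go-sdk"
)

// Illustrative subset of the pool struct; field names are hypothetical.
type SingleTxPool struct {
	// Store accepts only *flow.BlockHeader and Load needs no type
	// assertion, unlike the untyped atomic.Value.
	referenceBlock atomic.Pointer[flow.BlockHeader]
}

func (t *SingleTxPool) getReferenceBlock() *flow.BlockHeader {
	return t.referenceBlock.Load() // nil until the first Store
}
```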
📜 Review details
📒 Files selected for processing (2)
- services/requester/batch_tx_pool.go (8 hunks)
- services/requester/single_tx_pool.go (8 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
services/requester/batch_tx_pool.go (1)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (45-76)

services/requester/single_tx_pool.go (3)
- services/requester/tx_pool.go (1): TxPool (16-18)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
- config/config.go (1): Config (43-124)
🔇 Additional comments (2)
services/requester/single_tx_pool.go (1)
62-76: Good: initialize header atomically before starting tracker

Storing the initial header via atomic before launching the goroutine removes races. Solid.
services/requester/batch_tx_pool.go (1)
60-76: Approve error propagation in BatchTxPool constructor
Errors returned by NewBatchTxPool are handled at all usage sites (including in bootstrap/bootstrap.go), ensuring initialization failures aren’t ignored.
Have we thought about having a subscription to the latest finalized block instead of polling it? That way you basically just update once you get a new block through the subscription channel.
@janezpodhostnik I have thought about that indeed. However, subscriptions come with their own complexities too. See for example this error:

*ingestion.Engine","error":"failure in event subscription with: recoverable: disconnected: error receiving event: rpc error: code = Unavailable desc = error reading from server: EOF

This is an error that we receive from the EVM event subscriber, and it's not trivial to get subscriptions to run indefinitely. That's why I opted for a simple polling solution.
services/requester/single_tx_pool.go (outdated diff):

    }

    return latestBlock, account, nil

func (t *SingleTxPool) getLatestBlockHeader() *flow.BlockHeader {
I'd suggest framing this as getReferenceBlock(). That way future devs won't expect it to return the actual latest block, which this doesn't.
Good suggestion, updated in 3f378f3, along with all references which implied that we are dealing with the latest block, when it is refreshed only every 15 seconds and is only supposed to be used as the reference block ID.
services/requester/single_tx_pool.go (outdated diff):

    latestBlock *flow.Block
    account *flow.Account
)

func (t *SingleTxPool) trackLatestBlockHeader(ctx context.Context) {
Are there other systems that need the latest block? If so, it may make sense to use the block streaming endpoint to track the latest height locally.
If it's only to get the reference block, then I think it's fine to use polling with a medium-size interval between requests; 15-30 seconds seems fine.
Other than building/signing Flow transactions, there's no other component that needs the latest block. That's why I went with a simple polling approach.
Force-pushed: f447cc8 → 3f378f3
Actionable comments posted: 0
🧹 Nitpick comments (1)
services/requester/single_tx_pool.go (1)
24-24: Consider making this unexported or documenting immutability.

The variable referenceBlockUpdateFrequency is never modified and acts as a configuration constant. Since Go doesn't support const for time.Duration, consider either:

- Making it unexported: referenceBlockUpdateFrequency (lowercase)
- Adding a comment that it should not be modified

This prevents accidental external modification and clarifies intent.
📜 Review details
📒 Files selected for processing (4)
- bootstrap/bootstrap.go (3 hunks)
- services/requester/batch_tx_pool.go (8 hunks)
- services/requester/single_tx_pool.go (8 hunks)
- tests/helpers.go (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-06-17T10:29:35.941Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 831
File: services/requester/pool.go:202-214
Timestamp: 2025-06-17T10:29:35.941Z
Learning: In Flow EVM Gateway's transaction pool (services/requester/pool.go), signing keys acquired with keystore.Take() are intentionally held until the transaction is sealed, not released immediately after sending. The key release happens asynchronously when sealed transactions are ingested via event_subscriber.go NotifyTransaction() -> keystore unsafeUnlockKey() -> key.Done(). This pattern ensures transaction integrity by preventing key reuse before transaction finalization.
Applied to files:
bootstrap/bootstrap.go
📚 Learning: 2025-06-19T11:36:25.478Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 831
File: services/requester/batch_tx_pool.go:0-0
Timestamp: 2025-06-19T11:36:25.478Z
Learning: In Go, when copying maps that contain slices (like `map[gethCommon.Address][]pooledEvmTx`), perform deep copies by iterating over the map and copying each slice individually using `make()` and `copy()` to avoid shared references that could lead to race conditions and data corruption.
Applied to files:
services/requester/single_tx_pool.go
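A sketch of the deep-copy pattern this learning describes; pooledEvmTx is stubbed here so the snippet is self-contained:

```go
package main

import gethCommon "github.com/ethereum/go-ethereum/common"

// pooledEvmTx is a stub standing in for the gateway's pooled tx type.
type pooledEvmTx struct{ nonce uint64 }

// deepCopyPool copies the map and each slice so the result shares no
// backing arrays with src, avoiding races when both are mutated.
func deepCopyPool(src map[gethCommon.Address][]pooledEvmTx) map[gethCommon.Address][]pooledEvmTx {
	dst := make(map[gethCommon.Address][]pooledEvmTx, len(src))
	for addr, txs := range src {
		cp := make([]pooledEvmTx, len(txs))
		copy(cp, txs)
		dst[addr] = cp
	}
	return dst
}
```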
📚 Learning: 2024-10-18T19:26:37.579Z
Learnt from: peterargue
Repo: onflow/flow-evm-gateway PR: 617
File: api/stream.go:62-67
Timestamp: 2024-10-18T19:26:37.579Z
Learning: In the `flow-evm-gateway` project, within the Go file `api/stream.go`, the `prepareBlockResponse` method includes the Bloom filter as the field `LogsBloom` in the returned `Block` struct.
Applied to files:
services/requester/single_tx_pool.go
📚 Learning: 2025-01-24T20:15:10.908Z
Learnt from: peterargue
Repo: onflow/flow-evm-gateway PR: 682
File: services/ingestion/event_subscriber.go:85-90
Timestamp: 2025-01-24T20:15:10.908Z
Learning: When calculating blocks to catch up in Flow, initialize the counter to 0 and only set it when the latest chain height is greater than the current height (latestOnChainHeight > currentHeight). There's no need to handle the else case as 0 blocks to catch up is the correct default when we're at the same height or ahead.
Applied to files:
services/requester/single_tx_pool.go
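The learning above reduces to a few lines; a sketch:

```go
package main

// blocksToCatchUp defaults to 0, which is already correct when we are at
// the same height as the chain or ahead of it; no else branch needed.
func blocksToCatchUp(latestOnChainHeight, currentHeight uint64) uint64 {
	if latestOnChainHeight > currentHeight {
		return latestOnChainHeight - currentHeight
	}
	return 0
}
```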
🧬 Code graph analysis (3)
bootstrap/bootstrap.go (2)
- services/requester/batch_tx_pool.go (1): NewBatchTxPool (52-92)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (45-76)

services/requester/batch_tx_pool.go (1)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (45-76)

services/requester/single_tx_pool.go (2)
- services/requester/tx_pool.go (1): TxPool (16-18)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
🔇 Additional comments (11)
tests/helpers.go (1)
93-93: LGTM - Good use of SDK constant.

Replacing the hardcoded value with flow.DefaultTransactionExpiry improves maintainability and ensures consistency with the Flow SDK's recommended transaction expiry settings.

services/requester/batch_tx_pool.go (4)
60-91: LGTM - Proper error propagation added.

The constructor now correctly returns errors from NewSingleTxPool, enabling the bootstrap layer to halt startup if pool creation fails. This prevents the gateway from operating with an invalid transaction pool.

168-199: LGTM - Successfully eliminates per-batch block fetch.

The refactor removes the redundant latest block query on each batch interval, relying instead on the periodically-refreshed reference header from getReferenceBlock(). This reduces Access Node load as intended by the PR.

203-245: LGTM - Signature updated to use block header.

The method now accepts latestBlockHeader instead of the full block, which is more precise since only the header (specifically the ID) is needed for setting the reference block. This change is consistent with the overall refactoring.

247-281: LGTM - Consistent use of cached reference block.

Like processPooledTransactions, this method now uses the cached reference header, eliminating an unnecessary Access Node call per single-transaction submission. The refactor is consistent across both submission paths.

bootstrap/bootstrap.go (1)
256-280: LGTM - Bootstrap now fails fast on pool creation errors.

The error handling ensures that if the initial reference block header cannot be fetched during pool creation, the gateway startup is aborted rather than attempting to operate with an uninitialized pool. This is the correct behavior for this critical initialization step.
services/requester/single_tx_pool.go (5)
37-39: LGTM - Well-documented atomic field.

The comment clearly explains the thread-safety rationale, making it easy for future maintainers to understand why atomic.Value is necessary here.
45-76: LGTM - Robust initialization with proper error handling.

The constructor correctly:
- Fetches the initial reference block header before starting background updates
- Returns an error if the initial fetch fails, preventing pool creation with invalid state
- Atomically stores the header before starting the updater goroutine
- Passes context for graceful shutdown
This ensures the pool is never in an inconsistent state.
112-124: LGTM - Per-transaction AN calls reduced as intended.

This change achieves the core PR objective: the gateway no longer fetches the latest block for every transaction submission. Instead, it uses the cached reference header (refreshed every 15 seconds in the background), significantly reducing Access Node load under high throughput.
219-238: LGTM - Reliable background updater with proper cleanup.

The updater goroutine correctly:
- Uses a ticker with appropriate cleanup
- Respects context cancellation for graceful shutdown
- Logs errors but continues retrying (maintaining availability)
- Atomically stores the updated header
The 15-second update frequency provides a good balance between AN load and freshness, with ample margin within the ~600-second transaction expiry window.
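A sketch of the updater loop being described, assuming the SDK's gRPC access client (the gateway actually wraps it in CrossSporkClient) and an atomic header field; names are illustrative:

```go
package main

import (
	"context"
	"sync/atomic"
	"time"

	"github.com/onflow/flow-go-sdk"
	"github.com/onflow/flow-go-sdk/access/grpc"
	"github.com/rs/zerolog"
)

var referenceBlockUpdateFrequency = 15 * time.Second

// pool is an illustrative subset of the tx pool's fields.
type pool struct {
	client         *grpc.Client
	logger         zerolog.Logger
	referenceBlock atomic.Pointer[flow.BlockHeader]
}

func (t *pool) trackReferenceBlock(ctx context.Context) {
	ticker := time.NewTicker(referenceBlockUpdateFrequency)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return // graceful shutdown on context cancellation
		case <-ticker.C:
			// isSealed=false: latest *finalized* header, per the PR.
			header, err := t.client.GetLatestBlockHeader(ctx, false)
			if err != nil {
				t.logger.Error().Err(err).Msg("failed to update reference block header")
				continue // keep serving the previous header; retry next tick
			}
			t.referenceBlock.Store(header)
		}
	}
}
```

With a 15-second tick and a ~600-block expiry window, a transiently failing fetch still leaves ample margin before the cached header becomes too old to reference.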
240-245: LGTM - Correct atomic accessor pattern.

The getter properly loads and type-asserts the atomic value. While it can return nil, this is safe in practice because:

- The constructor fails if the initial header fetch fails
- The header is atomically stored before the pool is returned to callers
- No transactions can be submitted until NewSingleTxPool completes successfully

The current implementation is sound.
Actionable comments posted: 0
🧹 Nitpick comments (1)
services/requester/single_tx_pool.go (1)
119-119: Add defensive nil check for reference block.

While unlikely in practice (since the constructor ensures initialization), getReferenceBlock() can return nil. Passing this unchecked to buildTransaction would cause a nil pointer dereference at line 187 when accessing latestBlockHeader.ID. Apply this diff to add a defensive check:

+ referenceBlock := t.getReferenceBlock()
+ if referenceBlock == nil {
+     return fmt.Errorf("reference block not yet available")
+ }
+
  flowTx, err := t.buildTransaction(
-     t.getReferenceBlock(),
+     referenceBlock,
      account,
      script,
📜 Review details
📒 Files selected for processing (1)
services/requester/single_tx_pool.go(8 hunks)
🧰 Additional context used
🧠 Learnings
📚 Learning: 2025-10-06T10:14:49.706Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 890
File: services/ingestion/event_subscriber.go:235-239
Timestamp: 2025-10-06T10:14:49.706Z
Learning: In services/ingestion/event_subscriber.go, when reconnecting after disconnect errors (DeadlineExceeded, Internal, Unavailable), the subscription should reconnect at lastReceivedHeight rather than lastReceivedHeight+1. This avoids errors when the next height doesn't exist yet, and duplicate event processing is safe because the ingestion engine is explicitly designed to be idempotent (storage uses batch.Set() which overwrites existing entries).
Applied to files:
services/requester/single_tx_pool.go
🧬 Code graph analysis (1)
services/requester/single_tx_pool.go (4)
- services/requester/tx_pool.go (1): TxPool (16-18)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
- config/config.go (1): Config (44-127)
- services/requester/keystore/key_store.go (1): KeyStore (39-52)
🔇 Additional comments (6)
services/requester/single_tx_pool.go (6)
8-8: LGTM: Atomic import and update frequency.

The sync/atomic import supports the thread-safe header storage, and the 15-second update frequency is appropriate given the 600-second (10-minute) expiry window for Flow transactions.

Also applies to: 24-24

37-39: LGTM: Atomic field declaration.

The atomic.Value field with explanatory comment correctly addresses the race condition between the ticker updates and concurrent transaction submissions.
45-76: LGTM: Constructor initialization.

The constructor correctly:
- Fetches the initial reference block header with error handling
- Atomically stores it before launching the background updater
- Returns an error to abort startup if the initial fetch fails
This ensures the pool never operates without a valid reference header.
175-217: LGTM: Signature change to header.

The updated signature using *flow.BlockHeader instead of *flow.Block is appropriate since only the header fields (ID and Height) are needed, reducing memory overhead.

Note: This function assumes a non-nil header; nil safety depends on the caller (see comment on line 119).
219-238: LGTM: Background reference block updater.

The implementation correctly:
- Uses a ticker with the agreed-upon 15-second interval
- Respects context cancellation for clean shutdown
- Handles errors by logging and continuing (next attempt in 15 seconds)
- Stores the updated header atomically
The simple error handling without backoff is acceptable for this polling interval.
240-245: LGTM: Reference block accessor.

The accessor correctly performs an atomic load with a nil check and safe type assertion. The nil check is good defensive programming, even though it shouldn't occur after proper initialization.
Actionable comments posted: 0
🧹 Nitpick comments (1)
services/requester/single_tx_pool.go (1)
219-245: LGTM: Background updater implementation

The polling mechanism correctly refreshes the reference block header every 15 seconds, respects context cancellation, and uses atomic storage. The getReferenceBlock() accessor safely loads the header with a defensive nil check (though the value is guaranteed non-nil after successful initialization).

Optional: Consider exponential backoff on repeated errors

If GetLatestBlockHeader encounters persistent failures (e.g., during Access Node issues), the current implementation will log an error every 15 seconds. As noted in a previous review comment, introducing exponential backoff with jitter on repeated failures could reduce log spam and Access Node load during outages. This is purely optional and can be deferred; a sketch follows below.
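One possible shape for that backoff, as a hypothetical helper the tick loop could consult after each failure:

```go
package main

import (
	"math/rand"
	"time"
)

// nextWait returns the base interval while healthy, and an exponentially
// growing, jittered interval after consecutive failures.
func nextWait(base time.Duration, consecutiveFailures int) time.Duration {
	if consecutiveFailures == 0 {
		return base
	}
	if consecutiveFailures > 4 {
		consecutiveFailures = 4 // cap the wait at 16x the base interval
	}
	backoff := base * time.Duration(1<<consecutiveFailures)
	jitter := time.Duration(rand.Int63n(int64(backoff) / 4)) // up to 25% extra
	return backoff + jitter
}
```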
📜 Review details
📒 Files selected for processing (2)
- services/requester/batch_tx_pool.go (8 hunks)
- services/requester/single_tx_pool.go (8 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
services/requester/batch_tx_pool.go (1)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (45-76)

services/requester/single_tx_pool.go (5)
- services/requester/tx_pool.go (1): TxPool (16-18)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
- config/config.go (1): Config (44-127)
- metrics/collector.go (1): Collector (108-123)
- services/requester/keystore/key_store.go (1): KeyStore (39-52)
🔇 Additional comments (8)
services/requester/batch_tx_pool.go (4)
60-60: LGTM: Constructor error propagation

The updated constructor signature properly propagates initialization errors from NewSingleTxPool, enabling fail-fast behavior during bootstrap when the Access Node is unavailable or the initial block header fetch fails.

Also applies to: 64-75, 91-91

168-174: LGTM: Removed per-batch block fetch

The batch processing now retrieves only the COA account per batch interval and uses the continuously tracked reference block header via getReferenceBlock(). This change successfully reduces Access Node calls during transaction submission, directly addressing the PR objectives.

Also applies to: 187-187

205-205: LGTM: Signature update aligns with new reference block approach

The parameter type change from *flow.Block to *flow.BlockHeader correctly reflects that only the block header is needed for setting the reference block ID, and aligns with the updated buildTransaction signature in SingleTxPool.

Also applies to: 227-227

251-254: LGTM: Single transaction submission updated consistently

The single transaction path mirrors the batch processing changes: fetching only the COA account and using the tracked reference block. This ensures consistent behavior across both submission paths.

Also applies to: 263-263
services/requester/single_tx_pool.go (4)
24-24: LGTM: Reference block tracking setup

The 15-second update frequency provides a good balance: frequent enough to stay well within the 600-second transaction expiry window while avoiding excessive Access Node load. The atomic storage with clear documentation properly addresses the race condition between the background updater and concurrent transaction submissions.

Also applies to: 37-39

45-76: LGTM: Constructor initialization sequence

The initialization properly sequences the critical steps: fetch the initial finalized block header, store it atomically, then start the background updater. This ensures getReferenceBlock() always returns a valid header after successful construction, and the error return allows bootstrap to abort if the initial fetch fails.

112-124: LGTM: Eliminated per-submission block fetch

This change directly implements the PR objective by replacing the per-transaction latest-block query with a reference to the continuously tracked block header. This reduces Access Node calls from two per submission to one (COA account only), with the account fetch planned for removal in future work.

175-217: LGTM: Transaction building updated correctly

The buildTransaction method now accepts *flow.BlockHeader instead of *flow.Block, correctly using the header's ID for the reference block and its height for lock metadata. The simpler type requirement aligns well with the tracked reference block approach.
Force-pushed: 8081978 → 1143ff4
Closes: #895
Description
For every tx submission, we used to fetch the latest block, as well as the COA account.
This meant that we needed 2 AN calls for each tx submission.
Now we have a ticker which tracks the latest finalized block header; it is mainly used for setting the proper Reference Block ID when preparing the Cadence tx for submission.
Later on, we plan to remove the COA account fetching as well.
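A minimal sketch of how the cached header feeds transaction building with the flow-go-sdk; the gateway's real buildTransaction also sets the payer, proposal key, and signatures, all elided here:

```go
package main

import "github.com/onflow/flow-go-sdk"

// buildTx uses the cached header only for the reference block ID; the
// resulting tx stays valid for roughly flow.DefaultTransactionExpiry
// (600) blocks past that header.
func buildTx(script []byte, header *flow.BlockHeader) *flow.Transaction {
	return flow.NewTransaction().
		SetScript(script).
		SetReferenceBlockID(header.ID)
}
```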