
[Discuss] consider pausing background task if no pending txs #1318

Open · jgalat opened this issue Sep 20, 2024 · 8 comments

Labels: bug (Something isn't working) · c-provider · c-pubsub (Pertaining to the pubsub crate)

Comments

@jgalat commented Sep 20, 2024

Component

provider, pubsub

What version of Alloy are you on?

0.3.6

Operating System

Linux

Describe the bug

I noticed my RPC provider getting spammed with eth_blockNumber and eth_getBlockByNumber calls after sending a tx.
In my use case I am not even watching pending transactions, but it still starts spamming and doesn't stop unless I restart the process.

Here is a quick reproduction video and a repository.
The repo uses 0.3.6, but I also tested with 0.2.1 and 0.1.4, and they all behave the same.
In this scenario I am watching and confirming that the transaction went through.

https://github.com/jgalat/alloy-repro-0

(reproduction video: cut.mp4)
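
For context, the repro boils down to roughly the following (a minimal sketch, assuming alloy 0.3.x against a local dev node with unlocked accounts; see the linked repo for the exact code):

    use alloy::{
        providers::{Provider, ProviderBuilder, WsConnect},
        rpc::types::TransactionRequest,
    };
    use eyre::Result;

    #[tokio::main]
    async fn main() -> Result<()> {
        let ws = WsConnect::new("ws://localhost:8545");
        let provider = ProviderBuilder::new()
            .with_recommended_fillers()
            .on_ws(ws)
            .await?;

        // Dev-node account; a simple self-send is enough to trigger the behavior.
        let wallet = provider.get_accounts().await?[0];
        let tx = TransactionRequest {
            from: Some(wallet),
            to: Some(wallet.into()),
            ..Default::default()
        };

        // Wait for the receipt, then just stay alive: the provider keeps
        // polling eth_blockNumber / eth_getBlockByNumber from here on.
        let _receipt = provider.send_transaction(tx).await?.get_receipt().await?;
        tokio::time::sleep(std::time::Duration::from_secs(60)).await;
        Ok(())
    }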
jgalat added the "bug" (Something isn't working) label Sep 20, 2024
@nhtyy commented Sep 20, 2024

There is a check for is_local; maybe there could be a setter at the provider level, but I don't think this is a bug.

@jgalat (Author) commented Sep 20, 2024

@nhtyy Thanks for answering, but why is it pinging?
Some RPC providers charge credits for this call; e.g., Alchemy charges 10 CU.

Also, I see that this doesn't start unless I send a signed tx; eth_call and eth_subscribe don't trigger the spam.

@DaniPopes (Member) commented

This is the intended behavior. It's the background task that follows the chain for handling pending transactions. It's started when the first transaction is sent and closed when the last provider is dropped. I guess it could also be paused if there are no pending transactions? cc @klkvr @mattsse
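
For reference, the proposed pause could look something like the following. This is not alloy's actual heartbeat code, just a self-contained sketch of the idea: the poller parks on a Notify while the pending-tx count is zero, so no RPC calls go out between transactions (poll_chain_once is a hypothetical stand-in):

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::{sync::Arc, time::Duration};
    use tokio::sync::Notify;

    struct Heartbeat {
        pending: AtomicUsize, // number of txs currently being watched
        wake: Notify,         // woken when the first tx registers
    }

    impl Heartbeat {
        async fn run(self: Arc<Self>, poll_interval: Duration) {
            loop {
                // Pause: with nothing to watch there is no reason to follow the chain.
                while self.pending.load(Ordering::Acquire) == 0 {
                    self.wake.notified().await;
                }
                // Hypothetical stand-in for the real work:
                // eth_blockNumber / eth_getBlockByNumber + dispatching receipts.
                poll_chain_once().await;
                tokio::time::sleep(poll_interval).await;
            }
        }

        fn register_tx(&self) {
            if self.pending.fetch_add(1, Ordering::AcqRel) == 0 {
                self.wake.notify_one(); // resume the paused poller
            }
        }

        fn complete_tx(&self) {
            self.pending.fetch_sub(1, Ordering::AcqRel);
        }
    }

    async fn poll_chain_once() { /* placeholder */ }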

@jgalat (Author) commented Sep 20, 2024

Thanks for taking a look @DaniPopes. After reading nhtyy's comment I've been trying to find an alternative that at least increases the polling interval, but it seems to be impossible. Here's what I've been trying:

    let ws = WsConnect::new("ws://localhost:8545");
    let transport = ws.into_service().await?;

    // Attempt 1: set the interval while building the client.
    let client = ClientBuilder::default()
        .transport(transport, false)
        .with_poll_interval(std::time::Duration::from_secs(3_600));

    // Attempt 2: set it again on the built client.
    client
        .inner()
        .set_poll_interval(std::time::Duration::from_secs(3_600));

    let provider = ProviderBuilder::new()
        .with_recommended_fillers()
        .with_chain(NamedChain::Optimism)
        .on_client(client);

Neither attempt works. At least passing is_local = false to .transport increased the interval, though not to the 7 seconds shown here, but rather to 1 second.

@DaniPopes (Member) commented

I can't reproduce; I can set the interval in your repro and it works:

diff --git a/src/main.rs b/src/main.rs
index 3339d62..7113597 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -5,6 +5,7 @@ use alloy::{
     rpc::types::TransactionRequest,
 };
 use eyre::Result;
+use std::time::Duration;
 
 #[tokio::main]
 async fn main() -> Result<()> {
@@ -13,6 +14,7 @@ async fn main() -> Result<()> {
         .with_recommended_fillers()
         .on_ws(ws)
         .await?;
+    provider.client().set_poll_interval(Duration::from_secs(2));
 
     let wallet = provider.get_accounts().await?[0];
 

It won't work if you set it after sending the first transaction, since the interval is read once when the background task starts and stays fixed until shutdown.
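
In other words, the ordering that works looks like this (condensing the diff above into a minimal sketch; same API, assuming alloy 0.3.x):

    use alloy::providers::{Provider, ProviderBuilder, WsConnect};
    use std::time::Duration;

    #[tokio::main]
    async fn main() -> eyre::Result<()> {
        let ws = WsConnect::new("ws://localhost:8545");
        let provider = ProviderBuilder::new()
            .with_recommended_fillers()
            .on_ws(ws)
            .await?;

        // Pin the interval BEFORE the first send_transaction: the heartbeat
        // reads it once when it starts and never re-reads it.
        provider.client().set_poll_interval(Duration::from_secs(2));

        // ... send transactions as usual; the heartbeat now ticks every 2s.
        Ok(())
    }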

@Alexangelj commented

> This is the intended behavior. It's the background task that follows the chain for handling pending transactions. It's started when the first transaction is sent and closed when the last provider is dropped. I guess it could also be paused if there are no pending transactions? cc @klkvr @mattsse

That makes sense (I didn't expect that, though). So after the first transaction is sent, this polling starts and will handle getting the receipts for any future transactions? And the provider needs to be completely dropped/reset to stop the polling?

How I expected it to work was to poll until we get the transaction receipt (at least for get_receipt), then stop polling until a new transaction is being waited for. Being able to pause as you suggested would be good; where would this happen? If you point me to the place you think this logic should go, I can try a PR.

@yash-atreya (Member) commented

Changed the title as per #1318 (comment).

In the meantime, setting is_local = false or setting the poll interval explicitly (#1318 (comment)) fixes this.
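
Roughly, both workarounds look like this (a sketch combining the snippets from earlier in the thread; assuming alloy ~0.3, where import paths and builder signatures may differ between versions):

    use alloy::{
        providers::{Provider, ProviderBuilder, WsConnect},
        rpc::client::ClientBuilder,
    };
    use std::time::Duration;

    #[tokio::main]
    async fn main() -> eyre::Result<()> {
        let ws = WsConnect::new("ws://localhost:8545");
        let transport = ws.into_service().await?;

        // Workaround (a): is_local = false, so the transport is treated as
        // remote and gets the longer default poll interval.
        let client = ClientBuilder::default().transport(transport, false);

        let provider = ProviderBuilder::new()
            .with_recommended_fillers()
            .on_client(client);

        // Workaround (b): pin the interval explicitly, before the first
        // send_transaction (see the earlier comment on ordering).
        provider.client().set_poll_interval(Duration::from_secs(60));

        Ok(())
    }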

yash-atreya changed the title from "[Bug] Provider keeps spamming node after TX confirmed" to "[Bug] consider pausing background task if no pending txs" Feb 20, 2025
yash-atreya added the "discuss" (needs discussion) label Feb 20, 2025
yash-atreya changed the title to "[Discuss] consider pausing background task if no pending txs" Feb 20, 2025
yash-atreya added the "c-provider" and "c-pubsub" (Pertaining to the pubsub crate) labels Feb 20, 2025
jenpaff added this to Alloy Feb 28, 2025
jenpaff moved this to Todo in Alloy Feb 28, 2025
jenpaff removed the "discuss" (needs discussion) label Feb 28, 2025
@scolear commented Mar 5, 2025

Noticed this too today via increased Alchemy usage.

So, just to help us decide on our best course of action: will this behavior be changed in the near future, and if so, is there an ETA? :)

For now we see two options as workarounds (we have a couple of transactions per day on this service; otherwise it should be sleeping):

  1. Increase set_poll_interval a lot, e.g. to 60+ seconds. It will still keep pinging, but far less often. Not sure yet how this would affect a production network; it seems to work fine locally.
  2. Make the provider a local variable in every method, so it is recreated and destroyed on every function call (suboptimal; see the sketch below).
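
Option 2 could look like this (a sketch with a hypothetical send_once helper; per the earlier comment, the background task is closed when the last provider handle is dropped):

    use alloy::{
        providers::{Provider, ProviderBuilder, WsConnect},
        rpc::types::TransactionRequest,
    };

    // Hypothetical helper: the provider lives only for the duration of the call.
    async fn send_once(url: &str, tx: TransactionRequest) -> eyre::Result<()> {
        let provider = ProviderBuilder::new()
            .with_recommended_fillers()
            .on_ws(WsConnect::new(url))
            .await?;
        let receipt = provider.send_transaction(tx).await?.get_receipt().await?;
        println!("mined in block {:?}", receipt.block_number);
        Ok(())
    } // provider dropped here; its polling task stops with it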

Thanks!
