
gossip backpressure leads to unresponsive states and / or ∞ discovery cycle #47


Open · adiibanez opened this issue Mar 25, 2025 · 13 comments


@adiibanez

@b5 As discussed, here are a bunch of logs.

The test can be configured with whatever settings are required. 100 nodes work just fine; 10 or 5, not so much.
https://github.com/adiibanez/iroh_ex/blob/main/test/iroh_ex_test.exs#L6

This is quite an artificial test and doesn't really reflect what I need iroh to do.
Nevertheless, I would be interested in understanding the constraints when operating with small swarm sizes and lots of small messages, e.g. a bunch of high-sampling-rate sensors.

iroh_backpressure_disco.zip

@adiibanez
Author

adiibanez commented Mar 25, 2025

I have been playing around a little more with iroh-gossip, trying to get an understanding of the APIs and their capabilities / limitations, and I have a question regarding the gossip messages.

I see that the little swarm of 20 nodes arranges neatly into clusters. I haven't figured out yet how to tell "real connections" apart from "passive" address book entries.

But my question is about the messages. There should be 10 messages received by every single node, yet I see quite a variation in the number of messages and especially in the ordering of the messages received.

When I send all messages from a single node (the one that subscribed to the topic / created the ticket) things look a bit more homogeneous, but still, some nodes receive only very few or no messages at all.

When I send the messages from randomly selected nodes things look very different from node to node in terms of numbers and ordering.

The data is based on discovery_stream and the gossip receiver event loop of every node.
I might very well still have some issues in the Rust code, but the different behaviour caught my attention.

Are there any examples implemented in rust only that replicate such scenarios with a bunch of nodes and gossip messages being sent around?

nodes_peers_messages_logs.txt

@flub
Contributor

flub commented Mar 26, 2025

Hi, do you mind describing in a bit more detail what happens? How do you observe "10, 5 not so much"? What are the symptoms, and what behaviour do you expect? Also, what is your network topology? Please describe your test and observed problems in as much detail as possible so we know what to look for.

@adiibanez
Author

adiibanez commented Mar 26, 2025

Hi flub.

I'm not really sure what is happening, but it's basically an "uncontrolled" discovery cycle after N messages, with many magic disco events (I think the logs are the best description of the behaviour). Sometimes that loop blocks the terminal too (happened 1-2 times).

What I expect, in particular when starting all nodes on a "trivial" loopback network interface, is a "constrained" number of discovery events (it should somehow correlate with the number of nodes).
Otherwise no big expectations: start nodes, form a swarm, send messages from a single node but also from multiple nodes, receive the very same number of messages, if possible ordered to some degree, and let me hook into the async event stream so that I can report the events back as messages to the Elixir part of the system.

All nodes are started on the same network, i.e. on the same machine (haven't ventured into any distributed setups yet).
It is an ASUS router Wi-Fi with a fiber-optic uplink, but I see the same behaviour when using a mobile hotspot or when running the same tests on GitHub runners.

I did a little test setup on GitHub. I can adjust it to whatever makes sense to test; I have very little detailed knowledge about iroh, so I'm not really sure what makes sense to test in detail. https://github.com/adiibanez/iroh_ex/actions/runs/14074228206/job/39414287222
I can also remove some of the parts of my code that might be triggering the problems.

I see similar behaviour if I change the code to the commented version below:

I'm starting to suspect that there is something going on related to tokio?
I now remember that somewhere in an iroh gossip example both tokio and a std thread are used? I think the std thread was for the stdin input loop, with a comment saying why, something like "because things ...". I can't find it anymore.
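
If I remember correctly, the pattern was that stdin is read on a plain std::thread because blocking reads would otherwise park a tokio worker, and the lines are handed to the async side over a channel. A rough sketch from memory (the function and channel names are mine, not the example's):

```rust
use std::io::BufRead;

use tokio::sync::mpsc;

// Blocking stdin reads happen on a dedicated OS thread so they never park a
// tokio worker; lines are handed to the async side over a bounded channel.
fn spawn_stdin_loop() -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel::<String>(64);
    std::thread::spawn(move || {
        let stdin = std::io::stdin();
        for line in stdin.lock().lines() {
            let Ok(line) = line else { break };
            // blocking_send is fine here because this is not a tokio worker thread.
            if tx.blocking_send(line).is_err() {
                break; // receiver dropped, stop reading
            }
        }
    });
    rx
}
```

Back to my actual code: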


pub static RUNTIME: Lazy<Runtime> =
    Lazy::new(|| tokio::runtime::Runtime::new().expect("Failed to create Tokio runtime"));


This variant runs without the "disco discovery cycle", but the calls to msg_env.send_and_clear(&pid, |env| { ... }) never happen:

    RUNTIME.spawn(async move {
        // let mut names = HashMap::new();
        let mut receiver = mpsc_event_receiver_clone.write().await;
        let mut msg_env = OwnedEnv::new();

        while let Some(event) = receiver.recv().await {
            tracing::debug!("📩 Received event: {:?}", event);

            if let Event::Gossip(GossipEvent::Received(msg)) = event {
                match Message::from_bytes(&msg.content) {
                    Ok(message) => {
                        match message {
                            Message::AboutMe { from, name } => {
                                // names.insert(from, name.clone());
                                tracing::info!("💬 FROM: {} MSG: {}", from.fmt_short(), name);

                                if let Err(e) = msg_env.send_and_clear(&pid, |env| {
                                    (
                                        atoms::iroh_gossip_message_received(),
                                        node_id_short.clone(),
                                        name.clone(),
                                    )
                                        .encode(env)
                                }) {
                                    tracing::debug!("⚠️ Failed to send discovery message: {:?}", e);
                                }
                            }
                            Message::Message { from, text } => {
                                // let name = names.get(&from).map_or_else(|| from.fmt_short(), String::to_string);
                                // tracing::info!("📝 {}: {}", name, text);
                                tracing::info!("📝 {}: {}", from, text);
                            }
                        }
                    }
                    Err(e) => {
                        tracing::error!("❌ Failed to parse message: {:?}", e);
                    }
                }
            }
        }
    });



This variant runs with the "disco discovery cycle" for any number of nodes (5, 10, 100, ...); the configuration doesn't matter.
Also, Joined / NeighborUp / NeighborDown never trigger (I might be using them wrong):

// 🔄 **Message Processing Loop**
    /*RUNTIME.spawn(async move {
        let mut msg_env = OwnedEnv::new();
        let mut receiver = mpsc_event_receiver.write().await;

        while let Some(event) = receiver.recv().await {
            tracing::debug!("📩 Received event: {:?}", event);

            match event {
                Event::Gossip(GossipEvent::Joined(keys)) => {
                    if let Err(e) = msg_env.send_and_clear(&pid, |env| {
                        (
                            atoms::iroh_gossip_joined(),
                            node_id_short.clone(),
                            keys.len(),
                        )
                            .encode(env)
                    }) {
                        tracing::debug!("⚠️ Failed to send neighbor up message: {:?}", e);
                    }
                    tracing::info!("Neighbor up: {:?}", keys);
                }
                Event::Gossip(GossipEvent::NeighborUp(key)) => {
                    if let Err(e) = msg_env.send_and_clear(&pid, |env| {
                        (
                            atoms::iroh_gossip_neighbor_up(),
                            node_id_short.clone(),
                            key.fmt_short(),
                        )
                            .encode(env)
                    }) {
                        tracing::debug!("⚠️ Failed to send neighbor up message: {:?}", e);
                    }
                    tracing::info!("Neighbor up: {}", key);
                }

                Event::Gossip(GossipEvent::NeighborDown(key)) => {
                    if let Err(e) = msg_env.send_and_clear(&pid, |env| {
                        (
                            atoms::iroh_gossip_neighbor_down(),
                            node_id_short.clone(),
                            key.fmt_short(),
                        )
                            .encode(env)
                    }) {
                        tracing::debug!("⚠️ Failed to send neighbor down message: {:?}", e);
                    }
                    tracing::info!("Neighbor down: {}", key);
                }
                Event::Gossip(GossipEvent::Received(msg)) => {
                    match Message::from_bytes(&msg.content) {
                        Ok(Message::AboutMe { from, name }) => {
                            if let Err(e) = msg_env.send_and_clear(&pid, |env| {
                                (
                                    atoms::iroh_gossip_message_received(),
                                    node_id_short.clone(),
                                    name.clone(),
                                )
                                    .encode(env)
                            }) {
                                tracing::debug!("⚠️ Failed to send discovery message: {:?}", e);
                            }
                            tracing::info!("💬 FROM: {} MSG: {}", from.fmt_short(), name);
                        }

                        Ok(Message::Message { from, text }) => {
                            tracing::info!("📝 {}: {}", from, text);
                        }

                        Err(e) => {
                            tracing::error!("❌ Failed to parse message: {:?}", e);
                        }
                    }
                }

                _ => {
                    tracing::debug!("🔍 Ignored unhandled event: {:?}", event);
                }
            }
        }
    });*/

@flub
Contributor

flub commented Mar 26, 2025

> I'm not really sure what is happening, but it's basically an "uncontrolled" discovery cycle after N messages, with many magic disco events (I think the logs are the best description of the behaviour). Sometimes that loop blocks the terminal too (happened 1-2 times).

Could you point to line numbers in the logs you posted for the "uncontrolled discovery cycle"?

@adiibanez
Author

@flub To summarize:

This is supposed to be 100k messages:

2025-03-26T10:20:37.301664Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41927
2025-03-26T10:20:37.301669Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41955
2025-03-26T10:20:37.301675Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41783
2025-03-26T10:20:37.301681Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41823
2025-03-26T10:20:37.301686Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41850
2025-03-26T10:20:37.301694Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42031
2025-03-26T10:20:37.301699Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42022
2025-03-26T10:20:37.301704Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41915
2025-03-26T10:20:37.301710Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41920
2025-03-26T10:20:37.301737Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41899
2025-03-26T10:20:37.301743Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41923
2025-03-26T10:20:37.301749Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41895
2025-03-26T10:20:37.301754Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42119
2025-03-26T10:20:37.301759Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41984
2025-03-26T10:20:37.301764Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42004
2025-03-26T10:20:37.301770Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42003
2025-03-26T10:20:37.301775Z  INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41816
2025-03-26T10:20:39.169913Z  WARN ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=02eac76f11 src=Udp(192.168.64.1:57639)}:handle_pong{m=Pong { tx_id: TransactionId(0x348F8E47B9FDE6CBDE5607AC), ping_observed_addr: Udp(192.168.64.1:59074) } src=Udp(192.168.64.1:57639)}: iroh::magicsock::node_map::node_state: received pong with unknown transaction id tx=348f8e47b9fde6cbde5607ac
2025-03-26T10:20:43.620743Z  INFO connect{me="f2b8b196c0" alpn="/iroh-gossip/0" remote="02eac76f11"}:try_send:get_send_addrs{node=02eac76f11}: iroh::magicsock::node_map::node_state: new connection type typ=mixed(udp: 192.168.64.1:57639, relay: https://euw1-1.relay.iroh.network./)
2025-03-26T10:20:43.623089Z  INFO ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=02eac76f11 src=Udp(192.168.33.115:57639)}:handle_pong{m=Pong { tx_id: TransactionId(0xE52D38E76EE14AAEF8E0705A), ping_observed_addr: Udp(192.168.33.115:59074) } src=Udp(192.168.33.115:57639)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.33.115:57639 latency=969.375µs trust_for=6.499986292s
2025-03-26T10:20:43.700950Z  INFO ep{me=02eac76f11}:magicsock:poll_recv:disco_in{node=f2b8b196c0 src=Udp(192.168.33.115:59074)}:handle_pong{m=Pong { tx_id: TransactionId(0x9EF0A22FD7B9AA5D512FBDC4), ping_observed_addr: Udp(192.168.33.115:57639) } src=Udp(192.168.33.115:59074)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.33.115:59074 latency=681.417µs trust_for=6.499995334s
2025-03-26T10:20:44.350378Z  INFO router.accept:try_send:get_send_addrs{node=f2b8b196c0}: iroh::magicsock::node_map::node_state: new connection type typ=direct(192.168.33.115:59074)
2025-03-26T10:20:44.622148Z  INFO connect{me="f2b8b196c0" alpn="/iroh-gossip/0" remote="02eac76f11"}:try_send:get_send_addrs{node=02eac76f11}: iroh::magicsock::node_map::node_state: new connection type typ=direct(192.168.33.115:57639)
2025-03-26T10:20:56.043344Z  INFO ep{me=02eac76f11}:magicsock:actor: iroh::magicsock: home is now relay https://euw1-1.relay.iroh.network./, was Some(RelayUrl("https://use1-1.relay.iroh.network./"))
2025-03-26T10:20:56.240799Z  INFO ep{me=3e5d3a6a58}:magicsock:actor: iroh::magicsock: home is now relay https://euw1-1.relay.iroh.network./, was Some(RelayUrl("https://use1-1.relay.iroh.network./"))
2025-03-26T10:21:00.744229Z  INFO ep{me=39494967ff}:magicsock:actor: iroh::magicsock: home is now relay https://euw1-1.relay.iroh.network./, was Some(RelayUrl("https://use1-1.relay.iroh.network./"))
2025-03-26T10:21:04.171326Z  WARN ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=374f4efb50 src=Udp(192.168.64.1:59871)}:handle_pong{m=Pong { tx_id: TransactionId(0xAFDCF0DA50163CE8EAF44812), ping_observed_addr: Udp(192.168.64.1:59074) } src=Udp(192.168.64.1:59871)}: iroh::magicsock::node_map::node_state: received pong with unknown transaction id tx=afdcf0da50163ce8eaf44812
2025-03-26T10:21:05.940996Z  WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:05.974000Z  WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:06.014702Z  WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:06.018138Z  WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:09.170251Z  INFO ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=374f4efb50 src=Udp(192.168.33.115:59871)}:handle_pong{m=Pong { tx_id: TransactionId(0xE4AAE18670142194415087CF), ping_observed_addr: Udp(192.168.33.115:59074) } src=Udp(192.168.33.115:59871)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.33.115:59871 latency=1.494583ms trust_for=6.499998375s
2025-03-26T10:21:09.373418Z  INFO ep{me=374f4efb50}:magicsock:poll_recv:disco_in{node=f2b8b196c0 src=Udp(192.168.64.1:59074)}:handle_pong{m=Pong { tx_id: TransactionId(0xA96BD857C6C2E23EE400A870), ping_observed_addr: Udp(192.168.64.1:59871) } src=Udp(192.168.64.1:59074)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.64.1:59074 latency=1.158125ms trust_for=6.499997541s
2025-03-26T10:21:15.960413Z  WARN ep{me=f2b8b196c0}:magicsock:net_report.actor:reportgen.actor:probe: iroh_net_report::reportgen: probe failed: io error: No route to host (os error 65): No route to host (os error 65) probe=IcmpV6 { delay: 222.3358ms, node: RelayNode { url: RelayUrl("https://aps1-1.relay.iroh.network./"), stun_only: false, stun_port: 3478, quic: Some(RelayQuicConfig { port: 7842 }) } }
2025-03-26T10:21:15.960937Z  WARN ep{me=f2b8b196c0}:magicsock:net_report.actor:reportgen.actor:probe: iroh_net_report::reportgen: probe failed: io error: No route to host (os error 65): No route to host (os error 65) probe=IcmpV6 { delay: 222.3358ms, node: RelayNode { url: RelayUrl("https://use1-1.relay.iroh.network./"), stun_only: false, stun_port: 3478, quic: Some(RelayQuicConfig { port: 7842 }) } }
^C^C%

Infinite "something loop" after that last message.

@adiibanez
Author

@flub

Line numbers:

iroh_test_5_nodes_100k_messages.log:305046:2025-03-25T16:39:08.690946Z INFO iroh_ex: 💬 FROM: a9cc5ffeed MSG: 40074

iroh_test_10_nodes_100k_messages.log:438808:2025-03-25T16:41:21.578386Z INFO iroh_ex: 💬 FROM: e542d760bb MSG: 2886

Let me know if I can provide more specific logs / tests

@flub
Contributor

flub commented Mar 26, 2025

> @flub
>
> Line numbers:
>
> iroh_test_5_nodes_100k_messages.log:305046:2025-03-25T16:39:08.690946Z INFO iroh_ex: 💬 FROM: a9cc5ffeed MSG: 40074

Apologies, I'm not sure I'm understanding your question correctly. I assume you mean that after this line an "uncontrolled discovery cycle" is happening? But I don't see such a thing. Connections are being kept alive between 5 nodes, sure. But time is ticking along nicely and connections look healthy to me.

@adiibanez
Author

I'm not really familiar with the standard behaviours and what is considered the normal lifecycle of an endpoint / node / connection, ...

But I can tell with some degree of confidence that what I'm seeing is very likely not supposed to be happening.

Are we on the same page that 100k messages are supposed to be delivered to all peers in the gossip topic? Not only 41k messages or whatever the logs show.
Also, there shouldn't be major "discovery" activity going on (such as assigning new ports from scratch, ...) after the initial swarm-forming phase? In particular not while the messages are being sent / delivered?

I currently even wait for a few seconds between connecting the nodes and sending out any messages, and I also wait another few seconds after all the messages have been sent.

As mentioned, I'm happy to adjust the tests to whatever makes sense or helps identify the root of that behaviour.
....

2025-03-26T10:20:37.301704Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41915
2025-03-26T10:20:37.301710Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41920
2025-03-26T10:20:37.301737Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41899
2025-03-26T10:20:37.301743Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41923
2025-03-26T10:20:37.301749Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41895
2025-03-26T10:20:37.301754Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42119
2025-03-26T10:20:37.301759Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41984
2025-03-26T10:20:37.301764Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42004
2025-03-26T10:20:37.301770Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:42003
2025-03-26T10:20:37.301775Z INFO iroh_ex: 💬 FROM: b8b5230cc3 MSG: MSG:41816

This is supposed to continue up to 100k messages. Instead it stops processing any gossip messages and, continuously (until the processes are killed or the test times out and kills them), does a whole lot of the following:

2025-03-26T10:20:39.169913Z WARN ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=02eac76f11 src=Udp(192.168.64.1:57639)}:handle_pong{m=Pong { tx_id: TransactionId(0x348F8E47B9FDE6CBDE5607AC), ping_observed_addr: Udp(192.168.64.1:59074) } src=Udp(192.168.64.1:57639)}: iroh::magicsock::node_map::node_state: received pong with unknown transaction id tx=348f8e47b9fde6cbde5607ac
2025-03-26T10:20:43.620743Z INFO connect{me="f2b8b196c0" alpn="/iroh-gossip/0" remote="02eac76f11"}:try_send:get_send_addrs{node=02eac76f11}: iroh::magicsock::node_map::node_state: new connection type typ=mixed(udp: 192.168.64.1:57639, relay: https://euw1-1.relay.iroh.network./)
2025-03-26T10:20:43.623089Z INFO ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=02eac76f11 src=Udp(192.168.33.115:57639)}:handle_pong{m=Pong { tx_id: TransactionId(0xE52D38E76EE14AAEF8E0705A), ping_observed_addr: Udp(192.168.33.115:59074) } src=Udp(192.168.33.115:57639)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.33.115:57639 latency=969.375µs trust_for=6.499986292s
2025-03-26T10:20:43.700950Z INFO ep{me=02eac76f11}:magicsock:poll_recv:disco_in{node=f2b8b196c0 src=Udp(192.168.33.115:59074)}:handle_pong{m=Pong { tx_id: TransactionId(0x9EF0A22FD7B9AA5D512FBDC4), ping_observed_addr: Udp(192.168.33.115:57639) } src=Udp(192.168.33.115:59074)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.33.115:59074 latency=681.417µs trust_for=6.499995334s
2025-03-26T10:20:44.350378Z INFO router.accept:try_send:get_send_addrs{node=f2b8b196c0}: iroh::magicsock::node_map::node_state: new connection type typ=direct(192.168.33.115:59074)
2025-03-26T10:20:44.622148Z INFO connect{me="f2b8b196c0" alpn="/iroh-gossip/0" remote="02eac76f11"}:try_send:get_send_addrs{node=02eac76f11}: iroh::magicsock::node_map::node_state: new connection type typ=direct(192.168.33.115:57639)
2025-03-26T10:20:56.043344Z INFO ep{me=02eac76f11}:magicsock:actor: iroh::magicsock: home is now relay https://euw1-1.relay.iroh.network./, was Some(RelayUrl("https://use1-1.relay.iroh.network./"))
2025-03-26T10:20:56.240799Z INFO ep{me=3e5d3a6a58}:magicsock:actor: iroh::magicsock: home is now relay https://euw1-1.relay.iroh.network./, was Some(RelayUrl("https://use1-1.relay.iroh.network./"))
2025-03-26T10:21:00.744229Z INFO ep{me=39494967ff}:magicsock:actor: iroh::magicsock: home is now relay https://euw1-1.relay.iroh.network./, was Some(RelayUrl("https://use1-1.relay.iroh.network./"))
2025-03-26T10:21:04.171326Z WARN ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=374f4efb50 src=Udp(192.168.64.1:59871)}:handle_pong{m=Pong { tx_id: TransactionId(0xAFDCF0DA50163CE8EAF44812), ping_observed_addr: Udp(192.168.64.1:59074) } src=Udp(192.168.64.1:59871)}: iroh::magicsock::node_map::node_state: received pong with unknown transaction id tx=afdcf0da50163ce8eaf44812
2025-03-26T10:21:05.940996Z WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:05.974000Z WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:06.014702Z WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:06.018138Z WARN router.accept: iroh::protocol: Handling incoming connecting ended with error: timed out
2025-03-26T10:21:09.170251Z INFO ep{me=f2b8b196c0}:magicsock:poll_recv:disco_in{node=374f4efb50 src=Udp(192.168.33.115:59871)}:handle_pong{m=Pong { tx_id: TransactionId(0xE4AAE18670142194415087CF), ping_observed_addr: Udp(192.168.33.115:59074) } src=Udp(192.168.33.115:59871)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.33.115:59871 latency=1.494583ms trust_for=6.499998375s
2025-03-26T10:21:09.373418Z INFO ep{me=374f4efb50}:magicsock:poll_recv:disco_in{node=f2b8b196c0 src=Udp(192.168.64.1:59074)}:handle_pong{m=Pong { tx_id: TransactionId(0xA96BD857C6C2E23EE400A870), ping_observed_addr: Udp(192.168.64.1:59871) } src=Udp(192.168.64.1:59074)}: iroh::magicsock::node_map::best_addr: selecting new direct path for node addr=192.168.64.1:59074 latency=1.158125ms trust_for=6.499997541s
2025-03-26T10:21:15.960413Z WARN ep{me=f2b8b196c0}:magicsock:net_report.actor:reportgen.actor:probe: iroh_net_report::reportgen: probe failed: io error: No route to host (os error 65): No route to host (os error 65) probe=IcmpV6 { delay: 222.3358ms, node: RelayNode { url: RelayUrl("https://aps1-1.relay.iroh.network./"), stun_only: false, stun_port: 3478, quic: Some(RelayQuicConfig { port: 7842 }) } }
2025-03-26T10:21:15.960937Z WARN ep{me=f2b8b196c0}:magicsock:net_report.actor:reportgen.actor:probe: i

The longest I kept it going in that state was ~10-15 min; iroh, or rather my way of using iroh, never recovered from that "discovery loop". The zip file contains megabytes of this type of activity.

@adiibanez
Author

adiibanez commented Mar 26, 2025

@flub Not directly related to the issue, but somewhat in the context of my attempt at understanding the constraints of broadcasting messages to a number of nodes.

I have completely abandoned sending many messages and am currently only using 10 nodes and 10 messages, sent from a randomly selected node with a random delay.

I wait 2 seconds after connecting the nodes before any messages are sent. Whether I don't wait at all after sending those messages or wait 1 sec / 30 sec, so far I have never seen all nodes receive all 10 messages. I see 6, 7, 8, 9, or 10 messages on each of the nodes, but never 10 messages on all 10 nodes. There is a data structure at the end of the logs showing that behaviour.

If I send all messages from the very same single node (e.g. the one that "creates" the ticket) I do indeed see 10 messages on 10 nodes. Sometimes I have to wait 10-30 sec; a few times I saw 10 messages without any waiting.

While waiting 1 sec - 30 sec I also see a lot of the very same discovery messages as in the "unresponsive state" described above. That's why I'm posting this comment and uploading the logs.

Is what I'm describing to be expected, or is there something else going on in my integration of iroh?
I would kind of expect 10 messages on 10 nodes no matter whether one node sends or many nodes send messages. I would also expect an "idle" phase with no or only very few discovery messages if there isn't anything to do?

iroh_logs_10nodes_10messages.zip

@adiibanez
Author

adiibanez commented Apr 1, 2025

I can say that the Elixir binding iroh_ex has been stabilized, at least to a degree that makes it usable, while still a bit rough.

There is a Livebook, in case that has any relevance: https://github.com/adiibanez/iroh_ex. It's a bit "hacky" but helped me figure out what was going on instead of staring at log files. The relevant parts, i.e. the UI elements, are at the end of the Livebook.
Livebook / Elixir loads a precompiled iroh Rust lib and should run on a bunch of platforms; I have only used macOS. It also only supports around 50-100 nodes and a limited number of messages (due to the current approach to visualization). But with a bit of work, Elixir (with or without Livebook) could easily coordinate an arbitrary number of nodes / messages over many machines. In my case I plan to run iroh nodes on both a small clustered backend and mobile / embedded devices.

Regarding sending 100k messages but only seeing ~40k being processed and then a stop / lock-up: I was really hammering away in those scenarios where things became "unresponsive", basically with unbounded concurrency from the Elixir side while sending out the messages. In the meantime I'm limiting that to the number of nodes. I haven't done any further testing in that regard, nor tested more than 100 nodes / 10k messages, but things seem to start getting funky / unresponsive at roughly 2x node-count concurrency. Maybe some type of backpressure / overload protection is required?
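
To make that concrete, this is roughly the shape I'm moving towards on the sending side: cap the number of in-flight broadcasts instead of spawning one task per message. A minimal sketch, where the broadcast closure, max_in_flight and the use of anyhow are my own placeholders rather than anything iroh prescribes:

```rust
use std::{future::Future, sync::Arc};

use tokio::sync::Semaphore;

/// Run `broadcast` once per message, but never keep more than `max_in_flight`
/// calls pending at the same time. `broadcast` stands in for whatever actually
/// hands the payload to the gossip sender in the NIF.
async fn broadcast_with_limit<F, Fut>(
    messages: Vec<Vec<u8>>,
    max_in_flight: usize,
    broadcast: F,
) -> anyhow::Result<()>
where
    F: Fn(Vec<u8>) -> Fut + Send + Sync + 'static,
    Fut: Future<Output = anyhow::Result<()>> + Send + 'static,
{
    let broadcast = Arc::new(broadcast);
    let permits = Arc::new(Semaphore::new(max_in_flight));
    let mut tasks = Vec::new();

    for payload in messages {
        // This await is the backpressure point: no new broadcast task is
        // spawned while all `max_in_flight` permits are taken.
        let permit = permits.clone().acquire_owned().await?;
        let broadcast = broadcast.clone();
        tasks.push(tokio::spawn(async move {
            let res = broadcast(payload).await;
            drop(permit); // free the slot once this broadcast has completed
            res
        }));
    }

    for task in tasks {
        task.await??;
    }
    Ok(())
}
```

In the current tests I set max_in_flight to the node count, which is the limit mentioned above.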

There were a number of issues in my Rust / Elixir integration too. Some of those issues were solved by getting out of the way of the iroh event receiver loop as much as possible, i.e. not doing anything else than forwarding messages to Elixir via a dedicated tokio mpsc channel. I'm not quite sure why that event loop is so delicate.
Keeping all the receivers / event loop handles alive was also an issue, due to how rustler / Elixir interact with Rust. That mostly explains why it behaved so weirdly when only using a few nodes. But I haven't figured out why sending messages from only one node was considerably more reliable than sending from randomly selected nodes.
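
To make the "keeping handles alive" part concrete: what seems to work is holding the channel sender and the task handle inside a rustler ResourceArc that the Elixir side keeps a reference to, so nothing is dropped when the NIF call returns. A rough sketch; the struct and field names are mine, and the resource registration details differ between rustler versions:

```rust
use rustler::ResourceArc;
use tokio::{sync::mpsc, task::JoinHandle};

/// Everything that must stay alive for the lifetime of one node. The Elixir
/// side holds the ResourceArc, so these fields are only dropped once Elixir
/// lets go of the reference, not when the creating NIF call returns.
pub struct NodeHandle {
    /// Bounded channel into the forwarding loop (the capacity is an arbitrary pick).
    pub event_tx: mpsc::Sender<Vec<u8>>,
    /// Handle of the task that forwards events to the Elixir pid.
    pub forward_task: JoinHandle<()>,
}

// NodeHandle still has to be registered as a NIF resource in the library's
// `load` callback (rustler's resource registration macro); the invocation is
// omitted here because it differs between rustler versions.

// Called from the actual NIF: spawn the forwarding loop on the provided
// runtime (in my case the RUNTIME static from earlier) and hand the handles
// back to Elixir wrapped in a ResourceArc.
fn spawn_node_handle(runtime: &tokio::runtime::Runtime) -> ResourceArc<NodeHandle> {
    let (event_tx, mut event_rx) = mpsc::channel::<Vec<u8>>(1024);

    let forward_task = runtime.spawn(async move {
        while let Some(bytes) = event_rx.recv().await {
            // ... decode and msg_env.send_and_clear(...) to the Elixir pid here ...
            let _ = bytes;
        }
    });

    ResourceArc::new(NodeHandle { event_tx, forward_task })
}
```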

I thought I had Rust concurrency figured out to a mid-level degree from earlier projects, but obviously I didn't.
Streaming data from a few concurrent Bluetooth sensors seems to be way easier to handle than a few dozen p2p nodes. It was quite an experience, humility mode on; and sorry for the fuss.

One of the conclusions after this Rust concurrency experience was looking for actor-like approaches: https://github.com/slawlor/ractor ... modeled after Erlang / Elixir ;)

https://slawlor.github.io/ractor/assets/rustconf2024_presentation.pdf
Actors and Factories in Rust
Usages for distributed processing overload protection

I'll keep stabilizing the lib, but I could use some insider input on a few topics:

Could I get some information about what the expected swarm shape looks like? For 50-100 nodes I see mostly 2-5 peers per node, sometimes a few nodes with only 1 peer, and very rarely a few nodes not connecting at all.

Is there something like a connection ramp-up time that has to be respected before starting to stream away with messages?
Are there any relevant lifecycle events that I could use to react to such "negotiation things rolling out"?

Currently I send these types of events to Elixir:
iroh_node_connected,
iroh_gossip_joined, ( haven't seen any GossipEvent::Joined events yet, but I also don't know how to set up the event handler before subscribe_and_join, or whether it is relevant at all )
iroh_gossip_neighbor_up,
iroh_gossip_neighbor_down,
iroh_gossip_node_discovered, ( endpoint.discovery_stream(), currently disabled because of event loop )
iroh_gossip_message_received,
iroh_gossip_message_unhandled,

@adiibanez
Author

adiibanez commented Apr 2, 2025

[Screenshot: swarm visualization, nodes colored by peer count]

green / yellow: very common
orange / grey: rare

All peer counts are based on NeighborUp / NeighborDown events.

green: >= 3 peers
yellow: < 3 peers
orange: 1 peer
grey: no peers

Blue curved edges are "in transit" messages. I wait 1-3 sec after connecting the nodes, i.e. before sending messages (not sure what the best approach would be that fits well with what iroh is doing in the background).

@matheus23
Member

> I haven't figured out why sending messages from only one node was considerably more reliable than sending from randomly selected nodes.

iroh-gossip is based on PlumTree, which constructs a minimal spanning tree when messages are broadcast. Due to the way the protocol works, it's most efficient if the node that's broadcasting stays the same as much as possible (but can still tolerate arbitrary nodes broadcasting).

> Could I get some information about what the expected swarm shape looks like? For 50-100 nodes I see mostly 2-5 peers per node, sometimes a few nodes with only 1 peer, and very rarely a few nodes not connecting at all.
>
> Is there something like a connection ramp-up time that has to be respected before starting to stream away with messages?
> Are there any relevant lifecycle events that I could use to react to such "negotiation things rolling out"?

Nodes not connecting at all shouldn't happen. We generally aim for a 5 connection limit, as iroh-gossip is supposed to work on mobile devices by default, but you can configure that limit.

Yeah, there will be a ramp-up time while HyParView messages propagate around the network to establish the active/passive views.
Generally, there's no way to "know" when the swarm is set up. The assumption is that it's an "open" swarm, i.e. any node can join/leave at any time.
This also means that you never really know which nodes are "listening" in the current swarm, so something like reliable delivery of gossip messages isn't really a thing. You should always assume nodes might miss a message here or there, perhaps because some node didn't quite "connect" to the swarm yet, or perhaps because the swarm is reshuffling and active views change for some reason.
I'd recommend using iroh-gossip as a sort of "notification" mechanism to advertise current state. E.g. adding a sequence number to messages, or, when working with CRDTs, adding the current version vector, helps figure out whether there are any updates that need to be fetched.
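
To sketch the sequence-number idea (this is purely application-level bookkeeping on top of gossip, not an iroh-gossip API; the names are illustrative):

```rust
use std::collections::HashMap;

/// Every broadcast carries the sender id and a per-sender sequence number
/// (sequence numbers are assumed to start at 1).
#[derive(Debug, Clone)]
struct SensorUpdate {
    sender: String, // e.g. node_id.fmt_short()
    seq: u64,
    payload: Vec<u8>,
}

/// Tracks the highest sequence number seen per sender and reports gaps, so a
/// receiver knows it missed gossip messages and should fetch the current
/// state through some other channel (direct request, blob download, ...).
#[derive(Default)]
struct GapTracker {
    last_seen: HashMap<String, u64>,
}

impl GapTracker {
    /// Returns how many messages from this sender were skipped before this one.
    fn observe(&mut self, update: &SensorUpdate) -> u64 {
        let last = self.last_seen.entry(update.sender.clone()).or_insert(0);
        let missed = update.seq.saturating_sub(*last + 1);
        if update.seq > *last {
            *last = update.seq;
        }
        missed
    }
}
```

On the sending side each node just increments its own counter before broadcasting; a non-zero return from observe tells the receiver it missed something and should fetch the current state out-of-band.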

That said, there's absolutely a good chance that we're losing more messages than we have to due to some bug. E.g. if some nodes end up with 0 connections to the rest of the swarm for longer periods, that'd be weird I think.

@adiibanez
Author

adiibanez commented Apr 2, 2025

@matheus23 Thanks for the explanations, very helpful. And good to know about the PlumTree broadcasting-node efficiency.

I'm not aiming at 100% gossip reliability. My use case is sending sensor data for collaboration / visualization (most sensors don't have a very high sampling rate, some are quite high like ECG / IMU, i.e. as high as feasible without introducing issues). Some basic "dynamic" batch-size strategies are already in place.

My questions are more about what I need to do, from the Elixir node "orchestrator" side, to facilitate the best possible scenario for iroh to optimize swarm forming. In other words, I'm trying not to actively step on its toes regarding Rust / tokio / event loops, as I might have been doing earlier.

Any suggestions regarding the described peer counts, i.e. potential reasons why some nodes only use 1-2 peers while others use 3-5? I observe a "swarm forming" phase with changes, but after that it remains static. Should that be a continuous process? Maybe I'm still dropping some event loop unintentionally?

Does waiting 1-3 seconds sound reasonable? Or should I rather do some basic calculation based on a certain number of nodes having, e.g., N peers established? Or not bother at all and just go ahead with streaming data?
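
For the second option, what I have in mind is roughly the sketch below: gate on the NeighborUp events I already forward, with a timeout as a fallback. The threshold and timeout are arbitrary picks of mine, and I'm assuming the Event / GossipEvent types from iroh_gossip::net that my snippets above already use:

```rust
use std::time::Duration;

use iroh_gossip::net::{Event, GossipEvent};
use tokio::sync::mpsc;

/// Wait until `min_neighbors` NeighborUp events have been seen on `events`,
/// or give up once `max_wait` has elapsed. Returns how many neighbors came up
/// in time. Note: other events arriving while waiting are dropped in this
/// sketch; in the real code they would be forwarded to Elixir instead.
async fn wait_for_neighbors(
    events: &mut mpsc::Receiver<Event>,
    min_neighbors: usize,
    max_wait: Duration,
) -> usize {
    let deadline = tokio::time::Instant::now() + max_wait;
    let mut up = 0usize;
    while up < min_neighbors {
        match tokio::time::timeout_at(deadline, events.recv()).await {
            Ok(Some(Event::Gossip(GossipEvent::NeighborUp(_)))) => up += 1,
            Ok(Some(Event::Gossip(GossipEvent::NeighborDown(_)))) => up = up.saturating_sub(1),
            Ok(Some(_)) => {} // ignore other events while waiting
            Ok(None) => break, // channel closed, nothing more will arrive
            Err(_) => break,   // deadline reached, give up waiting
        }
    }
    up
}
```

E.g. wait_for_neighbors(&mut rx, 1, Duration::from_secs(5)).await before the first broadcast, instead of a fixed sleep.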

The tests I'm doing with quick "swarm" iterations might not be representative of what will be going on with real devices. I'm just not really familiar with the behaviours of "open" swarms, to be honest, or with how iroh handles "queued up" messages internally. That's why I'm asking.

The 0-1 peers node scenario is very rare. Just grabbed the opportunity to take a screenshot.
