When running Coder on Google Cloud VMs, we observe that Wireguard UDP packets sent between hosts are occasionally reordered. gVisor interprets this as a congestion event, which results in lower TCP performance. The events do not seem strongly correlated with network congestion.
I tested with AWS VMs, and did not observe this reordering.
I have verified that the packets are in order when delivered to the magicsock, and are in order as reported by tcpdump on the outgoing network interface (by checking the Wireguard packet counters).
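For reference, here is a minimal sketch of the kind of counter check described above (not Tailscale code; it assumes gopacket and a capture containing a single Wireguard flow). It walks a pcap and flags any transport-data packet whose nonce counter is lower than one already seen:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
	"github.com/google/gopacket/pcap"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s capture.pcap", os.Args[0])
	}
	handle, err := pcap.OpenOffline(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	var last uint64
	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for pkt := range src.Packets() {
		udp, ok := pkt.Layer(layers.LayerTypeUDP).(*layers.UDP)
		if !ok {
			continue
		}
		payload := udp.Payload
		// Wireguard transport data messages: 1-byte type (4), 3 reserved bytes,
		// 4-byte receiver index, then the 8-byte little-endian nonce counter.
		if len(payload) < 16 || payload[0] != 4 {
			continue
		}
		ctr := binary.LittleEndian.Uint64(payload[8:16])
		if ctr < last {
			fmt.Printf("out of order: counter %d arrived after %d\n", ctr, last)
		} else {
			last = ctr
		}
	}
}
```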
The packets are reordered when they arrive at the destination VM, as reported by tcpdump. The reordering seems like it might be related to Generic Receive Offloading (GRO), where multiple UDP packets from the wire are consolidated into a larger UDP packet. The reordering often seems to be correlated with GRO packet boundaries. However, I still observe the reordering even with GRO disabled on the receiving VM.
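Note that the GRO toggled with ethtool on the NIC is separate from socket-level UDP GRO, which recent wireguard-go (and therefore tailscaled) can request via the UDP_GRO socket option, if I understand correctly. As a rough way to observe coalescing boundaries, here is a sketch on a standalone test socket (placeholder port; not attached to the live tunnel) that enables UDP_GRO and logs the segment size the kernel reports for each coalesced datagram:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"net"

	"golang.org/x/sys/unix"
)

func main() {
	// Placeholder port for a standalone test socket; point a UDP sender at it.
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: 9999})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask the kernel to coalesce consecutive datagrams of the same flow
	// (socket-level UDP GRO) and report the original segment size via cmsg.
	rc, err := conn.SyscallConn()
	if err != nil {
		log.Fatal(err)
	}
	if err := rc.Control(func(fd uintptr) {
		if err := unix.SetsockoptInt(int(fd), unix.IPPROTO_UDP, unix.UDP_GRO, 1); err != nil {
			log.Fatalf("enable UDP_GRO: %v", err)
		}
	}); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 1<<16)
	oob := make([]byte, 512)
	for {
		n, oobn, _, _, err := conn.ReadMsgUDP(buf, oob)
		if err != nil {
			log.Fatal(err)
		}
		segSize := 0
		if cmsgs, err := unix.ParseSocketControlMessage(oob[:oobn]); err == nil {
			for _, c := range cmsgs {
				if c.Header.Level == unix.IPPROTO_UDP && c.Header.Type == unix.UDP_GRO && len(c.Data) >= 2 {
					// The kernel reports the on-wire segment size as a native-endian
					// integer; little-endian is assumed here (x86/arm64 VMs).
					segSize = int(binary.LittleEndian.Uint16(c.Data[:2]))
				}
			}
		}
		if segSize > 0 && n > segSize {
			fmt.Printf("coalesced: %d bytes, segment size %d (~%d wire packets)\n",
				n, segSize, (n+segSize-1)/segSize)
		} else {
			fmt.Printf("datagram: %d bytes (not coalesced)\n", n)
		}
	}
}
```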
Kernel wireguard didn't seem to have reordering issues when I set up a link between the VMs and tested throughput, and neither did iperf3 in UDP mode. So, it seems to be something particular about the way that tailscale interacts with the networking APIs in Linux.
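To take Wireguard out of the picture entirely, one could compare single-send and batched-send behavior with a small probe. Below is a sketch (addresses, port, and payload sizes are placeholders, and this is only the rough shape of wireguard-go's batched UDP send path, not its actual code) that sends sequence-numbered datagrams with sendmmsg-style batched writes and counts out-of-order arrivals on the receiver:

```go
// reorder_probe: run with -recv on the destination VM, then run the sender
// on the source VM against the receiver's address.
package main

import (
	"encoding/binary"
	"flag"
	"fmt"
	"log"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	recv := flag.Bool("recv", false, "run as receiver")
	addr := flag.String("addr", "10.0.0.2:9999", "receiver address (placeholder)")
	batch := flag.Int("batch", 8, "datagrams per sendmmsg call")
	count := flag.Int("count", 100000, "total datagrams to send")
	flag.Parse()

	if *recv {
		receive(*addr)
		return
	}
	send(*addr, *batch, *count)
}

func send(addr string, batch, count int) {
	raddr, err := net.ResolveUDPAddr("udp4", addr)
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.DialUDP("udp4", nil, raddr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	pc := ipv4.NewPacketConn(conn)
	msgs := make([]ipv4.Message, batch)
	bufs := make([][]byte, batch)
	for i := range bufs {
		bufs[i] = make([]byte, 1200) // roughly Wireguard-sized payloads
		msgs[i].Buffers = [][]byte{bufs[i]}
	}
	var seq uint64
	for seq < uint64(count) {
		for i := range bufs {
			binary.LittleEndian.PutUint64(bufs[i], seq)
			seq++
		}
		// On Linux, WriteBatch submits the whole batch with one sendmmsg call.
		// This sketch doesn't retry if fewer than len(msgs) messages are accepted.
		if _, err := pc.WriteBatch(msgs, 0); err != nil {
			log.Fatal(err)
		}
	}
}

func receive(addr string) {
	laddr, err := net.ResolveUDPAddr("udp4", addr)
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.ListenUDP("udp4", laddr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 1<<16)
	var last uint64
	var total, reordered int
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		if n < 8 {
			continue
		}
		seq := binary.LittleEndian.Uint64(buf[:8])
		total++
		if seq < last {
			reordered++
			fmt.Printf("out of order: %d after %d (%d/%d reordered)\n", seq, last, reordered, total)
		} else {
			last = seq
		}
	}
}
```

Setting -batch to 1 approximates iperf3-style one-datagram-per-send behavior, so the two modes can be compared on the same pair of VMs.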
related to #13042