
fix: respecting max_datagrams in poll_transmit #2185

Open · wants to merge 2 commits into main
Conversation

@filipe-cantarelli commented Mar 23, 2025

Hello,

I've been playing with quinn-proto recently and I noticed that, sometimes, when the maximum segment size is large enough, quinn will go over the max_datagrams limit of poll_transmit. I believe this PR fixes the issue.

Also, considering that quinn-proto is a low-level crate, it feels like this code:

let max_datagrams = match self.config.enable_segmentation_offload {
    false => 1,
    true => max_datagrams.min(MAX_TRANSMIT_SEGMENTS),
};

should be removed and left for the upper level code to decide what max limit they truly want to use.
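(For readers following along: the bug described above is that a GSO batch can contain more segments than the caller's max_datagrams allows. A minimal sketch of the capping logic, with hypothetical names since the actual patch may structure this differently:)

```rust
// Hypothetical sketch: when building a GSO batch, the number of segments
// must never exceed the caller-supplied max_datagrams, even if the payload
// and segment size would allow more. Names are illustrative, not quinn's API.
fn segments_allowed(buffered_bytes: usize, segment_size: usize, max_datagrams: usize) -> usize {
    // Segments the payload alone would fill (round up for a trailing partial segment).
    let by_payload = buffered_bytes.div_ceil(segment_size);
    // The poll_transmit contract caps the batch at max_datagrams.
    by_payload.min(max_datagrams)
}

fn main() {
    // Large payload, small cap: the cap wins (previously quinn could exceed it).
    assert_eq!(segments_allowed(12_000, 1_200, 3), 3);
    // Small payload: the payload size wins.
    assert_eq!(segments_allowed(2_400, 1_200, 10), 2);
    println!("ok");
}
```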

@Ralith (Collaborator) commented Mar 23, 2025

Also, considering that quinn-proto is a low-level crate, it feels like this code:

That code exists so that the application layer can easily opt out of GSO without needing to replace the UDP socket abstraction.

@filipe-cantarelli (Author)

Also, considering that quinn-proto is a low-level crate, it feels like this code:

That code exists so that the application layer can easily opt out of GSO without needing to replace the UDP socket abstraction.

How about max_datagrams.min(MAX_TRANSMIT_SEGMENTS)? Is it possible to remove this limitation so the app code can request more than 10 segments?

@Ralith (Collaborator) commented Mar 24, 2025

Oh, you were concerned about the upper bound? We could probably get rid of that entirely, yeah.

@filipe-cantarelli force-pushed the fix-max-datagrams-poll-transmit branch from d339e2c to aeffe92 on March 24, 2025 at 10:33
@filipe-cantarelli (Author)

Oh, you were concerned about the upper bound? We could probably get rid of that entirely, yeah.

Is this something that could be done here in this PR?

@Ralith (Collaborator) commented Mar 24, 2025

Is this something that could be done here in this PR?

Sure, in a separate commit.

@djc (Member) commented Mar 24, 2025

(Be sure to rebase on main to appease CI.)

@filipe-cantarelli force-pushed the fix-max-datagrams-poll-transmit branch from aeffe92 to 5900957 on March 24, 2025 at 16:26
@filipe-cantarelli (Author) commented Mar 24, 2025

Is this something that could be done here in this PR?

Sure, in a separate commit.

Should we keep the current quinn behavior by moving this max cap over there? From what I see, Linux systems would currently start sending 64 segments instead of 10. Maybe set a default max of 10 there, but also expose a way to configure it?

@Ralith (Collaborator) commented Mar 24, 2025

Should we keep the current quinn behavior by moving this max cap over there?

That'd be safer than removing it outright, though bare quinn-proto users will still be affected. IMO we should consider relaxing it entirely, but if you're not interested in testing that (especially on Windows, where the kernel-enforced limit is a full 512 datagrams) we could leave it to future work.

Maybe set a default of max 10 in there, but also expose a way to set accordingly?

We should avoid offering configuration knobs unless there's clear motivation to support multiple cases, i.e. evidence that no one value is adequate for the vast majority of users.

@filipe-cantarelli (Author)

Could I then move the 10-segment limit to quinn in this PR and open a follow-up issue about relaxing it? I could maybe work on that later.

As far as where to cap it, maybe here is a good place:
https://github.com/quinn-rs/quinn/blob/main/quinn/src/connection.rs#L984
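(A minimal sketch of what moving the cap into quinn could look like, assuming hypothetical names: the high-level driver clamps the platform-reported GSO capability before handing it to poll_transmit, preserving the old 10-segment default after the cap is removed from quinn-proto.)

```rust
// Former quinn-proto default cap, now applied in quinn instead (sketch only;
// the constant name and function are assumptions, not quinn's actual API).
const MAX_TRANSMIT_SEGMENTS: usize = 10;

fn effective_max_datagrams(platform_max_gso_segments: usize) -> usize {
    // Linux typically allows 64 GSO segments and Windows up to 512, so
    // without this clamp poll_transmit could build much larger batches
    // than quinn previously produced.
    platform_max_gso_segments.min(MAX_TRANSMIT_SEGMENTS)
}

fn main() {
    assert_eq!(effective_max_datagrams(64), 10); // Linux-style capability, clamped
    assert_eq!(effective_max_datagrams(1), 1);   // GSO unavailable
    println!("ok");
}
```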

@Ralith (Collaborator) commented Mar 24, 2025

Could I then move the 10 limit to quinn in this PR and then open an issue to follow up relaxing it? I could maybe work on that later.

SGTM, thanks!

As far as where to cap it, maybe here is a good place:

Yep, that's the spot.

…tu is large enough to batch

When a TLP is the first in the batch and the initial MTU is large enough to allow batching, it should still stay under the max_datagrams limit.
@filipe-cantarelli (Author)

@Ralith Follow-up issue created: #2189
