
maybe better #4

Open

songwei163 opened this issue May 23, 2023 · 4 comments
```rust
tokio::select! {
    _ = recv_thread => (),
    _ = write_thread => (),
    _ = signal_thread => connection.close(0u32.into(), b"signal HUP"),
}
```

It might be better to split these different handling paths into separate tokio::spawn tasks, separating reads from writes. With everything in one spawned task, if one arm of the select! blocks, the other arms may never get a chance to run. The recv and send operations may also need a timeout.
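A minimal sketch of the per-read timeout being suggested, assuming quinn's RecvStream API; the recv_loop name and the 30-second bound are hypothetical, not this project's code:

```rust
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical receive loop with a per-read deadline.
async fn recv_loop(mut stream: quinn::RecvStream) -> std::io::Result<()> {
    let mut buf = [0u8; 4096];
    loop {
        // Bound each read so a silent peer cannot stall this task forever.
        match timeout(Duration::from_secs(30), stream.read(&mut buf)).await {
            Err(_elapsed) => {
                return Err(std::io::Error::new(
                    std::io::ErrorKind::TimedOut,
                    "recv timed out",
                ))
            }
            Ok(Err(e)) => return Err(std::io::Error::new(std::io::ErrorKind::Other, e)),
            Ok(Ok(None)) => return Ok(()), // peer finished the stream
            Ok(Ok(Some(_n))) => { /* forward buf[.._n] to the write side here */ }
        }
    }
}
```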

@oowl (Owner) commented May 23, 2023

Good catch, PR welcome.

@JyJyJcr (Collaborator) commented Mar 13, 2024

This won't happen. Since we use tokio in async, non-blocking mode, all I/O calls are internally switched within a short time, so there is no need to worry that the recv or send task will block the whole process. What we should avoid is heavy synchronous tasks.
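To illustrate that last point, a hedged sketch of offloading heavy synchronous work onto tokio's blocking thread pool; expensive_sync_hash is a hypothetical CPU-bound function, not part of this project:

```rust
// Heavy synchronous work goes to spawn_blocking so the async
// select! arms keep getting polled on the executor threads.
async fn hash_off_thread(data: Vec<u8>) -> [u8; 32] {
    tokio::task::spawn_blocking(move || expensive_sync_hash(&data))
        .await
        .expect("blocking task panicked")
}

// Hypothetical CPU-bound function standing in for any heavy sync work.
fn expensive_sync_hash(_data: &[u8]) -> [u8; 32] {
    [0u8; 32]
}
```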

@songwei163 (Author) commented

What I'm saying is that the business side is better off encapsulating its own timeout control rather than relying solely on quinn's timeout disconnection when the network is bad.

@JyJyJcr (Collaborator) commented Mar 15, 2024

I understand the purpose of setting your own timeout, but we can directly adjust quinn's timeout here, which I think is better than adding another layer of timeouts:

```rust
transport_config.max_idle_timeout(Some(VarInt::from_u32(60_000).into()));
```

A command-line option would suit this interface, like ping's -t option.
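A minimal sketch of what such an option could look like, assuming the clap crate; the flag name, default value, and Opt struct are hypothetical, not this project's actual CLI:

```rust
use clap::Parser;
use quinn::{TransportConfig, VarInt};

/// Hypothetical CLI shape; the real binary's options may differ.
#[derive(Parser)]
struct Opt {
    /// Idle timeout in seconds before quinn closes the connection.
    #[arg(short = 't', long = "timeout", default_value_t = 60)]
    timeout_secs: u32,
}

fn build_transport_config(opt: &Opt) -> TransportConfig {
    let mut cfg = TransportConfig::default();
    // quinn takes the idle timeout as a VarInt in milliseconds.
    cfg.max_idle_timeout(Some(VarInt::from_u32(opt.timeout_secs * 1_000).into()));
    cfg
}
```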
