
Worse performance after parsing 2 large values #364

Open
nihohit opened this issue Apr 4, 2024 · 9 comments

Comments

@nihohit commented Apr 4, 2024

See this discussion in the redis-rs repo: redis-rs/redis-rs#1128 (comment)

The redis-rs parser receives a large value (the non-TLS example uses 500,000,000 bytes) from a TCP stream and parses it in some amount of time. After this first call, consecutive calls with exactly the same value take roughly twice as long to parse. Flamegraph analysis puts the blame on combine::stream::buf_reader::extend_buf_sync.
This behavior isn't reproduced when, instead of a TCP stream, I use a Vec<u8> or Bytes object holding the value.
I have very little experience with combine and know nothing about its internals. I'd appreciate any help in understanding whether the issue is with our usage of combine or an actual issue in the library.

Here is the redis-rs parser code.

@Marwes (Owner) commented Apr 4, 2024

I think there may be accidentally quadratic behavior here: when you run it against an actual IO device the data comes in chunks, and every time we read a new chunk through extend_buf_sync we end up initializing the buffer again before passing it to std::io::Read.

Hypothesis

When reading the first value we end up incrementally increasing the size of the buffer, extending it by 8 * 1024 bytes every time it needs to grow as we read a new chunk of data. But for any value following that, the buffer is already huge! While we don't need to grow the buffer again, we still end up initializing the entire uninitialized part of the buffer (it is uninitialized as far as the code is aware, but it actually holds data from the previous value). So when we again read the data coming in chunks, we get quadratic behaviour from repeatedly initializing the huge buffer.

This doesn't show up when reading straight from a byte buffer because the data is available in full, so when we get to the second value we only make a single call to extend_buf_sync/read, which fills the buffer with the entire value all at once.
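
To illustrate the hypothesis, here is a rough sketch (not combine's actual implementation; the function name, chunk size, and buffer handling are simplified assumptions) of how zeroing the whole spare capacity before every read makes the per-chunk cost scale with the buffer's total capacity rather than with the chunk size:

```rust
use std::io::{self, Read};

/// Simplified stand-in for `extend_buf_sync` (illustrative only, not combine's code).
fn extend_buf_sync_sketch<R: Read>(buf: &mut Vec<u8>, reader: &mut R) -> io::Result<usize> {
    let len = buf.len();
    // Make sure there is room for at least one more chunk. After the first
    // large value this is usually a no-op since the capacity is already huge.
    buf.reserve(8 * 1024);
    // Zero-initialize *all* spare capacity so it can be handed to `Read::read`
    // as an initialized `&mut [u8]`. With a ~500 MB buffer and data arriving
    // in small chunks, this zeroing repeats for every chunk, which is where
    // the hypothesized quadratic cost comes from.
    buf.resize(buf.capacity(), 0);
    let read = reader.read(&mut buf[len..])?;
    // Keep only the bytes that were actually read.
    buf.truncate(len + read);
    Ok(read)
}
```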

Fixes

I can think of three off the top of my head.

  • Cap how much we initialize (and thus are able to read) in each call to extend_buf_sync/read (see the sketch after this list)
    • Very easy, but will still have overhead
  • Track how much of the buffer is initialized from previous calls and avoid redoing that initialization
    • Ideal, more or less, given the APIs std gives us at this time. But it does involve more unsafe since we must give std::io::Read::read an initialized buffer.
  • Wait for https://doc.rust-lang.org/std/io/trait.Read.html#method.read_buf to stabilize
    • This basically mirrors the API tokio (and consequently redis-rs and combine's async versions) uses. This API allows the initialization step to be omitted entirely without unsafe (at this level; at the lower levels there is still unsafe)
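
As a rough sketch of point 1 (the cap value and function name here are assumptions, not necessarily how combine would implement it), bounding how much spare capacity is initialized per call keeps the per-chunk cost constant:

```rust
use std::io::{self, Read};

/// Assumed cap on how much spare capacity is initialized per call.
const MAX_INIT_PER_READ: usize = 64 * 1024;

/// Sketch of point 1: only zero (and read into) a bounded slice of the spare
/// capacity, so the cost per chunk no longer grows with the buffer size.
fn extend_buf_capped<R: Read>(buf: &mut Vec<u8>, reader: &mut R) -> io::Result<usize> {
    let len = buf.len();
    buf.reserve(8 * 1024);
    let spare = buf.capacity() - len;
    // Initialize at most MAX_INIT_PER_READ bytes of the spare capacity.
    buf.resize(len + spare.min(MAX_INIT_PER_READ), 0);
    let read = reader.read(&mut buf[len..])?;
    buf.truncate(len + read);
    Ok(read)
}
```

The trade-off is the overhead noted above: a reader that could have filled the whole buffer in one call now needs more calls.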

I'd probably prefer point 2, but it may be a little while before I have time for it. If you want to try something easy, 1 would be easy to patch in to test the hypothesis. If you are able to produce a minimized example I can add to combine's tests or benchmarks, that would be appreciated; the partial-io crate used in combine can be used to simulate reading data in chunks while still using in-memory data.
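
A minimal sketch of such an example, assuming partial-io's PartialRead/PartialOp API and an arbitrary 8 KiB chunk size (the function name is a placeholder):

```rust
use std::io::Cursor;

use partial_io::{PartialOp, PartialRead};

/// Wrap in-memory data in a reader that hands it out in small chunks,
/// mimicking a TCP stream, so the chunked-read behaviour can be reproduced
/// in a test or benchmark without real IO.
fn chunked_reader(data: Vec<u8>) -> impl std::io::Read {
    PartialRead::new(
        Cursor::new(data),
        std::iter::repeat_with(|| PartialOp::Limited(8 * 1024)),
    )
}
```

A benchmark could then parse two copies of the same large value from such a reader back to back and compare the timings of the first and second parse.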

@nihohit (Author) commented Apr 4, 2024 via email

@Marwes (Owner) commented Apr 4, 2024

I don't know if it helps, but the initial report was about async redis-rs.

Didn't see that. The async version wouldn't call extend_buf_sync, so perhaps it is a similar issue, but the posted flamegraphs don't apply to it.

@Marwes (Owner) commented Apr 4, 2024

You could try this branch to see if it helps (it implements point 1): https://github.com/Marwes/combine/tree/perf_extend_buf_sync

@nihohit (Author) commented Apr 4, 2024

Didn't see that. The async version wouldn't call extend_buf_sync, so perhaps it is a similar issue, but the posted flamegraphs don't apply to it.

No, you're right - this doesn't happen in the async non-TLS runs, only in async TLS runs. Thanks for correcting me!

edit: or at least I can't repro it in async non-TLS. Here, redis-rs/redis-rs#1128 (comment), the original reporter claims he sees the same.

@nihohit (Author) commented Apr 4, 2024

You could try this branch to see if it helps

It helps! Thanks for the quick implementation.
Actually, not only does it help, it's also faster on the first call, not just on the repeat calls. Amazing work!

@artemrakov (Contributor) commented Apr 4, 2024

@Marwes Can I create a PR with your change (point 1) plus additional testing using partial-io, or do you think it is more appropriate to implement the point 2 solution?

@Marwes (Owner) commented Apr 5, 2024

@artemrakov Go ahead

@Marwes (Owner) commented Apr 5, 2024

Actually, not only does it help, it's also faster on the first call, not just on the repeat calls. Amazing work!

It will likely be a performance regression in cases where the reader could actually fill the entire buffer that was initialized, but that would just be a constant factor, so especially for a temporary workaround that is fine.
