Shard AllocMap Lock #136115
Conversation
This improves performance on many-seed parallel (-Zthreads=32) miri executions from managing to use ~8 cores to using 27-28 cores. That's pretty reasonable scaling for the simplicity of this solution.
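The idea behind the PR can be sketched as follows. This is a hypothetical, simplified stand-in for rustc's actual sharded-lock machinery (the `ShardedMap` name, shard count, and methods are illustrative assumptions, not the real API): instead of one mutex guarding the whole map, the key's hash selects one of several independently locked shards, so concurrent accesses to different allocations usually contend on different locks.

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

// Illustrative shard count; rustc picks its own value.
const SHARDS: usize = 8;

/// Hypothetical simplified sharded map (not rustc's actual type):
/// each shard is an independently locked HashMap, selected by key hash.
struct ShardedMap<K, V> {
    shards: Vec<Mutex<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V: Clone> ShardedMap<K, V> {
    fn new() -> Self {
        ShardedMap { shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect() }
    }

    /// Pick the shard whose lock guards this key.
    fn shard_for(&self, key: &K) -> &Mutex<HashMap<K, V>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) % SHARDS]
    }

    fn insert(&self, key: K, value: V) {
        // Only one shard's lock is held, so inserts of keys hashing to
        // other shards can proceed in parallel.
        self.shard_for(&key).lock().unwrap().insert(key, value);
    }

    fn get(&self, key: &K) -> Option<V> {
        self.shard_for(key).lock().unwrap().get(key).cloned()
    }
}

fn main() {
    let map: ShardedMap<u64, &str> = ShardedMap::new();
    map.insert(1, "alloc one");
    map.insert(2, "alloc two");
    assert_eq!(map.get(&1), Some("alloc one"));
    println!("{:?}", map.get(&2)); // prints Some("alloc two")
}
```

With a single global lock, every interpreter thread serializes on the same mutex; with sharding, contention drops roughly in proportion to the shard count when keys are well distributed.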
Some changes occurred to the CTFE / Miri interpreter cc @rust-lang/miri, @rust-lang/wg-const-eval
@bors try @rust-timer queue
Shard AllocMap Lock

This improves performance on many-seed parallel (-Zthreads=32) miri executions from managing to use ~8 cores to using 27-28 cores, which is about the same as what I see with the data structure proposed in rust-lang#136105. I haven't analyzed it, but I suspect the sharding might actually work out better if we commonly insert "densely", since sharding would split the cache lines while the OnceVec packs locks close together. Of course, we could do something similar with the bitset lock too.

Either way, this seems like a very reasonable starting point that solves the problem ~equally well on what I can test locally.

r? `@RalfJung`
☀️ Try build successful - checks-actions
Finished benchmarking commit (e402369): comparison URL.

Overall result: ❌ regressions - no action needed

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so, since this PR may lead to changes in compiler perf.

@bors rollup=never

Instruction count: This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage): Results (primary -2.0%, secondary 2.1%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: Results (primary 2.6%). This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 772.928s -> 772.728s (-0.03%)
Perf results look neutral enough that I'm okay moving forward given the non-perf measured gains for parallel executions. @rustbot label: perf-regression-triaged
```
@@ -389,35 +391,37 @@ pub const CTFE_ALLOC_SALT: usize = 0;

pub(crate) struct AllocMap<'tcx> {
    /// Maps `AllocId`s to their corresponding allocations.
    alloc_map: FxHashMap<AllocId, GlobalAlloc<'tcx>>,
    // Note that this map on rustc workloads seems to be rather dense. In #136105 we considered
```
Suggested change:

```diff
- // Note that this map on rustc workloads seems to be rather dense. In #136105 we considered
+ // Note that this map on rustc workloads seems to be rather dense, but
+ // in Miri workloads it is expected to be quite sparse. In #136105 we considered
```
```
assert!(
    self.alloc_map
        .to_alloc
        .lock_shard_by_value(&id)
```
This locks `to_alloc` while `dedup` is locked. Seems worth documenting the lock order (in the `AllocMap` type, I guess) to avoid deadlocks.
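The deadlock concern above can be illustrated with a small sketch. The field names mirror the PR, but this is a hypothetical stand-in, not the real rustc type: when two locks can be held at once, every code path must acquire them in the same documented order (here, `dedup` before `to_alloc`), otherwise two threads taking them in opposite orders can wait on each other forever.

```rust
use std::sync::Mutex;

/// Hypothetical sketch of the lock-ordering concern (not the real rustc type).
struct AllocMapSketch {
    /// Lock order: acquire `dedup` before `to_alloc`, never the reverse.
    dedup: Mutex<Vec<u64>>,
    to_alloc: Mutex<Vec<String>>,
}

impl AllocMapSketch {
    /// Inserts `alloc` unless `id` was already seen; returns whether it inserted.
    fn insert_unique(&self, id: u64, alloc: String) -> bool {
        // Correct order: take `dedup` first, then `to_alloc` while `dedup`
        // is still held. A path that took `to_alloc` first and then tried
        // to take `dedup` could deadlock against this one.
        let mut dedup = self.dedup.lock().unwrap();
        if dedup.contains(&id) {
            return false;
        }
        dedup.push(id);
        self.to_alloc.lock().unwrap().push(alloc);
        true
    }
}

fn main() {
    let map = AllocMapSketch {
        dedup: Mutex::new(Vec::new()),
        to_alloc: Mutex::new(Vec::new()),
    };
    assert!(map.insert_unique(1, "first".into()));
    assert!(!map.insert_unique(1, "duplicate".into())); // deduplicated
    println!("{}", map.to_alloc.lock().unwrap().len()); // prints 1
}
```

Documenting the order on the type itself, as the reviewer suggests, makes it auditable at every call site.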