Computing crate_hash from metadata encoding instead of HIR (implements #94878) (very draft) #154724

Daniel-B-Smith wants to merge 3 commits into rust-lang:main
Conversation
@bors try @rust-timer queue
Finished benchmarking commit (92f3e06): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the following:

Benchmarking this pull request means it may be perf-sensitive, so it is automatically marked as not fit for rolling up. Overriding this is possible but inadvisable, since it risks changing compiler perf. Next, if you can, please justify the regressions found in this try perf run in writing, along with @bors rollup=never.

- Instruction count: our most reliable metric, used to determine the overall result above. However, even this metric can be noisy.
- Max RSS (memory usage): results (primary 2.3%, secondary 26.7%). A less reliable metric; may be of interest, but not used to determine the overall result above.
- Cycles: results (primary 3.3%, secondary 22.4%). A less reliable metric; may be of interest, but not used to determine the overall result above.
- Binary size: results (primary 0.0%, secondary 0.1%). A less reliable metric; may be of interest, but not used to determine the overall result above.
- Bootstrap: 490.004s -> 491.506s (0.31%)
@bors try @rust-timer queue
DO NOT SUBMIT!
@bors try cancel

❗ There is currently no try build in progress on this PR.
@bors try @rust-timer queue
The latest update reverts crate_hash to using the HIR hash in cases where metadata is not otherwise generated. That fixed the large performance regression for compilations that depend heavily on crates where metadata is not generated. The big open performance question right now is whether the less invasive metadata hashing strategy is fast enough: hashing the bytes of each item one at a time has substantial overhead relative to hashing all of the bytes as they are flushed to disk. The latter is a much more invasive change to FileEncoder, so I'm holding off on it until it's clear it is needed.
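The per-item versus flush-time trade-off described above can be sketched in plain Rust. This is a minimal illustration, not rustc's actual code: `DefaultHasher` stands in for rustc's `StableHasher`, and both function names are invented.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Per-item strategy: feed each encoded item's bytes to the hasher as the
// item is written. One hasher call per item means per-write overhead.
fn hash_per_item(items: &[&[u8]]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for &item in items {
        hasher.write(item);
    }
    hasher.finish()
}

// Flush-time strategy: hash the whole output buffer in one large write,
// as it would be handed off when the encoder flushes to disk.
fn hash_on_flush(items: &[&[u8]]) -> u64 {
    let buf: Vec<u8> = items.concat();
    let mut hasher = DefaultHasher::new();
    hasher.write(&buf);
    hasher.finish()
}

fn main() {
    let items: [&[u8]; 3] = [b"fn", b"struct", b"impl"];
    // For a byte-stream hasher like SipHash (behind DefaultHasher),
    // chunked and whole-buffer writes produce the same digest; the two
    // strategies differ in overhead, not in the resulting hash.
    assert_eq!(hash_per_item(&items), hash_on_flush(&items));
    println!("hash: {:x}", hash_per_item(&items));
}
```

The point of the sketch is that the choice only affects how many hasher calls are made per output byte, which is why the cheaper-but-more-invasive flush-time variant is worth considering only if the per-item overhead shows up in perf runs.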
Finished benchmarking commit (ec2b0e4): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the following:

Benchmarking this pull request means it may be perf-sensitive, so it is automatically marked as not fit for rolling up. Overriding this is possible but inadvisable, since it risks changing compiler perf. Next, if you can, please justify the regressions found in this try perf run in writing, along with @bors rollup=never.

- Instruction count: our most reliable metric, used to determine the overall result above. However, even this metric can be noisy.
- Max RSS (memory usage): results (primary -4.2%, secondary 3.0%). A less reliable metric; may be of interest, but not used to determine the overall result above.
- Cycles: results (primary -2.4%, secondary 2.4%). A less reliable metric; may be of interest, but not used to determine the overall result above.
- Binary size: this perf run didn't have relevant results for this metric.
- Bootstrap: 487.959s -> 489.335s (0.28%)
Adds a parallel process of hashing alongside the metadata encoding. It also adds metadata encoding in a few places to ensure that the hash is available. Currently, metadata encoding always generates both the metadata and the hash, even if only one is needed.

Known issue: one test failure due to #137108 (a pre-existing repr(simd) projection bug). The new metadata pass trips some MIR validation; the issue will need to be caught earlier.

Local profiling on my laptop shows roughly neutral results (~0.5%); requesting a CI perf run for a precise measurement.
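As a rough illustration of producing the hash alongside the encoded bytes in a single pass, here is a hypothetical writer wrapper. The names and structure are invented for this sketch; rustc's real `FileEncoder` works differently, and `DefaultHasher` again stands in for `StableHasher`.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
use std::io::Write;

// Every byte written to the underlying output is mirrored into a hasher,
// so finishing yields both the encoded bytes and their hash in one pass.
struct HashingWriter<W: Write> {
    inner: W,
    hasher: DefaultHasher,
}

impl<W: Write> HashingWriter<W> {
    fn new(inner: W) -> Self {
        Self { inner, hasher: DefaultHasher::new() }
    }

    // Consume the wrapper, returning the output and the finished hash.
    fn finish(self) -> (W, u64) {
        (self.inner, self.hasher.finish())
    }
}

impl<W: Write> Write for HashingWriter<W> {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        let n = self.inner.write(buf)?;
        // Hash exactly the bytes that were accepted by the inner writer.
        self.hasher.write(&buf[..n]);
        Ok(n)
    }

    fn flush(&mut self) -> std::io::Result<()> {
        self.inner.flush()
    }
}

fn main() -> std::io::Result<()> {
    let mut w = HashingWriter::new(Vec::new());
    w.write_all(b"metadata bytes")?;
    let (bytes, hash) = w.finish();
    assert_eq!(bytes, b"metadata bytes");
    println!("{} bytes, hash {:x}", bytes.len(), hash);
    Ok(())
}
```

A wrapper like this always produces both outputs together, which matches the current behavior described above of generating the metadata and the hash even when only one is needed.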