When reading references, the reftable backend has to perform the
following steps (sketched in code below):
1. Create a new ref iterator.
2. Seek the iterator to the record we're searching for.
3. Read the record.
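In code, that pattern looks roughly like this. This is an illustration
only, not the backend's actual code: the seek/next/destroy calls are
the public reftable iterator API, while the initializer name
reftable_stack_init_ref_iterator() is an assumption on my part.

    #include <string.h>
    #include "reftable/reftable-iterator.h"  /* header names as in git.git */
    #include "reftable/reftable-record.h"
    #include "reftable/reftable-stack.h"

    static int read_ref_once(struct reftable_stack *stack,
                             const char *refname,
                             struct reftable_ref_record *out)
    {
        struct reftable_iterator it = {0};
        int ret;

        /* (1) Create a fresh iterator for this single lookup. */
        reftable_stack_init_ref_iterator(stack, &it); /* assumed name */

        /* (2) Seek it to the record we are searching for. */
        ret = reftable_iterator_seek_ref(&it, refname);
        if (!ret) {
            /*
             * (3) Read the record. The iterator yields the first
             * record at or after the seeked name, so check for an
             * exact match.
             */
            ret = reftable_iterator_next_ref(&it, out);
            if (!ret && strcmp(out->refname, refname))
                ret = 1; /* treat as "not found" in this sketch */
        }

        /* The iterator is thrown away after this single read. */
        reftable_iterator_destroy(&it);
        return ret;
    }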
We cannot really avoid the last two steps, but re-creating the iterator
every single time we want to read a reference is expensive and a waste
of resources. There was no way around this in the past, though, because
it was not possible to reuse iterators. But starting with 5bf96e0
(reftable/generic: move seeking of records into the iterator,
2024-05-13) we have split up the iterator lifecycle such that creating
the iterator and seeking are two different concerns.
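To illustrate what this separation enables (again only a sketch, using
the same assumed API as above), a single iterator can now serve many
lookups by re-seeking it instead of recreating it every time:

    static int read_many_refs(struct reftable_stack *stack,
                              const char **refnames, size_t nr)
    {
        struct reftable_iterator it = {0};
        struct reftable_ref_record ref = {0};
        int ret = 0;
        size_t i;

        /* Create the iterator once... */
        reftable_stack_init_ref_iterator(stack, &it); /* assumed name */

        for (i = 0; i < nr; i++) {
            /* ...and merely re-seek it for every subsequent lookup. */
            ret = reftable_iterator_seek_ref(&it, refnames[i]);
            if (ret)
                break;
            ret = reftable_iterator_next_ref(&it, &ref);
            if (ret)
                break;
            /* ... use `ref` ... */
        }

        reftable_ref_record_release(&ref);
        reftable_iterator_destroy(&it);
        return ret;
    }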
Refactor the code such that we cache iterators in the reftable backend.
This cache is invalidated whenever the respective stack is reloaded such
that we know to recreate the iterator in that case. This leads to a
sizeable speedup when creating many refs, which requires a lot of random
reference reads:
  Benchmark 1: update-ref: create many refs (refcount = 100000, revision = master)
    Time (mean ± σ):     1.793 s ±  0.010 s    [User: 0.954 s, System: 0.835 s]
    Range (min … max):   1.781 s …  1.811 s    10 runs

  Benchmark 2: update-ref: create many refs (refcount = 100000, revision = HEAD)
    Time (mean ± σ):     1.680 s ±  0.013 s    [User: 0.846 s, System: 0.831 s]
    Range (min … max):   1.664 s …  1.702 s    10 runs

  Summary
    update-ref: create many refs (refcount = 100000, revision = HEAD) ran
      1.07 ± 0.01 times faster than update-ref: create many refs (refcount = 100000, revision = master)
While 7% is not a huge win, you have to consider that the benchmark is
_writing_ data, so _reading_ references is only one part of what we do.
Flame graphs show that we spend around 40% of our time reading refs, so
the speedup for the reads themselves is roughly 2.5x as large, on the
order of 17%. I could not find better benchmarks where we perform a lot
of random ref reads.
You can also see a sizeable impact on memory usage when creating 100k
references. Before this change:
  HEAP SUMMARY:
      in use at exit: 19,112,538 bytes in 200,170 blocks
    total heap usage: 8,400,426 allocs, 8,200,256 frees, 454,367,048 bytes allocated
After this change:
  HEAP SUMMARY:
      in use at exit: 674,416 bytes in 169 blocks
    total heap usage: 7,929,872 allocs, 7,929,703 frees, 281,509,985 bytes allocated
On top of that, this refactoring opens up the possibility for further
performance optimizations in how we re-seek iterators. Any change that
allows us to optimize re-seeking, e.g. by reusing data structures,
would thus also directly speed up random reads.
Signed-off-by: Patrick Steinhardt <[email protected]>
Signed-off-by: Junio C Hamano <[email protected]>