nfa,dfa: call shrink_to_fit in places
This commit reduces the *actual* heap memory used by noncontiguous
NFAs, contiguous NFAs and DFAs. Previously, all three would report
memory usage that only included what was actually being used within
each Vec. But each Vec might actually have a lot more heap memory
inside of it in the form of unused capacity. Potentially ~twice as
much. This is because a Vec amortizes its growth.
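
As a rough, standalone illustration of that effect (not code from this
crate): a Vec that is built up one push at a time ends up with capacity
well beyond its length, and `shrink_to_fit` releases the excess.

```rust
fn main() {
    // Simulate building up a transition table one entry at a time.
    let mut table: Vec<u32> = Vec::new();
    for i in 0..1_000_000 {
        table.push(i);
    }
    // The capacity is whatever the growth strategy last reserved, which
    // can be up to ~2x the length. The heap actually held (capacity) is
    // therefore bigger than what `len()` alone suggests.
    println!("len = {}, capacity = {}", table.len(), table.capacity());

    // Give the unused capacity back to the allocator.
    table.shrink_to_fit();
    println!("after shrink: capacity = {}", table.capacity());
}
```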

So when Vecs are used to build NFAs and DFAs, some of them wind up
with potentially a lot of additional capacity that goes unused and
unreported. This commit partially fixes that by calling shrink_to_fit
in a few strategic places. However, there are still some nested Vecs,
notably in the noncontiguous NFA, that remain unshrunk. I tried
shrinking them too, but the perf impact on construction was non-trivial.

This is a good example of one of those things where you of course know
that Vecs amortize their growth by potentially reserving extra capacity
that you won't end up using, but still remain ignorant of the broader
effects of this strategy in aggregate, especially at the extremes when
you start trying to build Aho-Corasick automatons with millions of
patterns.

We should think more carefully about using linked lists in the
noncontiguous NFA to reduce memory usage, or perhaps about some other
way to avoid nested Vecs. It turns out that nested Vecs end up
penalizing us quite a bit.
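
As a rough sketch of the "avoid nested Vecs" idea, using made-up types
(`FlatLists`, `from_nested`, `get`) rather than anything in this crate:
flattening per-state lists into one backing Vec plus per-state ranges
pays for one allocation and one pot of unused capacity instead of one
per state, at the cost of a little indexing indirection.

```rust
/// A hypothetical flattened layout: instead of `Vec<Vec<u32>>`, which pays
/// for a separate allocation (and separate unused capacity) per inner Vec,
/// store all elements in one backing Vec and remember a range per state.
struct FlatLists {
    data: Vec<u32>,
    // (start, end) indices into `data`, one per state.
    ranges: Vec<(usize, usize)>,
}

impl FlatLists {
    fn from_nested(nested: Vec<Vec<u32>>) -> FlatLists {
        let mut data = Vec::new();
        let mut ranges = Vec::with_capacity(nested.len());
        for inner in nested {
            let start = data.len();
            data.extend_from_slice(&inner);
            ranges.push((start, data.len()));
        }
        // One shrink covers every list at once.
        data.shrink_to_fit();
        FlatLists { data, ranges }
    }

    fn get(&self, i: usize) -> &[u32] {
        let (start, end) = self.ranges[i];
        &self.data[start..end]
    }
}

fn main() {
    let nested = vec![vec![1, 2, 3], vec![], vec![4, 5]];
    let flat = FlatLists::from_nested(nested);
    assert_eq!(flat.get(0), &[1, 2, 3][..]);
    assert_eq!(flat.get(2), &[4, 5][..]);
}
```

Whether that trade-off is worth it here depends on construction-time
cost, the same concern that left the nested Vecs unshrunk in this
commit.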

This commit was inspired by the investigation I did as part of this
issue: G-Research/ahocorasick_rs#62
BurntSushi committed Aug 10, 2023
1 parent e27da64 commit 474393b
Showing 3 changed files with 19 additions and 1 deletion.
src/dfa.rs: 12 additions & 0 deletions
@@ -518,6 +518,18 @@ impl Builder {
             dfa.byte_classes.alphabet_len(),
             dfa.byte_classes.stride(),
         );
+        // The vectors can grow ~twice as big during construction because a
+        // Vec amortizes growth. But here, let's shrink things back down to
+        // what we actually need since we're never going to add more to it.
+        dfa.trans.shrink_to_fit();
+        dfa.pattern_lens.shrink_to_fit();
+        dfa.matches.shrink_to_fit();
+        // TODO: We might also want to shrink each Vec inside of `dfa.matches`,
+        // or even better, convert it to one contiguous allocation. But I think
+        // I went with nested allocs for good reason (can't remember), so this
+        // may be tricky to do. I decided not to shrink them here because it
+        // might require a fair bit of work to do. It's unclear whether it's
+        // worth it.
         Ok(dfa)
     }

src/nfa/contiguous.rs: 5 additions & 0 deletions
@@ -992,6 +992,11 @@ impl Builder {
             nfa.memory_usage(),
             nfa.byte_classes.alphabet_len(),
         );
+        // The vectors can grow ~twice as big during construction because a
+        // Vec amortizes growth. But here, let's shrink things back down to
+        // what we actually need since we're never going to add more to it.
+        nfa.repr.shrink_to_fit();
+        nfa.pattern_lens.shrink_to_fit();
         Ok(nfa)
     }

src/nfa/noncontiguous.rs: 2 additions & 1 deletion
@@ -1219,7 +1219,8 @@ impl<'a> Compiler<'a> {
     fn calculate_memory_usage(&mut self) {
         use core::mem::size_of;
 
-        for state in self.nfa.states.iter() {
+        self.nfa.states.shrink_to_fit();
+        for state in self.nfa.states.iter_mut() {
             self.nfa.memory_usage += size_of::<State>() + state.memory_usage();
         }
     }
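
For a standalone illustration of why shrinking before measuring
matters, here is a sketch with a made-up `State` and accounting
methods (not the crate's actual type or its `memory_usage`
implementation): a length-based report undercounts the heap the
allocator is really holding until the Vec is shrunk.

```rust
use core::mem::size_of;

/// A made-up state type standing in for the real one; its heap usage is
/// whatever its transition Vec actually occupies.
struct State {
    trans: Vec<(u8, u32)>,
}

impl State {
    /// Counting by `len()` reports only the bytes in use...
    fn memory_usage(&self) -> usize {
        self.trans.len() * size_of::<(u8, u32)>()
    }

    /// ...while the allocator really holds `capacity()` worth of bytes.
    fn actual_heap_usage(&self) -> usize {
        self.trans.capacity() * size_of::<(u8, u32)>()
    }
}

fn main() {
    let mut state = State { trans: Vec::new() };
    for b in 0..100u8 {
        state.trans.push((b, u32::from(b)));
    }
    // Before shrinking, the two numbers can disagree by up to ~2x.
    println!("{} vs {}", state.memory_usage(), state.actual_heap_usage());
    // After shrinking, len == capacity and the report matches reality.
    state.trans.shrink_to_fit();
    println!("{} vs {}", state.memory_usage(), state.actual_heap_usage());
}
```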
