The fully generated .dict models take 70 MB, which bloats binaries substantially. By comparison, the compressed .tar.gz sources are only 3.4 MB.
It would be nice to support doing this translation lazily: a modest hit to runtime performance in exchange for a dramatically smaller binary, plus the ability to load only the models that are actually needed, reducing memory consumption.
It would also be nice to have feature flags to turn off certain models. This matters less if we add lazy loading, though, since at that point it would only shave another 1-2 MB off the binary.