Try to measure and improve compile times #126
Comments
There is an idea that using a lambda instead of a named functor class might hurt compile-time performance:

```cpp
template <size_t N>
inline constexpr auto elements = transform( []( const auto& e ) { return std::get<N>( e ); } );
```

But it is just an idea that should be tested.
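A minimal sketch of the named-functor alternative that this idea suggests testing (`element_getter` is a hypothetical name, and the `transform` call is shown only in a comment since it belongs to the library):

```cpp
#include <cstddef>
#include <tuple>

// Hypothetical named functor that could replace the per-call-site lambda.
// A named class template is shared between all uses with the same N,
// while each lambda expression produces a brand-new closure type.
template <std::size_t N>
struct element_getter
{
    template <class T>
    constexpr auto operator()( const T& e ) const
    {
        return std::get<N>( e );
    }
};

// The variable template would then read (sketch):
// template <std::size_t N>
// inline constexpr auto elements = transform( element_getter<N>{} );
```

Whether this actually compiles faster than the lambda version is exactly what would need benchmarking.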
For some reason including … hurts compile times.
On these configurations even direct usage of … is costly. So, it's better to add replacements for the needed things from …
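Such replacements could look like this minimal sketch (the `detail` names are hypothetical stand-ins; the actual set of replacements depends on what ureact needs from the standard headers):

```cpp
namespace detail
{
// Hand-rolled replacements so that heavy standard headers need not be included.
template <class T> struct remove_reference       { using type = T; };
template <class T> struct remove_reference<T&>   { using type = T; };
template <class T> struct remove_reference<T&&>  { using type = T; };

// std::move replacement that avoids including <utility>
template <class T>
constexpr typename remove_reference<T>::type&& move( T&& t ) noexcept
{
    return static_cast<typename remove_reference<T>::type&&>( t );
}

// tiny is_same, here only to verify the sketch without <type_traits>
template <class A, class B> struct is_same       { static constexpr bool value = false; };
template <class A>          struct is_same<A, A> { static constexpr bool value = true; };
} // namespace detail
```

The trade-off is a small amount of duplicated boilerplate in exchange for not paying the inclusion cost in every translation unit.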
…age if ureact's algorithm replacement is used
Impact of using …
Fun fact: the difference decreases to almost zero if we include …
So it's better to avoid …
…able were valid before) It doesn't make much sense to allow member pointers and other esoteric invocables here. Getting rid of the `std::invoke()` call here allows us not to include `<functional>` and thus improves compilation times (see #126)
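The simplification amounts to calling the functor directly instead of going through `std::invoke()`. A sketch under those assumptions (`call` is a hypothetical helper name, not ureact's actual API):

```cpp
// Plain call syntax: supports function objects and lambdas, but not member
// pointers, so <functional>'s std::invoke machinery is not needed.
// static_cast<T&&> plays the role of std::forward without including <utility>.
template <class F, class... Args>
constexpr auto call( F&& f, Args&&... args )
    -> decltype( static_cast<F&&>( f )( static_cast<Args&&>( args )... ) )
{
    return static_cast<F&&>( f )( static_cast<Args&&>( args )... );
}
```

Any code that previously passed a pointer to member would have to wrap it in a lambda instead, which seems like an acceptable restriction for this library.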
It was used for the `std::reference_wrapper` overload before, but that overload was removed.
There are several groups of use cases for library headers:
- **Including ureact root headers.** There are no forward declarations, and ureact is intended to be included in headers. Thus, if merely including ureact headers heavily impacts compilation times, that is bad: headers tend to be accidentally recursively included in tons of cpp files, so each millisecond lost at the inclusion stage is multiplied by the number of affected cpp files.
- **Declaring ureact classes.** This means actual usage: declaring public and private reactive fields of classes, and declaring free functions receiving and/or returning reactive values. If each such template instantiation is slow, it affects several cpp files too. Such usage should be comparable to using `std::vector`, `std::shared_ptr` and similar templated std classes.
- **Adaptor usage.** A single class can potentially require tens of adaptors, and if each receives its own unique lambda expression, each call leads to a unique class instantiation. Each instantiation should be relatively fast; otherwise even relatively small usage of adaptors will bloat the compilation times of the cpp file where it is done.
Extra include time: [70.1 ms - 116.4 ms] average - 89.4 ms
Time to instantiate …
Time to instantiate …

Extra include time: [68.9 ms - 114.2 ms] average - 85.7 ms
Time to instantiate …
Time to instantiate …
Extra include time: [63.08 ms - 107.68 ms] average - 82.63 ms
Time to create the first …
Summary: [109.71 ms - 272.64 ms] average - 176.73 ms

I assume this time could be decreased to almost zero in static library mode, where both context and react_graph, as concrete classes, can be defined inside the library and stop bloating compilation times.
Description
According to some previous compile-time benchmarks, it seems that my decision to reduce the number of node classes and do most of the work of adaptors via `ureact::process` and `ureact::fold` somehow leads to worse compilation times. I need to measure it again, maybe compare with cpp.react somehow, and try to improve them. At some point I tried to move all the tests into a single cpp file and the compilation time became unacceptably long. I don't know about cpp.react, but ureact suffers from long build times for some reason.
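Since the issue is about measuring first, one concrete way to get per-header and per-instantiation numbers is Clang's `-ftime-trace` (the file names and flags below are examples, not the project's actual build setup):

```shell
# Compile one translation unit with time tracing enabled.
clang++ -std=c++17 -ftime-trace -c some_test.cpp -o some_test.o

# Clang writes some_test.json next to the object file; open it in
# chrome://tracing, or aggregate many traces with ClangBuildAnalyzer
# (https://github.com/aras-p/ClangBuildAnalyzer) to rank the most
# expensive headers and template instantiations.
```

This would make it possible to tell whether the cost comes from header inclusion, from `ureact::process`/`ureact::fold` instantiations, or from the per-lambda adaptor instantiations.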