Adding benchmarks? #452

Open
FractalFir opened this issue May 1, 2023 · 0 comments

Issue

There currently seem to be no benchmarks, which may make it harder to find potential optimizations.

Benefits of having benchmarks

Adding benchmarks for some self-contained components, such as the A* pathfinder, would give a clearer picture of performance improvements and regressions as changes land, and would better guide any optimization attempts. As an example of the potential benefits: issue #442 suggests that the A* pathfinding implementation could be improved, but there is currently no way to reliably measure performance gains. With benchmarks in place, the effect of any change would be easy to see.

Drawbacks

Adding benchmarks may require depending on more crates. While those dependencies would be development-only, it is still something to consider. Additionally, while not very hard to maintain, benchmarks would need to be updated whenever the APIs they exercise change.
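
For reference, the dependency would be confined to dev-dependencies and a bench target. A minimal sketch of the Cargo.toml additions (the version number and the bench name "astar" are assumptions, not final choices):

```toml
[dev-dependencies]
criterion = "0.4"

# Criterion provides its own test harness, so the built-in one must be disabled
# for the bench target.
[[bench]]
name = "astar"
harness = false
```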

Proposed solution

Add benchmarks with criterion. It gives a high-quality overview of changes in performance (with detailed graphs containing multiple samples and highlighting outliers), and is fairly easy to use. To test how much effort would be required, I have already written a simple criterion benchmark for A* pathfinding. The entire benchmark fits in under 90 LOC and is simply a modification of the relevant test. It seems like many benchmarks could be written the same way, requiring little to no extra effort. Some changes are obviously still needed, but they are mostly minor (e.g. swapping random generators for seeded ones so runs are reproducible). I could write benchmarks for other parts of the engine that are easy to test standalone (e.g. the rest of the algorithms in fyrox::utils), if the added dev-dependency on criterion is not an issue.
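
To give an idea of the shape such a benchmark could take, here is a minimal sketch of a criterion bench. The file name, the helper function, the grid size, and the seed are all placeholders; the real benchmark would call into fyrox::utils::astar, reusing the setup code from the existing pathfinding test rather than the stand-in workload shown here:

```rust
// benches/astar.rs (hypothetical file name)
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Stand-in for the actual workload: in the real benchmark this would build a
/// grid graph and run fyrox's A* pathfinder on it, like the existing test does.
fn run_pathfinding(size: usize, seed: u64) -> usize {
    // Seeded RNG so every run sees the same "obstacle" layout (reproducibility).
    let mut rng = StdRng::seed_from_u64(seed);
    let obstacles: Vec<bool> = (0..size * size).map(|_| rng.gen_bool(0.2)).collect();
    // Placeholder work: count free cells instead of actually pathfinding.
    obstacles.iter().filter(|&&blocked| !blocked).count()
}

fn astar_benchmark(c: &mut Criterion) {
    c.bench_function("astar 40x40 grid", |b| {
        // black_box keeps the compiler from optimizing the measured call away.
        b.iter(|| black_box(run_pathfinding(black_box(40), 1234)))
    });
}

criterion_group!(benches, astar_benchmark);
criterion_main!(benches);
```

With a bench target like the one above, `cargo bench` runs the benchmark and criterion reports timing statistics and comparisons against the previous run.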
