Infrastructure: Benchmarks #5

Open
OoLunar opened this issue May 5, 2023 · 0 comments
Labels: documentation, enhancement, help wanted
OoLunar commented May 5, 2023

After a successful build on GitHub Actions, benchmarks should be run and their results posted as a comment on the commit. Since comments on commits are forwarded to Discord via webhook, this lets contributors conveniently see whether their new code negatively affects performance. For benchmarking, BenchmarkDotNet is recommended, which supports the dotnet test command. The reasons for choosing BenchmarkDotNet should be fairly obvious, but check out its README if you're having difficulty understanding why. To keep the code organized, benchmark tests should live in a separate benchmarks folder and functional tests in a tests folder. Functional testing is out of scope for this issue and will instead be tackled in a separate issue or PR.
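As a rough sketch of what a benchmark in the proposed benchmarks folder could look like, here is a minimal BenchmarkDotNet entry point. The namespace, class name, and the placeholder workload are all hypothetical; real benchmarks would call into the bot's own code.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

namespace Benchmarks
{
    [MemoryDiagnoser]
    public class ExampleBenchmarks
    {
        private readonly byte[] _payload = new byte[1024];

        [Benchmark]
        public int ParsePayload()
        {
            // Placeholder work; a real benchmark would exercise the bot's parsing code.
            int sum = 0;
            foreach (byte b in _payload)
            {
                sum += b;
            }
            return sum;
        }
    }

    public static class Program
    {
        // BenchmarkSwitcher lets the runner pick benchmarks from command-line args.
        public static void Main(string[] args)
            => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
    }
}
```

Running `dotnet run -c Release` in the benchmarks project would then produce the result tables that the CI job posts back to the commit.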

BenchmarkDotNet should measure the following:

  • A complete HTTP request: receiving, parsing, execution, and responding.
  • Database operations.
  • How long GenHTTP takes to parse a request before our handlers receive it. Measuring our dependencies is important when identifying bottlenecks in our code.
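For the first bullet, one option is an end-to-end benchmark that drives a locally hosted instance over HTTP. This is only a sketch: the port, endpoint, and the assumption that the server is already listening are all placeholders, not part of the actual project.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class HttpRequestBenchmarks
{
    // Assumes the bot's HTTP server is already listening locally on this port.
    private readonly HttpClient _client = new HttpClient();

    [Benchmark]
    public async Task<string> FullRequest()
        => await _client.GetStringAsync("http://localhost:8080/health");
}
```

Comparing this end-to-end number against handler-only benchmarks would isolate how much time GenHTTP itself spends before our code runs.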

Benchmarks should be run only on Linux, as it is the core OS the bot is designed for. Windows and macOS support will be offered on a best-effort basis, but those platforms will not be intentionally supported.
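In CI terms, the Linux-only requirement could be expressed as a workflow fragment like the following. The job name, action versions, and project path are assumptions for illustration, not the repository's actual configuration.

```yaml
# Hypothetical workflow fragment; project paths are assumptions.
name: Benchmarks
on: [push]

jobs:
  benchmark:
    runs-on: ubuntu-latest   # Linux only, per the scope above
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-dotnet@v3
      - run: dotnet run --project benchmarks -c Release
```

A follow-up step would then take the BenchmarkDotNet output and post it as a commit comment so the results reach Discord via the webhook.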

@OoLunar OoLunar self-assigned this May 5, 2023
@OoLunar OoLunar added documentation Improvements or additions to documentation enhancement New feature or request help wanted Extra attention is needed labels May 5, 2023
@OoLunar OoLunar changed the title Optimization: Benchmarks Infrastructure: Benchmarks May 5, 2023