filters/openpolicyagent: Add a sample benchmark to measure latency percentiles #3442
base: master
Conversation
mefarazath
commented
Mar 18, 2025
- Add benchmark for latency distribution with decision logging
- Fix a missed benchmark test case for race conditions
Signed-off-by: Farasath Ahamed <[email protected]>
@@ -44,6 +44,8 @@ github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJ
github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
github.com/aryszka/jobqueue v0.0.3 h1:O5YbgzQCjRomudwnDTY5BrHUNJhvPHQHq7GfGpE+ybs=
github.com/aryszka/jobqueue v0.0.3/go.mod h1:SdxqI6HZ4E1Lss94tey5OfjcAu3bdCDWS1AQzzIN4m4=
github.com/benburkert/pbench v0.0.0-20160623210926-4ec5821845ef h1:+7ZJvJGiV4hUBdjhEDhfGdjBCOmhVi0YQ5n+6g/ei+k=
@AlexanderYastrebov Based on the comments in the PR, I guess we are not okay with using this dependency.
Is it fine if we copy over the function(s) used from the library into our codebase?
After reading the code of the library I am not sure it does the right thing, e.g. I don't see how it accounts for warm-up benchmark runs. Let's first understand what we want to achieve.
The standard way to compare benchmark results is to use https://pkg.go.dev/golang.org/x/perf/cmd/benchstat
The table then compares the two input files for each benchmark. It shows the median and 95% confidence interval summaries for each benchmark before and after the change, and an A/B comparison under "vs base".
Checking help for benchstat:
$ benchstat
Usage: benchstat [flags] inputs...
benchstat computes statistical summaries and A/B comparisons of Go
benchmarks. It shows benchmark medians in a table with a row for each
benchmark and a column for each input file. If there is more than one
input file, it also shows A/B comparisons between the files. If a
difference is likely to be noise, it shows "~".
For details, see https://pkg.go.dev/golang.org/x/perf/cmd/benchstat.
-alpha α
consider change significant if p < α (default 0.05)
-col projection
split results into columns by distinct values of projection (default ".file")
-confidence level
confidence level for ranges (default 0.95)
-filter query
use only benchmarks matching benchfilter query (default "*")
-format format
print results in format:
text - plain text
csv - comma-separated values (warnings will be written to stderr)
(default "text")
-ignore keys
ignore variations in keys
-row projection
split results into rows by distinct values of projection (default ".fullname")
-table projection
split results into tables by distinct values of projection (default ".config")
It looks like we can change -confidence if we need something other than p95.
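A typical benchstat workflow along these lines might look as follows (branch names, benchmark name, and file paths are illustrative, not taken from this PR):

```shell
# Run the benchmark several times on the base branch and on the change,
# saving the raw `go test` output of each run.
git checkout master
go test -run='^$' -bench=BenchmarkAuthorizeRequest -count=10 ./filters/openpolicyagent/ > old.txt

git checkout my-feature-branch
go test -run='^$' -bench=BenchmarkAuthorizeRequest -count=10 ./filters/openpolicyagent/ > new.txt

# Compare: benchstat prints medians, confidence intervals,
# and an A/B comparison under "vs base".
go run golang.org/x/perf/cmd/benchstat@latest old.txt new.txt
```

Using `-count=10` gives benchstat enough samples per benchmark to compute meaningful confidence intervals; `-run='^$'` skips unit tests so only benchmarks execute.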
Also, if you want to compare benchmark results for two commits, you may use this script: golang/go#63233 (comment)