
(perf) Refactor logger class to encapsulate bunyan instance; add requestLogger stress test #68

Merged: 6 commits into master from perf-dealings on Jan 24, 2017

Conversation

@nason (Contributor) commented on Jan 23, 2017

Summary

This is a fun one. We've been seeing some performance issues with this logger since the scrubFields feature was added. Related to #47, #53, #54, #58.

This change updates the Logger class to encapsulate a bunyan logger instance rather than return one. This allows us to be much more explicit with the behavior we'd like the logger to maintain.
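As a rough sketch of the difference (class, property, and stub names below are illustrative, not the actual we-js-logger source):

```javascript
// Stand-in for a bunyan instance; the real library constructs one internally.
const fakeBunyan = {
  calls: [],
  info(...args) { this.calls.push(args); },
};

// Before: constructing a Logger effectively handed back the bunyan instance.
// After: the instance is held privately and public methods delegate to it,
// so the class controls exactly which behavior is exposed.
class Logger {
  constructor(config = {}, impl = fakeBunyan) {
    this.config = config;
    this._logger = impl; // encapsulated, not returned
  }

  info(...args) {
    return this._logger.info(...args);
  }
}

const log = new Logger({});
log.info('hello');
```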

It also changes the behavior of our data scrubber: the entire arguments object is now passed to hide-secrets in a single call (and only when scrubFields is configured), which avoids some bottlenecks in this path.
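Conceptually, the level wrapper now runs the whole argument list through the scrubber in one pass, and skips it entirely when no scrub fields are configured. The `scrub` function below is a simplified stand-in for hide-secrets, whose real API may differ:

```javascript
// Simplified recursive scrubber: redacts any key listed in `fields`.
// This is a stand-in for the hide-secrets package, not its actual code.
function scrub(value, fields) {
  if (value === null || typeof value !== 'object') return value;
  const out = Array.isArray(value) ? [] : {};
  for (const key of Object.keys(value)) {
    out[key] = fields.includes(key) ? '[REDACTED]' : scrub(value[key], fields);
  }
  return out;
}

// Wrap a level method so the entire args array goes through the scrubber
// in a single call, and only when scrubFields is configured.
function wrapLevel(original, config) {
  return function (...args) {
    const finalArgs = (config.scrubFields && config.scrubFields.length)
      ? scrub(args, config.scrubFields)
      : args;
    return original.apply(this, finalArgs);
  };
}
```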

Benchmarks

```
we-js-logger x 6,612,870 ops/sec ±0.76% (87 runs sampled)
we-js-logger child x 130,162 ops/sec ±0.82% (89 runs sampled)
we-js-logger (with scrub fields) x 40,964 ops/sec ±1.57% (88 runs sampled)
we-js-logger child (with scrub fields) x 29,098 ops/sec ±1.42% (88 runs sampled)
bunyan x 9,611,734 ops/sec ±1.31% (87 runs sampled)
bunyan child x 876,592 ops/sec ±1.35% (84 runs sampled)
```

The slowdown from scrub fields is significant, but also expected. I think this is still in the realm of "acceptable" performance since we're not logging anywhere near 30-40k times/sec in our apps.

The child method is noticeably slow. This is a bottleneck that we should look into optimizing (#67), but it should be ok for now.

Stress tests:

A stress test suite was added in this PR that spins up a simple express server with the requestLogger middleware and puts it under load. Here's what that looks like as of this change:

```
Completed requests: 593
Requests per second: 20
Total time: 30.0048865
Mean latency: 6.6

Percentage of requests served within a certain time:
  50%   6ms
  90%   9ms
  95%   9ms
  99%   16ms
  100%  24ms (longest request)
```

These tests can be run via `npm run test:stress`.
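For reference, a percentile table like the one above can be derived from raw latency samples roughly as follows. This is only a sketch using the nearest-rank method; the actual stress suite likely relies on a load-testing library that computes this internally:

```javascript
// Given latency samples sorted ascending (in ms), return the value at
// percentile p using the nearest-rank method.
function percentile(sortedMs, p) {
  const rank = Math.ceil((p / 100) * sortedMs.length);
  return sortedMs[Math.max(0, Math.min(sortedMs.length, rank) - 1)];
}

// Illustrative samples, not the actual stress-run data.
const samples = [4, 5, 6, 6, 6, 7, 8, 9, 9, 24].sort((a, b) => a - b);
const table = [50, 90, 95, 99, 100].map((p) => `${p}%\t${percentile(samples, p)}ms`);
console.log(table.join('\n'));
```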

@coveralls commented

Coverage increased (+0.5%) to 77.66% when pulling dea0cbe on perf-dealings into e9615ee on master.

@mryave left a comment

I think we should all read this. It's pretty awesome. @GLosch @ricardo-quinones

```diff
 */
-export default function ClientLogger(config = {}) {
+export default function ClientLogger(config = {}, logger) {
```

🔥

```js
  requestsPerSecond: 20,
};

function printResults(results) {
```

😮

```js
const newArgs = args.map((arg) => scrub(arg, this.config));
return original.apply(this, newArgs);
}
// Dynamically hoist + wrap bunyan log instance methods (logger.info, logger.warn, etc)
```

A contributor commented:

Might be worth adding to the README that we're only hoisting these logger levels; if you want to access the rest of the bunyan API you have to go through `logger._logger.addStream(...)`.
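A minimal sketch of what that hoisting looks like. The level list and helper name are assumptions based on bunyan's standard levels, not the actual source; anything not hoisted, such as addStream, remains reachable only via the inner instance:

```javascript
const LEVELS = ['trace', 'debug', 'info', 'warn', 'error', 'fatal'];

// Copy only the level methods from the encapsulated bunyan instance onto
// the wrapper; the rest of the bunyan API stays behind wrapper._logger.
function hoistLevels(wrapper, inner) {
  for (const level of LEVELS) {
    if (typeof inner[level] === 'function') {
      wrapper[level] = (...args) => inner[level](...args);
    }
  }
  wrapper._logger = inner;
  return wrapper;
}

// Stub inner logger for illustration.
const inner = { seen: [], info(...args) { this.seen.push(args); } };
const logger = hoistLevels({}, inner);
logger.info('request complete');
// logger.addStream is undefined; use logger._logger for the full bunyan API.
```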

@mattjstar (Contributor) left a comment

this is awesome!

@nason nason merged commit 56ad728 into master Jan 24, 2017
@nason nason deleted the perf-dealings branch January 24, 2017 15:50
4 participants