This repository has been archived by the owner on Jul 7, 2023. It is now read-only.

Database update logs are logged to stderr #31

Closed
bfil opened this issue May 6, 2020 · 5 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments


bfil commented May 6, 2020

The dashboard-metrics-scraper pod writes the following logs to stderr every minute:

{ "level": "info", "msg": "Database updated: 1 nodes, 20 pods" }

It would be better to log these to stdout, since they are info-level logs.

I've also noticed a red herring in the server code when setting up the logger, where the comment says "Output to stdout instead of the default stderr", but the code does the opposite:

log.SetOutput(os.Stderr)
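
For what it's worth, a minimal sketch of what that comment seems to intend (assuming the scraper keeps using logrus, which the JSON output above suggests) would be something like:

package main

import (
	"os"

	log "github.com/sirupsen/logrus"
)

func main() {
	// Minimal sketch, not the project's actual setup: keep the JSON format
	// but send log output to stdout so info-level entries no longer land
	// in stderr.
	log.SetFormatter(&log.JSONFormatter{})
	log.SetOutput(os.Stdout)

	log.Info("Database updated: 1 nodes, 20 pods")
}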

AndrewDryga commented Jun 4, 2020

Some people believe that progress logs should also be written to stderr, and if that's how the maintainers decide to keep it, we need a way to filter out INFO-level logs entirely; otherwise our Stackdriver logs are filled with errors. For Stackdriver, everything in stderr is an error unless the JSON has a severity property set to info, and there is no way to change that.

Maybe we could add a JSON log format option that uses a severity field instead of level?
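
Roughly what I have in mind, as a sketch using logrus' JSONFormatter FieldMap option (the exact wiring and any flag name would be up to the maintainers):

package main

import (
	"os"

	log "github.com/sirupsen/logrus"
)

func main() {
	// Sketch only: rename the "level" key to "severity" so Stackdriver can
	// classify stderr entries correctly.
	log.SetFormatter(&log.JSONFormatter{
		FieldMap: log.FieldMap{
			log.FieldKeyLevel: "severity",
		},
	})
	log.SetOutput(os.Stderr)

	log.Info("Database updated: 1 nodes, 20 pods")
	// Output: {"msg":"Database updated: 1 nodes, 20 pods","severity":"info","time":"..."}
}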

@AndrewDryga

A similar issue exists in external-dns: kubernetes-sigs/external-dns#1616

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 15, 2020
@maciaszczykm maciaszczykm added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Oct 16, 2020
@pierluigilenoci
Contributor

@bfil I opened #45.
It addresses the issue, at least in part.

For the log level, you can just use the --log-level=warn option.
