Log messages showing as errors in Google Stackdriver #1616

I would like to reopen issue #772 because our Stackdriver logs are spammed with errors that are actually external-dns logs with level=info. There are a couple of ways we could fix that:

1. Provide a configuration option to write the log severity in a `severity` JSON attribute instead of `level`, e.g. by setting `--log-format=stackdriver_json`.
2. Provide a configuration option to write all debug and info logs to stdout and only the rest to stderr, or a configurable list of severities that should be sent to stdout.

I think the first option is preferable, because then we can use severity filters in Stackdriver. Please note that we have no way to fix the Stackdriver configuration to recognize the `info` level, because its Fluentd agent configuration is managed by the GKE team :/
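For context, here is a sketch of the difference between the two formats; the field names on the second log line are assumptions, since the proposed stackdriver_json format does not exist yet:

```
# What external-dns emits today (logrus JSON format); the GKE logging agent
# finds no recognized severity field, and lines on stderr default to ERROR.
{"level": "info", "msg": "All records are already up to date", "time": "2020-05-01T00:00:00Z"}

# Hypothetical output of the proposed --log-format=stackdriver_json; Stackdriver
# reads the "severity" field directly, so severity filters would work.
{"severity": "INFO", "message": "All records are already up to date", "time": "2020-05-01T00:00:00Z"}
```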
Comments
Same problem here.
Same issue here. Datadog identifies logs as errors when they're actually info.

@AndrewDryga I solved the issue by switching the log format to JSON in the Helm chart.
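For reference, that chart value maps to external-dns's `--log-format` flag, which does accept `json`; a minimal sketch of the equivalent container argument:

```
# external-dns container args; --log-format accepts "text" (the default) or "json".
args:
  - --log-format=json
```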
@Saeger it doesn't solve the issue for Stackdriver; our format has been set to JSON for a while already.
@AndrewDryga I'm using the Bitnami external-dns chart with the following in my values.yaml file. You can see the errors switch to info on redeploy of the chart.
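The values.yaml snippet itself was lost from this thread; a minimal sketch of what it likely contained, assuming the Bitnami chart's `logFormat` parameter:

```
# values.yaml for the bitnami/external-dns chart (sketch): switch the log
# output from plain text to JSON so the logging agent can parse the level.
logFormat: json
```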
@ericnelson looks like this changed at some point while the issue was open; it works for me too now. Thanks for noticing it!