Sentry SDK sending noisy logs #2982
Hey @udk2195! 👋🏻 Thanks for writing in. Those look like error logs; you shouldn't see any of them if the SDK is working correctly, so it's likely the SDK is experiencing some issue that needs fixing. Can you please tell us: when do the logs appear? On every request? Only on specific requests? Or is there a different trigger entirely?
Screenshot of the kind of logs I'm getting: (image not preserved)

@sentrivana this is how I'm initializing Sentry: (image not preserved)

This is what my requirements.txt looks like: (image not preserved)
@sentrivana I also tried fixing it by upgrading to the latest Sentry version, 1.45.0, but the issue still persists.
Thanks @udk2195 -- that was my next question, whether upgrading solves the issue. I don't have much to go on unless we can either get the full stack trace or find a way for me to reproduce this. All I can tell from the two log lines you're getting is that there's an exception passing through either the logging integration or the Starlette/FastAPI integration (or both). It could also be an exception in your app whose stack trace happens to include SDK code. Is there any way you can get the full stack trace rather than the truncated one? Are you seeing any errors in Sentry? When do you see these logs being emitted (on startup, while the app is running, etc.)?
@sentrivana unfortunately, that's all the information I see in Splunk; I don't have any other stack trace to share with you. Is it possible to completely disable logs emitted by the Sentry SDK while still reporting errors to the Sentry server?
Gotcha @udk2195, the problem is though that if you're seeing those logs, chances are things are not working as expected, so there might be a blind spot in what gets reported to Sentry. The best course of action would be to fix the underlying issue. The error causing this might already be in your Sentry -- is there anything there that looks like it could be related? It'd probably have more info attached to it that might help us get to the root of the issue. If you really want to silence the logs altogether, there is probably no nice solution for this. These look like regular stderr exception tracebacks, so you'd need to somehow tell Python not to log exceptions to stderr, or set […]
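As one partial measure (a stdlib-only sketch, not an official Sentry recommendation): if some of the noise is going through a regular Python logger, you can raise the level of that logger namespace. The `"sentry_sdk"` logger name below is an assumption about the SDK's internal namespace (some versions use `sentry_sdk.errors`), and this will not silence bare stderr tracebacks:

```python
import logging

# Assumption: the SDK's own diagnostic messages are emitted under the
# "sentry_sdk" logger namespace. Raising that logger's level silences
# them locally without affecting what gets reported to the Sentry server.
logging.getLogger("sentry_sdk").setLevel(logging.CRITICAL)

# Child loggers such as "sentry_sdk.errors" inherit this effective level
# unless they explicitly set their own.
```

This only helps for messages routed through the `logging` module; exceptions printed directly to stderr need a different mechanism.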
The Sentry logging integration patches handlers and changes the reported source of the caller. Without Sentry (or if I set […]):

```
2024-06-18 11:51:00.989 | INFO | uvicorn.server:serve:74 - Started server process [66277]
2024-06-18 11:51:00.990 | INFO | uvicorn.lifespan.on:startup:48 - Waiting for application startup.
2024-06-18 11:51:00.992 | INFO | uvicorn.lifespan.on:startup:62 - Application startup complete.
```

With Sentry (and therefore with the logging integration):

```
2024-06-18 11:52:42.503 | INFO | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Started server process [66522]
2024-06-18 11:52:42.506 | INFO | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Waiting for application startup.
2024-06-18 11:52:42.508 | INFO | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Application startup complete.
```

Disabling only the logging integration is not easy for now, see: #3166
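To see why a patched `callHandlers` shifts the apparent caller, here is a minimal stdlib-only sketch of the frame walk that Loguru-style interceptors perform (the handler and logger names are illustrative, not from the SDK). When every intermediate frame lives in `logging/__init__.py`, the walk correctly lands on application code; a wrapper defined in a different file, such as `sentry_sdk/integrations/logging.py`, stops the walk early and gets blamed as the caller:

```python
import logging
import sys

class CallerRecordingHandler(logging.Handler):
    """Record which file a log call appears to originate from, using the
    same frame walk a Loguru-style interceptor uses."""

    def emit(self, record):
        frame = sys._getframe(1)
        # Skip every frame that belongs to the stdlib logging module.
        # A patched callHandlers living in another file would have a
        # different co_filename, so the walk would stop on it and
        # misattribute the call site.
        while frame and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
        self.caller_file = frame.f_code.co_filename

handler = CallerRecordingHandler()
logger = logging.getLogger("frame-walk-demo")
logger.addHandler(handler)
logger.warning("where am I called from?")

# With no third-party patch in the call chain, the walk attributes the
# call to this file rather than to logging internals.
```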
Hey @rusmux, can you please open a new issue for this? I'll close this issue since we haven't heard back from the OP.
I ran into the problem described in this issue thread and found a workaround. To provide more context about my specific situation:
(I think @rusmux is at least using Loguru, judging by the formatting of the provided logs, though not sure about the […].)

In my case, any standard logging logs that got intercepted by Loguru were being attributed to […]. The workaround is that I added a condition to the logic in […], changing

```python
while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):
```

to:

```python
while frame and (
    depth == 0
    or frame.f_code.co_filename == logging.__file__
    or frame.f_code.co_filename == sentry_sdk.integrations.logging.__file__
):
```

This fixed the problem for me.

Before: (screenshot)

With the modification to […]: (screenshot)
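The idea behind the fix can be illustrated with a toy, single-file version of the same frame walk, using function names in place of filenames (since every frame here shares one file; all names below are stand-ins, not SDK code). The walk must skip not only the logging-internals frame but also the frame injected by the patch, or the patch gets reported as the caller:

```python
import sys

def find_caller(skip):
    """Walk up the stack, skipping frames whose function names are in
    `skip`, and report the first remaining frame as the caller.
    Stands in for the filename-based walk in the real interceptor."""
    frame = sys._getframe(1)
    while frame and frame.f_code.co_name in skip:
        frame = frame.f_back
    return frame.f_code.co_name

def handle(skip):
    # Stands in for stdlib logging internals.
    return find_caller(skip)

def sentry_patched_callhandlers(skip):
    # Stands in for the frame the SDK's logging patch injects.
    return handle(skip)

def app_code(skip):
    return sentry_patched_callhandlers(skip)

# Skipping only logging internals blames the patch as the caller:
print(app_code(("handle",)))  # sentry_patched_callhandlers

# Also skipping the patch frame restores the real call site:
print(app_code(("handle", "sentry_patched_callhandlers")))  # app_code
```

This is exactly what the extra `sentry_sdk.integrations.logging.__file__` comparison achieves in the real, filename-based version above.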
Hey @jayqi, thanks for taking the time to investigate and write up the solution -- the fix makes sense to me. Would you like to submit a PR with it?
How do you use Sentry?
Sentry Saas (sentry.io)
Version
1.14.0
Steps to Reproduce
I am using Splunk as my logging platform and Sentry as our error reporting platform. On instantiation, the Sentry SDK emits a lot of noisy logs such as:

```
File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/logging.py", line 96, in sentry_patched_callhandlers
File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/starlette.py", line 125, in _sentry_send
```

I want to stop Sentry from emitting such logs; how can I disable them? Below are the things I have tried without success: […]
Expected Result
There shouldn't be any non-informative Sentry logs present.
Actual Result
Current logs being emitted:

```
File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/logging.py", line 96, in sentry_patched_callhandlers
File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/starlette.py", line 125, in _sentry_send
```