
Sentry sdk sending noisy logs #2982

Closed
udk2195 opened this issue Apr 16, 2024 · 10 comments
Labels
Type: Bug Something isn't working

Comments

@udk2195

udk2195 commented Apr 16, 2024

How do you use Sentry?

Sentry Saas (sentry.io)

Version

1.14.0

Steps to Reproduce

I am using Splunk as my logging platform and Sentry as my error reporting platform.

During initialization, sentry-sdk emits a lot of noisy logs such as:

File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/logging.py", line 96, in sentry_patched_callhandlers

File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/starlette.py", line 125, in _sentry_send

I want to stop Sentry from emitting such logs. How can I disable them?

Below are the things I have tried without success (see the sketch after this list):

  1. Setting default_integrations=False
  2. Using ignore_logger as mentioned in the Sentry documentation
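
For reference, this is roughly what those attempts looked like (a rough sketch; the DSN is a placeholder and the logger name passed to ignore_logger is only an example, not the exact one from my code):

    import sentry_sdk
    from sentry_sdk.integrations.logging import ignore_logger

    # Attempt 1: disable all default integrations (including the logging one).
    sentry_sdk.init(
        dsn="...",  # placeholder DSN
        default_integrations=False,
    )

    # Attempt 2: mute a specific logger, as described in the Sentry docs.
    # "uvicorn.error" is only an example logger name here.
    ignore_logger("uvicorn.error")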

Expected Result

There shouldn't be any non-informative Sentry logs present.

Actual Result

Current logs being emitted:

File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/logging.py", line 96, in sentry_patched_callhandlers

File "/opt/service/.local/lib/python3.9/site-packages/sentry_sdk/integrations/starlette.py", line 125, in _sentry_send

@sentrivana
Contributor

Hey @udk2195! 👋🏻 Thanks for writing in.

Those look like error logs; you shouldn't see any of those if the SDK is working OK, so it's likely the SDK is experiencing some issue that needs fixing.

Can you please:

  • provide the whole stack trace from the log output, if available, so that we can see what exception is thrown and from where
  • tell us more about your setup (what kind of app this is; your pip freeze would also be helpful)
  • ideally, provide example code that triggers the logs

When do the logs appear? On every request? Only specific requests? Or is there a different trigger entirely?

@udk2195
Author

udk2195 commented Apr 16, 2024

[Screenshot 2024-04-16 at 1.40.23 PM: the kind of logs I'm getting] @sentrivana

This is how I'm initializing Sentry:

import sentry_sdk

if config.micros_env != "local":
    sentry_sdk.init(
        dsn="https://[email protected]/4506850927837184",
        # Set traces_sample_rate to 1.0 to capture 100%
        # of transactions for performance monitoring.
        # We recommend adjusting this value in production.
        traces_sample_rate=1.0,
        environment=config.micros_env,
    )

This is what my requirements.txt looks like:

fastapi==0.109.1
uvicorn[standard]==0.22.0
gunicorn==20.1.0
pydantic==1.10.13
colorlog==6.7.0
datadog==0.44.0
python-json-logger>=2.0.1
boto3>=1.14.12
aioresponses==0.7.6
orjson==3.9.*
starlette==0.35.0
pympler==1.0.1
scikit-learn==1.0.2
tensorflow>=2.13.*
tensorflow_hub==0.13.0
optimum[onnxruntime]==1.17.1
onnx==1.15.0
onnxruntime==1.17.1
numpy==1.23
sentence-transformers==2.4.0
sentry-sdk==1.14.0
torch==2.0.*
Werkzeug==3.0.1

@udk2195
Author

udk2195 commented Apr 16, 2024

@sentrivana I also tried fixing it by upgrading to the latest Sentry version, 1.45.0, but the issue still persists.

@sentrivana
Contributor

Thanks @udk2195 -- that was my next question, whether upgrading solves the issue.

I don't have much to go by unless we can either get the full stack trace or there's a way for me to reproduce this. All I can tell from the two log lines you're getting is that there's an exception passing through either the logging integration or the Starlette/FastAPI integration (or both). It could also be an exception in your app, and you're seeing the part of the stack trace that includes SDK code.

Is there any way you can get the full stack trace rather than the truncated one? Are you seeing any errors in Sentry? When do you see these logs being emitted (on startup, while the app is running, etc.)?

@udk2195
Author

udk2195 commented Apr 16, 2024

@sentrivana Unfortunately, that's all the information I see in Splunk; I don't have any other stack trace to share with you. Is it possible to completely disable the logs emitted by the Sentry SDK while still reporting errors to the Sentry server?

@sentrivana
Contributor

Gotcha @udk2195. The problem, though, is that if you're seeing those logs, chances are things are not working as expected, so there might be a blind spot in what gets reported by Sentry. The best course of action would be to fix the underlying issue. The error causing this might already be in your Sentry -- anything there that looks like it could be related? It'd probably have more info attached to it that might help us get to the root of the issue.

If you really want to silence the logs altogether, there is probably no nice solution. They look like regular stderr exception tracebacks, so you'd need to somehow tell Python not to log exceptions to stderr, set sys.stderr to None, or something similar. But I don't know of a way to do that only for the SDK without affecting the rest of your app.
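
For completeness, a crude sketch of the sys.stderr idea (this silences all stderr output for the whole process, which is exactly why I wouldn't recommend it):

    import os
    import sys

    # Crude, process-wide workaround: redirect stderr to /dev/null so the
    # tracebacks never show up in the logs. This hides ALL stderr output,
    # not just the SDK's, so use with caution.
    sys.stderr = open(os.devnull, "w")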

@rusmux

rusmux commented Jun 18, 2024

The Sentry logging integration patches logging.Logger.callHandlers and changes the reported source of the caller.

Without Sentry (or if I set default_integrations=False):

2024-06-18 11:51:00.989 | INFO     | uvicorn.server:serve:74 - Started server process [66277]
2024-06-18 11:51:00.990 | INFO     | uvicorn.lifespan.on:startup:48 - Waiting for application startup.
2024-06-18 11:51:00.992 | INFO     | uvicorn.lifespan.on:startup:62 - Application startup complete.

With Sentry (and therefore with logging integration):

2024-06-18 11:52:42.503 | INFO     | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Started server process [66522]
2024-06-18 11:52:42.506 | INFO     | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Waiting for application startup.
2024-06-18 11:52:42.508 | INFO     | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Application startup complete.

Disabling only the logging integration is not easy for now; see #3166.
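
For what it's worth, the documented knobs on the logging integration only control which log records become breadcrumbs and events; as far as I can tell, the callHandlers patch stays in place, so the caller reported by Loguru is still sentry_patched_callhandlers. A sketch of that configuration (placeholder DSN):

    import logging

    import sentry_sdk
    from sentry_sdk.integrations.logging import LoggingIntegration

    # Tune which log records become breadcrumbs and events. This does not
    # remove the integration itself, so callHandlers is still patched and
    # the reported caller frame is unchanged.
    sentry_sdk.init(
        dsn="...",  # placeholder DSN
        integrations=[
            LoggingIntegration(level=logging.INFO, event_level=logging.ERROR),
        ],
    )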

@sentrivana
Contributor

Hey @rusmux, can you please open a new issue with this?

Will close this issue since we haven't heard back from the OP.

sentrivana closed this as not planned on Jun 18, 2024
@jayqi

jayqi commented Aug 6, 2024

I ran into the problem described in this issue thread and found a workaround.

To provide more context about my specific situation:

  • I am using Loguru, which logs out the calling frame.
  • I've set up an InterceptHandler per this example in loguru's README, which intercepts standard logging messages and then sinks them with Loguru.
  • I am using sentry_sdk with default integrations, which includes the logging integration.

(I think @rusmux is at least using Loguru, judging by the formatting of the provided logs, though I'm not sure about the InterceptHandler.)

In my case, any standard logging records that got intercepted by Loguru were being attributed to sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 rather than to the real original calling frame. (These are Django server logs that should have been attributed to django.utils.log:log_response:241 and django.core.servers.basehttp:log_message:212.)

The workaround is that I added a condition to the logic in InterceptHandler to recognize sentry_sdk.integrations.logging:sentry_patched_callhandlers and continue past it. Specifically, changing this while loop condition:

        while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):

to:

        while frame and (
            depth == 0
            or frame.f_code.co_filename == logging.__file__
            or frame.f_code.co_filename == sentry_sdk.integrations.logging.__file__
        ):

This fixed the problem for me.

Before:

2024-08-06 05:47:50.227 | WARNING  | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - Not Found: /favicon.ico
2024-08-06 05:47:50.228 | WARNING  | sentry_sdk.integrations.logging:sentry_patched_callhandlers:97 - "GET /favicon.ico HTTP/1.1" 404 18934

With modification to InterceptHandler:

2024-08-06 06:01:22.019 | WARNING  | django.utils.log:log_response:241 - Not Found: /favicon.ico
2024-08-06 06:01:22.019 | WARNING  | django.core.servers.basehttp:log_message:212 - "GET /favicon.ico HTTP/1.1" 404 18934
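
For anyone who wants to copy this, here is a fuller sketch of the handler with the change applied, assuming it otherwise follows the InterceptHandler example from loguru's README:

    import inspect
    import logging

    import sentry_sdk.integrations.logging
    from loguru import logger


    class InterceptHandler(logging.Handler):
        def emit(self, record: logging.LogRecord) -> None:
            # Map the standard logging level to a Loguru level if one exists.
            try:
                level = logger.level(record.levelname).name
            except ValueError:
                level = record.levelno

            # Walk up the stack past frames that belong to the logging module
            # *and* to Sentry's logging integration, so Loguru attributes the
            # record to the real caller.
            frame, depth = inspect.currentframe(), 0
            while frame and (
                depth == 0
                or frame.f_code.co_filename == logging.__file__
                or frame.f_code.co_filename == sentry_sdk.integrations.logging.__file__
            ):
                frame = frame.f_back
                depth += 1

            logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())


    # Route standard logging through Loguru via the intercept handler.
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)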

@sentrivana
Contributor

Hey @jayqi, thanks for taking the time to investigate and write down the solution -- the fix makes sense to me. Would you like to submit a PR with it?
