Summary
When logfire.suppress_scopes() is used to filter out Sentry SDK instrumentation spans, Sentry SDK's propagator raises an AttributeError, because the resulting NonRecordingSpan objects lack the .attributes property that Sentry SDK expects.
Environment
- Logfire version: 4.14.2
- Sentry SDK version: 3.0.0a5
- OpenTelemetry SDK: 1.37.0
- Python: 3.12
- Framework: Django with Gunicorn/Uvicorn workers
Reproduction Steps
- Set up a Django application with both Logfire and Sentry SDK instrumentation
- Initialize Logfire with standard instrumentation:

import logfire

logfire.configure(...)
logfire.instrument_django()
logfire.instrument_psycopg()

- Initialize Sentry SDK with OpenTelemetry integration:

import sentry_sdk

sentry_sdk.init(
    dsn="...",
    integrations=[...],
    enable_tracing=True,
)

- Attempt to suppress Sentry SDK spans from Logfire:

logfire.suppress_scopes("sentry_sdk.tracing")

- Make HTTP requests to the Django application
Expected Behavior
Sentry SDK spans should be excluded from Logfire traces while remaining functional for Sentry's backend.
Actual Behavior
Application crashes with:
AttributeError: 'NonRecordingSpan' object has no attribute 'attributes'. Did you mean: 'set_attributes'?

Full traceback:
File "/app/.venv/lib/python3.12/site-packages/sentry_sdk/opentelemetry/propagator.py", line 114, in inject
span_url = span.get_attribute(SpanAttributes.HTTP_URL)
File "/app/.venv/lib/python3.12/site-packages/sentry_sdk/tracing.py", line 438, in get_attribute
or not self._otel_span.attributes
AttributeError: 'NonRecordingSpan' object has no attribute 'attributes'
Root Cause
This is an architectural incompatibility between Logfire's suppression mechanism and Sentry SDK's expectations:
- Logfire's suppress_scopes() converts matching spans into NonRecordingSpan objects
- NonRecordingSpan only has a .set_attributes() method, not an .attributes property
- Sentry SDK's propagator (via get_attribute() at line 438 in sentry_sdk/tracing.py) directly accesses span.attributes for context propagation
- This causes an AttributeError when Sentry SDK tries to access the suppressed span
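The mismatch can be reproduced in isolation against the OpenTelemetry API alone; a minimal sketch (no Logfire or Sentry SDK involved):

from opentelemetry.trace import INVALID_SPAN_CONTEXT, NonRecordingSpan

span = NonRecordingSpan(INVALID_SPAN_CONTEXT)

# The writer-side API exists (as a no-op) on non-recording spans...
span.set_attributes({"http.url": "https://example.com"})

# ...but the reader-side .attributes property is only defined on the SDK's
# ReadableSpan, so any code that reads span.attributes fails:
try:
    span.attributes
except AttributeError as exc:
    print(exc)  # 'NonRecordingSpan' object has no attribute 'attributes'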
Workaround
Instead of using suppress_scopes(), filter spans at the export level by wrapping Logfire's exporter:
from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult


class FilteringSentryScopeExporter(SpanExporter):
    """Wraps an exporter to filter out Sentry SDK spans.

    This allows us to exclude Sentry SDK instrumentation spans from Logfire
    while keeping them available for Sentry's own backend. Unlike using
    logfire.suppress_scopes(), this approach doesn't convert spans to
    NonRecordingSpan, which would break Sentry SDK's propagator.
    """

    def __init__(self, wrapped_exporter: SpanExporter):
        self._wrapped_exporter = wrapped_exporter

    def export(self, spans: list[ReadableSpan]) -> SpanExportResult:
        # Drop spans from the sentry_sdk.tracing instrumentation scope;
        # keep everything else, including spans with no scope at all
        filtered_spans = [
            span
            for span in spans
            if span.instrumentation_scope is None
            or span.instrumentation_scope.name != "sentry_sdk.tracing"
        ]
        return self._wrapped_exporter.export(filtered_spans)

    def shutdown(self) -> None:
        self._wrapped_exporter.shutdown()

    def force_flush(self, timeout_millis: int = 30000) -> bool:
        return self._wrapped_exporter.force_flush(timeout_millis)

Then wrap Logfire's exporter after initialization (this is complex due to the nested processor architecture).
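The wrapping snippet below relies on a find_batch_processors helper to walk Logfire's nested/wrapped span processors. Here is a minimal sketch of such a helper; the wrapper attribute names it probes ("processor", "span_processor", "_span_processors") are assumptions about Logfire/OpenTelemetry internals, not a documented API:

from opentelemetry.sdk.trace.export import BatchSpanProcessor


def find_batch_processors(processor) -> list[BatchSpanProcessor]:
    """Recursively collect BatchSpanProcessor instances from (possibly wrapped) processors."""
    found = []
    if isinstance(processor, BatchSpanProcessor):
        found.append(processor)
    # Single-processor wrappers (attribute names are guesses at Logfire's internals)
    for attr in ("processor", "span_processor"):
        inner = getattr(processor, attr, None)
        if inner is not None and inner is not processor:
            found.extend(find_batch_processors(inner))
    # Multi-processor containers such as SynchronousMultiSpanProcessor
    for inner in getattr(processor, "_span_processors", ()) or ():
        found.extend(find_batch_processors(inner))
    return found

With the helper in place, locate Logfire's exporter and wrap it: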
from opentelemetry import trace
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Get the TracerProvider
provider = trace.get_tracer_provider()
real_provider = provider.provider if hasattr(provider, "provider") else provider

# Navigate to the BatchSpanProcessor and wrap its exporter
if hasattr(real_provider, "_active_span_processor"):
    processor = real_provider._active_span_processor
    if hasattr(processor, "_span_processors"):
        for span_processor in processor._span_processors:
            # Recursively find BatchSpanProcessors
            batch_processors = find_batch_processors(span_processor)
            for batch_proc in batch_processors:
                if hasattr(batch_proc, "_batch_processor"):
                    original_exporter = batch_proc._batch_processor._exporter
                    exporter_type = str(type(original_exporter))
                    if "logfire" in exporter_type.lower():
                        batch_proc._batch_processor._exporter = FilteringSentryScopeExporter(
                            original_exporter
                        )
                        break

Suggested Solutions
Option 1: Document the incompatibility
Add documentation warning about using suppress_scopes() with instrumentation that accesses span attributes directly (like Sentry SDK).
Option 2: Provide a built-in filtering mechanism
Add a parameter to Logfire configuration to filter spans at export time rather than suppressing them:
logfire.configure(
    ...,
    filter_scopes=["sentry_sdk.tracing"],  # Filter at export, don't suppress
)

Option 3: Make suppress_scopes() preserve the attributes property
Modify the suppression mechanism to create a span wrapper that maintains the .attributes property for read access while still preventing recording.
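One possible shape for such a wrapper; a purely hypothetical sketch of the idea, not Logfire's actual implementation:

from opentelemetry.trace import NonRecordingSpan
from opentelemetry.util.types import Attributes


class ReadableNonRecordingSpan(NonRecordingSpan):
    """Hypothetical: a non-recording span that also exposes an empty, read-only
    .attributes mapping, so instrumentation that reads span.attributes
    (such as Sentry SDK's propagator) does not crash."""

    @property
    def attributes(self) -> Attributes:
        # Nothing is recorded, so there is nothing to expose
        return {}

The suppression mechanism would then hand out spans like this instead of plain NonRecordingSpan instances.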
Impact
This affects any user trying to use both Logfire and Sentry SDK together who wants to avoid duplicate spans in their Logfire traces. The workaround is complex and requires navigating internal OpenTelemetry structures.
Additional Context
This is not a bug in either Logfire or Sentry SDK - both work correctly within their own architectures. The incompatibility arises from conflicting assumptions about span behavior when using both systems together with scope suppression.