Should logs be spanId centric in Trace Timeline? #215

Open
codefromthecrypt opened this issue Feb 25, 2025 · 1 comment

Comments

@codefromthecrypt

Logs can include a trace ID and a span ID. When a log has a span ID and a specific span is selected in the Trace Timeline, should we show only that span's logs instead of all of the trace's logs?

[Image: screenshot of the Trace Timeline with the logs pane]
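A minimal sketch of the proposed behavior, using a hypothetical data model (not otel-tui's actual code): given all logs, narrow to the trace first, then to the selected span when one is chosen.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogEntry:
    body: str
    trace_id: Optional[str] = None
    span_id: Optional[str] = None

def logs_for_selection(logs, trace_id, selected_span_id=None):
    """Return the trace's logs; narrow to the selected span when one is chosen."""
    in_trace = [l for l in logs if l.trace_id == trace_id]
    if selected_span_id is None:
        return in_trace
    return [l for l in in_trace if l.span_id == selected_span_id]

logs = [
    LogEntry("tool call start", trace_id="t1", span_id="s1"),
    LogEntry("model response", trace_id="t1", span_id="s2"),
    LogEntry("unrelated", trace_id="t2", span_id="s9"),
]
print(len(logs_for_selection(logs, "t1")))        # 2 — all logs in the trace
print(len(logs_for_selection(logs, "t1", "s1")))  # 1 — only the selected span's logs
```

A follow-up design question this sketch leaves open: whether logs that carry a trace ID but no span ID should still appear when a span is selected.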

python3 -m venv .venv
source .venv/bin/activate
pip install "python-dotenv[cli]"
pip install -r requirements.txt
dotenv run -- python main.py

requirements.txt

pydantic-ai[logfire]~=0.0.25
httpx~=0.28.1

opentelemetry-sdk~=1.30.0
opentelemetry-exporter-otlp-proto-http~=1.30.0
opentelemetry-distro~=0.51b0

main.py

import os

import httpx
import logfire

from pydantic_ai import Agent
from pydantic_ai.models.instrumented import InstrumentedModel
from pydantic_ai.models.openai import OpenAIModel

logfire.configure()

def get_latest_elasticsearch_version() -> str:
    """
    Returns the latest GA version of Elasticsearch in "X.Y.Z" format.
    """
    response = httpx.get("https://artifacts.elastic.co/releases/stack.json")
    releases = response.json()["releases"]
    # Filter out non-release versions (e.g. -rc1), and any " GA" suffix
    versions = [r["version"].replace(" GA", "") for r in releases if "-" not in r["version"]]
    # Avoid lexicographic sort by comparing as a numeric tuple (X, Y, Z)
    return max(versions, key=lambda v: tuple(map(int, v.split("."))))


def main():
    model = InstrumentedModel(OpenAIModel(os.getenv("CHAT_MODEL", "gpt-4o-mini")))
    agent = Agent(model, tools=[get_latest_elasticsearch_version])

    result = agent.run_sync("What's the latest version of Elasticsearch?")
    print(result.data)


if __name__ == "__main__":
    main()
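The comment in `get_latest_elasticsearch_version` about avoiding a lexicographic sort matters because version strings compare character by character, so "9.0.0" ranks above "10.0.0". A quick standalone check with assumed sample versions:

```python
versions = ["8.17.2", "9.0.0", "10.0.0"]

# Plain string comparison is lexicographic: '9' > '1', so it picks the wrong max
print(max(versions))  # 9.0.0

# Comparing as numeric tuples (X, Y, Z) gives the true latest version
print(max(versions, key=lambda v: tuple(map(int, v.split(".")))))  # 10.0.0
```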

.env

# Override default ENV variables for Ollama
OPENAI_BASE_URL=http://localhost:11434/v1
OPENAI_API_KEY=unused
# Need a larger model as 0.5b hallucinates on tool calls
CHAT_MODEL=qwen2.5:3b

# Disable sending to LogFire as we are sending to OTLP
LOGFIRE_SEND_TO_LOGFIRE=0

# OpenTelemetry configuration
OTEL_SERVICE_NAME=python-logfire-pydantic-ai
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
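`dotenv run --` exports these variables into the process environment before `main.py` reads them, which is why `os.getenv("CHAT_MODEL", "gpt-4o-mini")` picks up the Ollama override. A rough illustration of that mechanism (parsing deliberately simplified; real `.env` files also support comments and quoting, and a plain dict stands in for `os.environ`):

```python
env = {}  # stand-in for os.environ

# Simulate what `dotenv run --` does: load key=value pairs before the app starts
for line in """\
OPENAI_BASE_URL=http://localhost:11434/v1
OPENAI_API_KEY=unused
CHAT_MODEL=qwen2.5:3b
""".splitlines():
    key, _, value = line.partition("=")
    env.setdefault(key, value)

# The app's lookup now sees the override instead of its built-in default
print(env.get("CHAT_MODEL", "gpt-4o-mini"))  # qwen2.5:3b
```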
@ymtdzzz
Owner

ymtdzzz commented Mar 2, 2025

Good enhancement request. Thanks also for the detailed instructions on how to reproduce it.
