
[BUG]: OpenAI Agent SDK integration not working properly #1402

Closed
JanWerder opened this issue Mar 21, 2025 · 6 comments · Fixed by #1426
Labels
bug Something isn't working · language: python Related to Python integration

Comments

@JanWerder

Where do you use Phoenix

Local (self-hosted)

What version of Phoenix are you using?

7.8.1

What operating system are you seeing the problem on?

Windows

What version of Python are you running Phoenix with?

3.12.9

What version of Python or Node are you using instrumentation with?

What instrumentation are you using?

Python

arize-phoenix-otel == 0.6.1

What happened?

Following this tutorial, setting up Phoenix with a local installation doesn't work.

custom_client = AsyncOpenAI(
    base_url="~~", api_key="~~")
set_default_openai_client(custom_client)
set_default_openai_api("chat_completions")

from phoenix.otel import register

tracer_provider = register(
  project_name="agents", 
  auto_instrument=True,
  endpoint="http://localhost:6006/v1/traces",
  batch=True
)

This leads to the following error:

  File "C:\Users\janwe\aidev\agent-sdk-demo\main.py", line 26, in <module>
    tracer_provider = register(
                      ^^^^^^^^^
TypeError: register() got an unexpected keyword argument 'auto_instrument'

If you remove the auto_instrument argument, the normal OpenAI endpoint is still called instead of the local endpoint.

What did you expect to happen?

I would have expected the agent to log traces to my local Phoenix instance instead.

How can we reproduce the bug?

Use the following script and fill in your OpenAI details:

import json

from agents import Agent, Runner, FunctionTool, RunContextWrapper, function_tool
from agents import set_default_openai_client, set_default_openai_api, set_tracing_disabled, set_trace_processors
from openai import AsyncOpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.semconv.resource import ResourceAttributes
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace import Status, StatusCode
from phoenix.otel import register
from typing_extensions import TypedDict, Any

custom_client = AsyncOpenAI(
    base_url="~~", api_key="~~")
set_default_openai_client(custom_client)
set_default_openai_api("chat_completions")

tracer_provider = register(
  project_name="agents", 
  endpoint="http://localhost:6006/v1/traces",
  batch=True
)

class Location(TypedDict):
    lat: float
    long: float


@function_tool
async def fetch_weather(location: Location) -> str:
    """Fetch the weather for a given location.

    Args:
        location: The location to fetch the weather for.
    """
    # In real life, we'd fetch the weather from a weather API
    return "sunny"


@function_tool
async def fetch_city_info_coordinates(city: str) -> Location:
    """Fetch information about a given city.

    Args:
        city: The city to fetch information for.
    """
    return {"lat": 52.52, "long": 13.405}

agent = Agent(name="Assistant", instructions="You are a helpful assistant", tools=[
              fetch_city_info_coordinates, fetch_weather])

result = Runner.run_sync(agent, "What's the weather in Berlin?")
print(result.final_output)

Additional information

I've worked up the following code, which works in the sense that it logs to my local instance and doesn't send the information to OpenAI, but every span is on its own and the requests are not grouped into traces, which is not ideal.

import json

from agents import Agent, Runner, FunctionTool, RunContextWrapper, function_tool
from agents import set_default_openai_client, set_default_openai_api, set_tracing_disabled, set_trace_processors
from openai import AsyncOpenAI
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.semconv.resource import ResourceAttributes
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace import Status, StatusCode
from phoenix.otel import register
from typing_extensions import TypedDict, Any

custom_client = AsyncOpenAI(
    base_url="~~", api_key="~~")
set_default_openai_client(custom_client)
set_default_openai_api("chat_completions")

endpoint = "http://localhost:6006/v1/traces"
resource = Resource(attributes={
    ResourceAttributes.PROJECT_NAME: 'agents'
})
trace_provider = TracerProvider(resource=resource)
exporter = OTLPSpanExporter(
    endpoint=endpoint
)
trace_provider.add_span_processor(BatchSpanProcessor(exporter))
set_trace_processors([])
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)
trace.set_tracer_provider(trace_provider)
tracer = trace.get_tracer(__name__)


class Location(TypedDict):
    lat: float
    long: float


@function_tool
async def fetch_weather(location: Location) -> str:
    """Fetch the weather for a given location.

    Args:
        location: The location to fetch the weather for.
    """
    # In real life, we'd fetch the weather from a weather API
    return "sunny"


@function_tool
async def fetch_city_info_coordinates(city: str) -> Location:
    """Fetch information about a given city.

    Args:
        city: The city to fetch information for.
    """
    return {"lat": 52.52, "long": 13.405}

agent = Agent(name="Assistant", instructions="You are a helpful assistant", tools=[
              fetch_city_info_coordinates, fetch_weather])

with tracer.start_as_current_span("agent_execution") as parent_span:
    result = Runner.run_sync(agent, "What's the weather in Berlin?")
    print(result.final_output)

@mikeldking
Contributor

Hey @JanWerder - auto_instrument=True not working could be a symptom of the version of phoenix.otel you are using. Could you try

pip install --upgrade arize-phoenix-otel
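
Once upgraded, the register call from the tutorial should be accepted as-is (a minimal sketch, assuming the upgrade pulls in a version that supports auto_instrument):

from phoenix.otel import register

tracer_provider = register(
    project_name="agents",
    auto_instrument=True,  # auto-instruments installed OpenInference libraries
    endpoint="http://localhost:6006/v1/traces",
    batch=True,
)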

On the second one - let me try to reproduce it for you with @RogerHYang. We did set the context within the instrumentation itself, but we didn't really look into wrapping with tracer.start_span.

@mikeldking mikeldking removed the triage Issues that require triage label Mar 21, 2025
@mikeldking mikeldking transferred this issue from Arize-ai/phoenix Mar 21, 2025
@dosubot dosubot bot added the language: python Related to Python integration label Mar 21, 2025

dosubot bot commented Mar 21, 2025


The error you're encountering is due to the register function in the phoenix.otel module not having an auto_instrument parameter. To resolve this, remove the auto_instrument argument from your register call. The valid parameters for the register function are endpoint, project_name, batch, set_global_tracer_provider, headers, protocol, and verbose; auto_instrument is not one of them [1].

To ensure that the agent logs to your local instance instead of the OpenAI endpoint, make sure you correctly set the endpoint parameter to your local instance URL. Additionally, ensure that your local instance is running and accessible at the specified endpoint.

Here's how you can modify your script:

from phoenix.otel import register

tracer_provider = register(
    project_name="agents",
    endpoint="http://localhost:6006/v1/traces",
    batch=True
)

If you want to ensure that the OpenAI requests are traced correctly, you can use the OpenAIInstrumentor to instrument the OpenAI client, as shown in your additional code snippet. This setup will help in grouping requests into traces more effectively.
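
For example, a minimal sketch combining the two suggestions (names and endpoint taken from the snippets above; assuming a local Phoenix instance is running):

from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# register() returns a tracer provider pointed at the local Phoenix collector
tracer_provider = register(
    project_name="agents",
    endpoint="http://localhost:6006/v1/traces",
    batch=True,
)
# instrument the OpenAI client so LLM calls are exported as spans
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)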


@mikeldking mikeldking removed their assignment Mar 21, 2025
@RogerHYang
Contributor

RogerHYang commented Mar 21, 2025

Regarding the second question, can you add the following snippet?

from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor

OpenAIAgentsInstrumentor().instrument(tracer_provider=trace_provider)
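(Here trace_provider refers to the TracerProvider created in the workaround script above; if you use phoenix.otel's register instead, pass the provider it returns.)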

With that I was able to get the screenshot below.

[Screenshot: the resulting traces grouped in Phoenix]

@JanWerder
Author

@RogerHYang thanks for taking a look. I've upgraded my arize-phoenix-otel to 0.9.0 and have been using the following code:

tracer_provider = register(
    project_name="agents",
    endpoint="http://localhost:6006/v1/traces",
    batch=True
)
set_trace_processors([])
OpenAIAgentsInstrumentor().instrument(tracer_provider=tracer_provider)

With that, the information gets logged to Phoenix, but my token count is zero and the steps do not contain any information.

@RogerHYang
Contributor

RogerHYang commented Mar 24, 2025

I'm working on a fix now. Thank you for bringing this to our attention.

In the meantime, you can work around it by commenting out the following line of code:

set_default_openai_api("chat_completions")

Alternatively, you can enable the OpenAI instrumentor at the same time:

from openinference.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

@RogerHYang
Contributor

We released a fix in openinference-instrumentation-openai-agents>=0.1.4.
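
For reference, a pip upgrade along these lines should pull it in (adjust to your environment):

pip install --upgrade "openinference-instrumentation-openai-agents>=0.1.4"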

Please give it a try and let us know if you need help with anything else. Thanks!
