[BUG]: OpenAI Agent SDK integration not working properly #1402
Comments
hey @JanWerder - the `auto_instrument=True` not working could be a symptom of the version of `phoenix.otel` you are using. Could you try `pip install --upgrade arize-phoenix-otel`? On the second one, let me try to repro for you with @RogerHYang. We did set the context within the instrumentation itself, but we didn't really look into the wrapping using `tracer.start_span`.
Hey there, @JanWerder! I'm here to help you out with any bugs, questions, or contributions you might have. Just give me a shout if you need anything!

The error you're encountering is due to the endpoint configuration. To ensure that the agent logs to your local instance instead of the OpenAI endpoint, make sure you correctly set the `endpoint` when registering the tracer provider. Here's how you can modify your script:

```python
from phoenix.otel import register

tracer_provider = register(
    project_name="agents",
    endpoint="http://localhost:6006/v1/traces",
    batch=True
)
```

If you want to ensure that the OpenAI requests are traced correctly, you can use the `OpenAIInstrumentor`.
@RogerHYang thanks for taking a look. So I've upgraded my arize-phoenix-otel to 0.9.0 and have been using the following code:

With that, the information gets logged to Phoenix, but for me the token count is zero and the steps do not contain any information.
I'm working on a fix now. Thank you for bringing this to our attention. In the meantime, you can work around it by commenting out the following line of code:

```python
set_default_openai_api("chat_completions")
```

Alternatively, you can enable the OpenAI instrumentor at the same time:

```python
from openinference.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```
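Putting the two suggestions together, a minimal end-to-end sketch of the workaround might look like the following. This assumes a local Phoenix instance at the endpoint shown earlier in the thread; the combined wiring is illustrative, not the reporter's actual script:

```python
# Sketch of the workaround: register a tracer provider pointing at the
# local Phoenix collector, then instrument raw OpenAI calls so that
# chat-completion requests are captured even where the Agents SDK
# instrumentation misses them.
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    project_name="agents",
    endpoint="http://localhost:6006/v1/traces",
    batch=True,
)

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```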
We released a fix in `openinference-instrumentation-openai-agents>=0.1.4`. Please give it a try and let us know if you need help with anything else. Thanks!
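To pick up the fix, upgrading the instrumentation package (along with `arize-phoenix-otel`, as suggested earlier in the thread) should be enough, for example:

```shell
# Upgrade to a version of the Agents SDK instrumentation that contains
# the fix, plus the Phoenix OTel package mentioned above.
pip install --upgrade "openinference-instrumentation-openai-agents>=0.1.4" arize-phoenix-otel
```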
Where do you use Phoenix
Local (self-hosted)
What version of Phoenix are you using?
7.8.1
What operating system are you seeing the problem on?
Windows
What version of Python are you running Phoenix with?
3.12.9
What version of Python or Node are you using instrumentation with?
What instrumentation are you using?
Python
arize-phoenix-otel == 0.6.1
What happened?
Following this tutorial, setting up Phoenix with a local installation doesn't work. It leads to the following error:
If you remove `auto_instrument`, the normal OpenAI endpoint is still called instead of the local endpoint.
What did you expect to happen?
Instead, I would have expected the agent to log to my local instance.
How can we reproduce the bug?
Use the following script and fill in your OpenAI details:
Additional information
I've worked up the following code, which works in the sense that it logs to my local instance and doesn't send the information to OpenAI, but every span is on its own and the requests are not grouped into traces, which is not ideal.
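A common cause of ungrouped spans is that each operation starts its own root span. One way to keep related spans in a single trace, sketched here with the generic OpenTelemetry API (the span names are illustrative, and this is not the reporter's actual code):

```python
# Hypothetical sketch: wrap the whole agent run in one parent span so
# that spans created inside it become children of a single trace,
# rather than each one being its own root.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("agent-run"):
    # Spans started here (by instrumentation or manually) pick up the
    # active context and share the parent's trace ID.
    with tracer.start_as_current_span("step-1"):
        pass
```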