OpenAI instrumentation docs fixes #2988
base: main
Conversation
lmolkova commented on Nov 9, 2024 (edited)
- updates broken homepage link to https://pypi.org/project/opentelemetry-instrumentation-openai-v2
- adds docs config
- adds usage samples to readme (see the sketch below)
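For reference, the kind of usage sample being added is along these lines: a minimal sketch of manual instrumentation. The model name and prompt are placeholders, and `OPENAI_API_KEY` plus an OTLP endpoint are assumed to be configured in the environment; the README's actual sample may differ.

```python
from openai import OpenAI
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

# Patch the OpenAI client library so chat completion calls emit spans
# (and, when enabled, prompt/completion events).
OpenAIInstrumentor().instrument()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Write a short poem on OpenTelemetry."}],
)
```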
instrumentation-genai/opentelemetry-instrumentation-openai-v2/README.rst
*************************

When using the instrumentor, all clients will automatically trace OpenAI chat completion operations
and capture prompts and completions as events.
This only happens when OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT is set to true, right? Can we highlight the parameter here as well?
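For context, content capture is opt-in; a minimal sketch of enabling it in the manual-instrumentation case (assuming the variable is read by the instrumentation after it is set; a zero-code setup would export the same variable in the shell instead):

```python
import os

# Assumption: the instrumentation only records prompt/completion content
# on its events when this variable is set to "true"; set it before the
# instrumented calls are made.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
```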
It might be worthwhile to show an example (JSON) of the content shape that is emitted.
It may also be worth stating that, when content capture is enabled, you may have to configure your pipeline to redact any sensitive info contained in the content.
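As a rough illustration of that redaction point (not part of this PR): a minimal sketch of a generic OpenTelemetry SDK log record processor that blanks message content before export. The wrapper pattern, the top-level "content" key, and the "[REDACTED]" placeholder are all assumptions for illustration, not the instrumentation's actual event shape.

```python
from opentelemetry.sdk._logs import LogData, LogRecordProcessor


class RedactingLogRecordProcessor(LogRecordProcessor):
    """Wraps another processor and blanks message content before export."""

    def __init__(self, wrapped: LogRecordProcessor):
        self._wrapped = wrapped

    def emit(self, log_data: LogData) -> None:
        record = log_data.log_record
        # Hypothetical shape: redact a top-level "content" field if the
        # event body is a dict that carries one.
        if isinstance(record.body, dict) and "content" in record.body:
            record.body = {**record.body, "content": "[REDACTED]"}
        self._wrapped.emit(log_data)

    def shutdown(self) -> None:
        self._wrapped.shutdown()

    def force_flush(self, timeout_millis: int = 30000) -> bool:
        return self._wrapped.force_flush(timeout_millis)
```

Such a processor would be registered on the LoggerProvider wrapping, for example, a BatchLogRecordProcessor, so redaction happens regardless of which exporter is configured.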
All examples can be found in semantic conventions - https://github.com/open-telemetry/semantic-conventions/blob/main/docs/gen-ai/gen-ai-events.md and it'd be hard to keep them in sync if we duplicate.
I don't think we should have a redaction example specific to OpenAI instrumentation.
@@ -1,4 +1,4 @@
-OpenTelemetry OpenAI Instrumentation Example
+OpenTelemetry OpenAI Zero-Code Instrumentation Example
 ============================================
Another nit about the zero-code example: opentelemetry-instrument will use OTLP over gRPC by default, which differs slightly from the coded example that uses OTLP/HTTP. If you want the zero-code example to use exactly the same components, you will also have to set the metrics exporter to None explicitly:

dotenv run -- opentelemetry-instrument --traces_exporter otlp_proto_http --metrics_exporter None --logs_exporter otlp_proto_http python main.py
@@ -1,4 +1,4 @@
-OpenTelemetry OpenAI Instrumentation Example
+OpenTelemetry OpenAI Zero-Code Instrumentation Example
 ============================================

 This is an example of how to instrument OpenAI calls with zero code changes,
Users will also need to set the env var OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED to true to enable auto-instrumentation of logging.
Some additional comments.