
Provide a callback whenever retry is triggered #1190

Open
ShuaiShao93 opened this issue Feb 24, 2024 · 16 comments
Labels: enhancement (New feature or request)

@ShuaiShao93

Confirm this is a feature request for the Python library and not the underlying OpenAI API.

  • This is a feature request for the Python library

Describe the feature or improvement you're requesting

Is it possible to provide a callback whenever a retry is triggered internally, so that we can know when and how the requests failed?

Additional context

We want to give our users some insights if some OpenAI requests fail with rate limit error

@rattrayalex rattrayalex added the enhancement New feature or request label Feb 24, 2024
@rattrayalex (Collaborator)

Thanks for the suggestion – I don't think we have a great way to do this right now, but it seems like a reasonable idea. I don't expect to get to it soon, but will keep the issue open.

@ShuaiShao93 (Author)

Can we just pass in a callback like this:

```python
def error_callback(exception: Exception):
    ...  # custom logic

completion = await openai_client.chat.completions.create(
    model=model, ..., error_callback=error_callback
)
```

And here do:

```python
except Exception as err:
    error_callback(err)
```

@rattrayalex (Collaborator)

I doubt that's how we'd design it. You could use httpx event_hooks: https://www.python-httpx.org/advanced/event-hooks/

@ShuaiShao93 (Author)

> I doubt that's how we'd design it. You could use httpx event_hooks: https://www.python-httpx.org/advanced/event-hooks/

Ah I didn't know we can use custom httpx Client. This works for us, thanks!

@rattrayalex (Collaborator)

Great! It'd be appreciated if you could share the (rough) code snippet you end up using, so others can benefit.

@ShuaiShao93 (Author) commented Feb 26, 2024

> Great! It'd be appreciated if you could share the (rough) code snippet you end up using, so others can benefit.

Sure, this block works for me:

```python
from openai import OpenAI, DEFAULT_TIMEOUT
from openai._base_client import SyncHttpxClientWrapper

def _print_response_status_code(response):
    print("Response status code", response.status_code)

http_client = SyncHttpxClientWrapper(
    base_url="https://api.openai.com/v1",
    timeout=DEFAULT_TIMEOUT,
    follow_redirects=True,
    event_hooks={"response": [_print_response_status_code]},
)
client = OpenAI(http_client=http_client)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
```

It's still a bit annoying to have to import SyncHttpxClientWrapper and manually set all the defaults (like base_url, timeout, follow_redirects, etc.) for its constructor, though. If it were possible to expose SyncHttpxClientWrapper with all default arguments, it would make life much easier.

@setu4993

@rattrayalex : +1 on this request... Would you be open to a contribution to pass along event_hooks from the OpenAI client initialization to the constructor for the client?

We'd really like to avoid having to redefine the http_client and instead leverage what OpenAI provides out of the box.

@rattrayalex (Collaborator)

> It's still a bit annoying to have to import SyncHttpxClientWrapper and manually set all the defaults (like base_url, timeout, follow_redirects, etc.) for its constructor, though. If it were possible to expose SyncHttpxClientWrapper with all default arguments, it would make life much easier.

That's a good point – it'd be nice for us to expose the default args at least, if not a class which has those defaults. I don't think you should really need to import SyncHttpxClientWrapper instead of just httpx.Client though.

> Would you be open to a contribution to pass along event_hooks from the OpenAI client initialization to the constructor for the client?

I don't think we'd want it to be event_hooks but something closer to on_request=. We still have to settle on the design internally first, so I don't think a contribution at this time would be a wise investment. Thank you though!

cc @RobertCraigie on the above – we should get this slated.

@RobertCraigie (Collaborator)

Yes, you definitely should not have to use SyncHttpxClientWrapper; that's an internal-only thing that we use for reference counting.

It might be a good idea to provide something like this, though, which would use our default options. Thoughts?

```python
import openai

client = openai.OpenAI(
    http_client=openai.DefaultHttpxClient(...),
)
```

@ShuaiShao93 (Author)

@rattrayalex @RobertCraigie DefaultHttpxClient looks good to me! Can you help expose that for us? Really appreciate it!

@setu4993 commented Mar 3, 2024

I really like the idea of a DefaultHttpxClient that users can adopt without having to re-set the default properties, while keeping the other configurable options that httpx's client currently offers.

I'm having a bit of a hard time connecting the dots from on_request handlers to something akin to event_hooks, especially because what I'm trying to get at is not just when a request occurred, but when a 4xx response / retry occurred. That necessitates skipping the first request and then catching the subsequent ones.

I'm probably just missing some connection that is likely obvious to the maintainers here, and that's totally fine; I'm just saying out loud that it'd be nice to get some docs on the usage when this gets worked on.

Appreciate the quick feedback, iterations on design and openness to implementation here a bunch, OpenAI team!

@rattrayalex (Collaborator)

You're correct that it isn't trivial, which is why we will need to take our time designing it, and on_request and similar niceties may not land right away.

However, DefaultHttpxClient is more straightforward and should be able to happen sooner.

We plan to do both!

@setu4993 commented Mar 3, 2024

Great to hear that there's potential for both of those approaches to be supported natively. Thanks for adding this to the roadmap and proactively engaging here!

@ShuaiShao93 (Author)

@rattrayalex @RobertCraigie Thanks! Is there any ETA for the straightforward DefaultHttpxClient? This is blocking us on an important project, so I'd really appreciate it if it could be shipped asap.

@RobertCraigie (Collaborator)

Out of curiosity, how is this blocking you? That class would just be changing the defaults, and you can change the defaults yourself as well; e.g. this will be functionally exactly the same as the solution we provide:

```python
import openai
import httpx

client = openai.OpenAI(
    http_client=httpx.Client(
        timeout=openai.DEFAULT_TIMEOUT,
        follow_redirects=True,
        limits=httpx.Limits(max_connections=100, max_keepalive_connections=20),  # also currently the httpx default
    )
)
```

@ShuaiShao93 (Author)

@RobertCraigie Yeah, this is not a blocker. We can move forward with this, but please keep us posted when the formal solution is out. Thanks!

4 participants