External vLLM instance #319
Conversation
I like this version, it looks very clean. What about calling it |
Updated it to |
Looks good, thank you
Let me run the linters and then merge
Can you just run `isort .` and `black .` please?
Sure, will do.
Added the mock arg and ran `black`.
Thank you!
I haven't made a new release yet, you'll have to install from the git repo instead of |
Gotcha, thanks |
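Until a release is cut, installing straight from the git repo generally looks like the following. This is a hedged sketch: the repository path is an assumption and may need adjusting to the actual project URL.

```shell
# Hypothetical sketch: install the package directly from GitHub
# until a new release is published (repo path assumed).
pip install "git+https://github.com/allenai/olmocr.git"
```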
First of all, big fan of your work @jakep-allenai! The model performs outstandingly. I ran into the same confusion as @Zoe-Leaf. In my team we use a
Do you think that something similar could be implemented here?
This is what I wanted to say too, so as to avoid such misunderstandings.
Same thing as #309 but with fewer changes. Happy to close mine if we're going with that one; I just want the feature added.
Tested with
Should probably have called it something other than `external-vllm-url`, as it will presumably work with anything that exposes the OpenAI-compatible API (e.g. SGLang).
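Since any OpenAI-compatible server should work, here is a minimal sketch of how such a URL might be normalized before use. `openai_base_url` is a hypothetical helper for illustration only, not part of this PR; the PR's actual flag is `external-vllm-url` and its handling may differ.

```python
def openai_base_url(server_url: str) -> str:
    """Normalize an external server URL (vLLM, SGLang, or any other
    OpenAI-compatible endpoint) into a base URL ending in /v1.

    Hypothetical helper, assumed for illustration only.
    """
    url = server_url.strip().rstrip("/")
    if "://" not in url:
        # Assume plain HTTP when no scheme is given.
        url = "http://" + url
    if not url.endswith("/v1"):
        # The OpenAI-compatible API is conventionally served under /v1.
        url += "/v1"
    return url
```

An OpenAI client pointed at `openai_base_url("localhost:8000")` would then talk to the externally launched vLLM (or SGLang) server instead of one started by the pipeline itself.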