How to serve fine-tuned model with vllm #945
After training, the output folder only contains files like `meta_model_0.pt`. If I try to use the vLLM server to serve this model like this: `python -m vllm.entrypoints.openai.api_server --model finetuned_model_path --dtype bfloat16 --port 1235 --max-logprobs 1`, an error shows up saying `finetuned_model_path` does not appear to have a file named `config.json`.
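For context, vLLM resolves whatever is passed to `--model` through the standard Hugging Face layout, so the directory must contain `config.json` and tokenizer files alongside the weights; a Meta-format `meta_model_0.pt` alone is not enough. As a minimal sketch, assuming `finetuned_model_path` (the placeholder from the command above) has already been converted to HF format, the same model can be exercised through vLLM's offline Python API:

```python
# Sketch: load the fine-tuned model with vLLM's offline API. This only
# works once finetuned_model_path is a Hugging Face-format directory
# (config.json, tokenizer files, HF-named weight files).
from vllm import LLM, SamplingParams

llm = LLM(model="finetuned_model_path", dtype="bfloat16")
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```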
Comments

I've copied a few JSON files from the original model folder to my fine-tuned model folder, and the above issue was resolved. But I'm facing another issue now: it seems like we need to do some conversion between the torchtune-generated model format and the HF model format?
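The copy-the-JSONs workaround described in this comment amounts to something like the sketch below; both paths are illustrative placeholders, and which sidecar files exist (`config.json`, `tokenizer_config.json`, `tokenizer.json`, ...) depends on the base model:

```python
# Sketch of the workaround above: copy the HF sidecar JSON files from
# the base model's directory into the torchtune output directory.
# Both paths are placeholders for this example.
import shutil
from pathlib import Path

base_model_dir = Path("original_model_path")
finetuned_dir = Path("finetuned_model_path")

for json_file in base_model_dir.glob("*.json"):
    shutil.copy2(json_file, finetuned_dir / json_file.name)
```

Note this only fixes the missing metadata; the weights in `meta_model_0.pt` are still in Meta format, which is what the conversion question is about.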
I have the same question.
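On the conversion question: the Meta-format checkpoint uses different tensor names (and a different attention-weight layout) than the HF Llama checkpoints vLLM loads. Below is a rough sketch of one route, assuming torchtune's `convert_weights` helpers; the module path and signatures vary across torchtune releases, and the `num_heads`/`num_kv_heads`/`dim` values are Llama-3-8B numbers shown purely for illustration:

```python
# Rough sketch, not a drop-in recipe: map a Meta-format torchtune
# checkpoint to HF tensor naming. Verify convert_weights' location and
# signatures against your torchtune version before relying on this.
import torch
from torchtune.models import convert_weights

meta_sd = torch.load("finetuned_model_path/meta_model_0.pt", map_location="cpu")
tune_sd = convert_weights.meta_to_tune(meta_sd)  # Meta names -> torchtune names
hf_sd = convert_weights.tune_to_hf(
    tune_sd,
    num_heads=32,    # illustrative: Llama-3-8B values; use your model's
    num_kv_heads=8,
    dim=4096,
)
torch.save(hf_sd, "finetuned_model_path/pytorch_model.bin")
# config.json and tokenizer files still have to come from the base HF model.
```

Recent torchtune versions also provide a `FullModelHFCheckpointer` that can be selected as the checkpointer in the training config, so future runs write checkpoints in HF format directly and skip this post-hoc conversion.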