Questions about inference mode #86
Hi,
I'm trying to use your wonderful framework for inference only. However, I'm not familiar with the serving-related settings in your code. How can I remove them, or which part of the code should I change?
By the way, after dumping the HLO graph, I found that the datatype is still fp32 even though I changed the datatype option.

I'm not sure I understand which part you want to remove. The serving script basically implements the inference methods defined in the LMServer class. If you don't want to use the HTTP server, you can easily modify llama_serve.py to call those methods directly without spinning up an HTTP server.
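If it helps, the pattern described above might look roughly like the sketch below. This is only a sketch: the `ModelServer` class name, its constructor, and the `generate` method are assumptions standing in for whatever llama_serve.py actually defines on top of LMServer.

```python
# Sketch: reuse the inference logic from llama_serve.py without the HTTP server.
# `ModelServer` and `generate` are assumed names -- check llama_serve.py for
# the actual class and method signatures.
from llama_serve import ModelServer

def main():
    # Builds the model and loads the checkpoint, same as the serving path.
    server = ModelServer()
    # Call the inference method directly instead of going through HTTP.
    outputs = server.generate(["The quick brown fox"])
    print(outputs)

if __name__ == "__main__":
    main()
```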
OK, got it, thanks for your reply. By the way, how do I change the datatype of the whole model? As I said before, even after setting the datatype option, the dumped HLO graph still shows fp32.
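In case it's useful, a common way to cast an entire parameter tree in JAX looks like the following. This is a generic sketch, not necessarily how this repo's datatype option is wired; `params` is a placeholder for the loaded checkpoint pytree.

```python
import jax
import jax.numpy as jnp

def cast_params(params, dtype=jnp.bfloat16):
    """Cast every floating-point leaf of a parameter pytree to `dtype`."""
    def cast(x):
        # Only cast floating-point arrays; leave integer leaves untouched.
        if jnp.issubdtype(x.dtype, jnp.floating):
            return x.astype(dtype)
        return x
    return jax.tree_util.tree_map(cast, params)

# Usage with a hypothetical checkpoint pytree:
# params = cast_params(params, jnp.bfloat16)
```

Note that casting the parameters does not by itself change the dtype of intermediate computations: many Flax modules take a separate `dtype` argument that controls the activation dtype, which is one reason a dumped HLO graph can still show f32 even after the weights are cast.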
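One quick way to verify which dtypes actually reach XLA is to print the lowered computation before running it. This uses standard JAX APIs; `forward` and `params` below are placeholders for the real model apply function and checkpoint.

```python
import jax
import jax.numpy as jnp

def forward(params, x):
    # Placeholder forward pass; substitute the real model apply function.
    return x @ params["w"]

params = {"w": jnp.ones((4, 4), jnp.bfloat16)}
x = jnp.ones((2, 4), jnp.bfloat16)

# The lowered (Stable)HLO text shows the dtype of every op (bf16 vs f32),
# so a stray f32 here means some input or module is still in fp32.
print(jax.jit(forward).lower(params, x).as_text())
```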