How to use gemma for multi-round conversations #52
The following code is copied from the Gemma Kaggle page:

```python
# Use the model
USER_CHAT_TEMPLATE = "<start_of_turn>user\n{prompt}<end_of_turn>\n"
MODEL_CHAT_TEMPLATE = "<start_of_turn>model\n{prompt}<end_of_turn>\n"

prompt = (
    USER_CHAT_TEMPLATE.format(
        prompt="What is a good place for travel in the US?"
    )
    + MODEL_CHAT_TEMPLATE.format(prompt="California.")
    + USER_CHAT_TEMPLATE.format(prompt="What can I do in California?")
    + "<start_of_turn>model\n"
)

# `prompt` already contains the chat-template markers, so pass it
# directly rather than wrapping it in USER_CHAT_TEMPLATE again.
model.generate(
    prompt,
    device=device,
    output_len=100,
)
```

This code shows how to use the chat templates, and it can inspire us to write a program for multiple rounds of dialogue, but I still have some doubts:
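One way to extend the snippet above into a proper multi-round loop is to keep a list of (role, text) turns and rebuild the full prompt from that history before each generation. The sketch below assumes the same chat-template strings as the snippet; `build_chat_prompt` is a hypothetical helper name, and the `model.generate` call is shown only as a commented-out placeholder since it depends on the locally deployed model.

```python
# Sketch of multi-round prompt building for Gemma, assuming the
# chat-template markers quoted from the Kaggle page above.
USER_CHAT_TEMPLATE = "<start_of_turn>user\n{prompt}<end_of_turn>\n"
MODEL_CHAT_TEMPLATE = "<start_of_turn>model\n{prompt}<end_of_turn>\n"


def build_chat_prompt(history):
    """Rebuild the full prompt from a list of (role, text) turns.

    history: list of ("user" | "model", str) pairs, oldest first.
    Returns all turns concatenated, ending with an open model turn
    so the model continues the conversation as the assistant.
    """
    parts = []
    for role, text in history:
        template = USER_CHAT_TEMPLATE if role == "user" else MODEL_CHAT_TEMPLATE
        parts.append(template.format(prompt=text))
    parts.append("<start_of_turn>model\n")  # model speaks next
    return "".join(parts)


# Example multi-round usage, mirroring the conversation above:
history = [
    ("user", "What is a good place for travel in the US?"),
    ("model", "California."),
    ("user", "What can I do in California?"),
]
prompt = build_chat_prompt(history)
# reply = model.generate(prompt, device=device, output_len=100)
# history.append(("model", reply))  # append the reply, then loop
```

The model itself is stateless: each round it sees only the text you send, so "memory" of earlier rounds exists only because the loop replays the accumulated history in the prompt.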
Hi @ranck626, does the above response answer your question?
tilakrayal added the type:support (Support issues) and stat:awaiting response (Status - Awaiting response from author) labels on Apr 24, 2024.
Thanks a lot for your great work!
I deployed gemma-2b locally and would like to understand how to hold multi-round conversations effectively.
I searched the internet and found that I can feed the previous conversation back in to get an answer for the next round, but I don't know exactly how this works inside Gemma. I would appreciate any pointers, or recommendations for existing tutorials.
I'm not a native English speaker, so please excuse any grammatical mistakes. Thank you for your attention.