Output with higher max_length is repetition of base text #19
Comments
Could you please try the instruction-tuned model instead? It should give you better results.
Thanks, with the instruction-tuned model the output is perfect. Btw, is there any reason why the gemma_2b_en model produced repetitive output instead of stopping?
It's kind of expected that the pre-trained models only try to complete text. One thing you could try is tuning the sampling parameters to see if you can get a bit more diversity in the output.
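The effect the suggestion above relies on can be illustrated without Gemma at all. The following is a toy sketch (a hand-written bigram table, not a real language model or the keras_nlp API): greedy decoding is deterministic, so if the argmax path contains a cycle, generation loops until max_length is hit, whereas sampling from the full distribution can take a lower-probability branch that reaches an end-of-sequence token.

```python
import random

# Toy next-token table: for each token, a distribution over successors.
# This is NOT Gemma -- just an illustration of why greedy decoding can
# fall into a repetition loop and how sampling breaks out of it.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"the": 0.8, "down": 0.2},   # argmax loops back to "the"
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
    "down": {"<eos>": 1.0},
    "away": {"<eos>": 1.0},
}

def generate(start, max_length, greedy=True, seed=0):
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_length:
        dist = NEXT[out[-1]]
        if greedy:
            # Always take the most likely token: deterministic, so any
            # cycle in the argmax path repeats until max_length is hit.
            tok = max(dist, key=dist.get)
        else:
            # Sample from the distribution: lower-probability branches
            # (e.g. "down") can be taken, which reach <eos> and stop.
            toks, probs = zip(*dist.items())
            tok = rng.choices(toks, probs)[0]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

greedy_out = generate("the", 12, greedy=True)
sampled_out = generate("the", 12, greedy=False, seed=3)
print(greedy_out)   # cycles the -> cat -> sat -> the -> ... until max_length
print(sampled_out)  # may exit the loop via "down"/"away" and stop early
```

In a real keras_nlp setup, the analogous knobs are the sampler choice and its parameters (e.g. temperature or top-k), which play the role of the `greedy` flag here.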
I am just happy to be a part of this chat
Yeah, it's expected to complete the text, but it still shouldn't repeat itself, right?
I've noticed the 2b model repeating itself as well, although I found it does this when the context of my prompt would be hard even for a human to figure out.
While generating any text with a specified value of max_length, the generated text keeps repeating several times until the output fills the value of max_length. An example of the above is using the following code
As you can observe, the sentence keeps repeating to fill max_length, while it should ideally stop once it has written the base text.
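Until the underlying decoding is changed, one crude workaround for the behavior described above is to truncate the output post hoc at the first repeated sentence. This is a hypothetical helper, not part of keras_nlp, and splitting on "." is deliberately naive:

```python
def truncate_repetition(text, sep="."):
    """Keep sentences until one repeats verbatim.

    A crude post-hoc fix for base-model output that loops
    instead of stopping; real text may need smarter splitting.
    """
    seen = set()
    kept = []
    for sent in text.split(sep):
        s = sent.strip()
        if not s:
            continue
        if s in seen:
            # First verbatim repeat: assume the model has started
            # looping and cut the output here.
            break
        seen.add(s)
        kept.append(s)
    return sep.join(kept) + sep if kept else ""

out = truncate_repetition(
    "Gemma is a family of open models. "
    "Gemma is a family of open models. "
    "Gemma is a family of open models."
)
print(out)  # -> "Gemma is a family of open models."
```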
The code was run on Kaggle with "gemma_2b_en" model
GPU - P100
To recreate the issue you can run the given code.