ValueError: please provide at least one prompt #95
Comments
I think this issue may be helpful: #35

I met the same error. Did you deal with it?

I think you probably need to decrease max_gen_toks.
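
For example (a hedged illustration reusing the evaluation command from this thread; the smaller budgets are arbitrary, not recommended values):

lm_eval --model vllm --model_args pretrained=ckpts/s1-20250310_141828,dtype=bfloat16,tensor_parallel_size=2 --tasks aime25_nofigures --batch_size auto --apply_chat_template --gen_kwargs "max_gen_toks=8000,max_tokens_thinking=8000"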

I tried this, but it didn't work. Also, the error occurs earlier.

There seems to be a specific condition that triggers this, but I don't have time to inspect it for now, so I gave up on fixing it directly and worked around it by re-training the model.

Same problem.

I think the error stems from here. If the first part of the condition is
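
A minimal, self-contained sketch of the suspected failure mode (hypothetical names and structure, not the repo's actual code): if every sequence has already exhausted its thinking-token budget, the list of prompts passed to the next vLLM generate call ends up empty, and vLLM raises the "please provide at least one prompt" ValueError. A guard on the prompt list avoids the crash:

def next_thinking_round(sequences, max_tokens_thinking):
    # Keep only sequences that still have thinking budget left.
    prompts = [s["text"] for s in sequences
               if s["tokens_used"] < max_tokens_thinking]
    if not prompts:
        # Guard: skip the extra generation round instead of calling
        # llm.generate([]) and crashing with the ValueError above.
        return []
    # In the real loop, llm.generate(prompts, sampling_params) runs here.
    return prompts

# Toy check: both sequences over budget -> empty list, no crash.
seqs = [{"text": "Q1 ... Wait", "tokens_used": 20000},
        {"text": "Q2 ... Wait", "tokens_used": 20001}]
print(next_thinking_round(seqs, max_tokens_thinking=20000))  # prints []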

Hello, thank you for sharing this amazing work.
I am trying to evaluate my model with:
lm_eval --model vllm --model_args pretrained=ckpts/s1-20250310_141828,dtype=bfloat16,tensor_parallel_size=2 --tasks aime25_nofigures --batch_size auto --apply_chat_template --output_path s1.1forcingignore1wait --log_samples --gen_kwargs "max_gen_toks=20000,temperature=0,temperature_thinking=0,max_tokens_thinking=20000,thinking_n_ignore=1,thinking_n_ignore_str=Wait"
However, the error still occurs.
I've tried changing max_gen_toks and max_tokens_thinking, but it does not help.
The problem goes away when I try a different model, and the run also works when no "Wait" forcing is used:
lm_eval --model vllm --model_args pretrained=ckpts/s1-20250310_141828,dtype=bfloat16,tensor_parallel_size=2 --tasks aime24_figures,aime24_nofigures --batch_size auto --output_path dummy --log_samples --gen_kwargs "max_gen_toks=20000"
However, budget forcing keeps producing the error.
I've also tried the inference code in the README: the first output prints without problems, but the error appears during budget forcing (see the sketch below).
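
For reference, a minimal hedged sketch of what budget forcing does at inference time (placeholder model path and prompt; this paraphrases the README's approach rather than reproducing its exact code): the first generation produces the thinking text, then "Wait" is appended and a second generate call runs on the extended prompt. That second call is where an empty prompt list would trigger the ValueError.

from vllm import LLM, SamplingParams

# Hypothetical sketch of budget forcing; not the README's exact code.
llm = LLM(model="ckpts/s1-20250310_141828", dtype="bfloat16")
params = SamplingParams(max_tokens=20000, temperature=0)

prompt = "..."  # chat-formatted question (placeholder)
first = llm.generate([prompt], params)
thinking = first[0].outputs[0].text

# Budget forcing: append "Wait" and generate again on the extended prompt.
extended = prompt + thinking + "Wait"
second = llm.generate([extended], params)
print(second[0].outputs[0].text)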
How can I solve it? Thank you in advance.