
Finetuning model with max_length=4096, but inference gives `exceeds the model max_length: 2048` #861

Open
piqiuni opened this issue May 2, 2024 · 1 comment


piqiuni commented May 2, 2024

Describe the bug

Finetuned the model ModelType.qwen_vl_chat with max_length=4096, but inference with the resulting checkpoint raises an `exceeds the model max_length: 2048` error.

Token lengths: history: 421, current query: 1630

Traceback (most recent call last):
  File "/home/ldl/pi_code/swift/pi_code/infer_qwen_vl.py", line 83, in <module>
    response, _ = inference(model, template, value, history)
  File "/home/ldl/miniconda3/envs/swift/lib/python3.10/site-packages/swift/llm/utils/utils.py", line 748, in inference
    raise AssertionError('Current sentence length exceeds'
AssertionError: Current sentence length exceedsthe model max_length: 2048
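For reference, a minimal sketch of an inference script that hits this assertion, assuming swift's Python API from around this release (get_model_tokenizer, get_template, inference, Swift.from_pretrained); the checkpoint path and query are placeholders, not taken from the report:

```python
# Minimal sketch, assuming ms-swift's Python inference API; the checkpoint
# path and query below are placeholders.
from swift.llm import (ModelType, get_default_template_type,
                       get_model_tokenizer, get_template, inference)
from swift.tuners import Swift

model_type = ModelType.qwen_vl_chat
template_type = get_default_template_type(model_type)

# Load the base model, then attach the finetuned checkpoint.
model, tokenizer = get_model_tokenizer(model_type,
                                       model_kwargs={'device_map': 'auto'})
model = Swift.from_pretrained(model, 'output/qwen-vl-chat/vx-xxx/checkpoint-xxx',
                              inference_mode=True)
template = get_template(template_type, tokenizer)

history = []
query = '...'  # long prompt; history + query together exceed 2048 tokens
# Raises AssertionError: the length check still uses the base model's
# 2048-token limit, not the max_length=4096 used for finetuning.
response, history = inference(model, template, query, history)
```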



piqiuni commented May 2, 2024

It seems I can simply set `model.config.seq_length = 4096`, and the output then works fine.
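A sketch of that workaround, assuming the model and template are loaded as in the snippet above: Qwen-VL keeps its context window in the config field seq_length, which the length check appears to read, so raising it to the finetuning max_length before calling inference avoids the assertion.

```python
# Workaround sketch: Qwen-VL stores its context window in config.seq_length,
# which the inference length check appears to read. Raise it to match the
# max_length used during finetuning.
model.config.seq_length = 4096

response, history = inference(model, template, query, history)  # no longer asserts
```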
