[Bug] LogitsWarper deprecated in transformers? (trying to run Qwen/Qwen2.5-VL-72B-Instruct) #3100
Comments
It should be ok to remove the class inheritance in
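The suggested fix can be sketched as follows. This is a minimal, self-contained illustration with placeholder names, not lmdeploy's actual code: newer transformers releases merged `LogitsWarper` into `LogitsProcessor`, which shares the same `__call__(input_ids, scores)` interface, so a class that previously inherited from `LogitsWarper` can inherit from `LogitsProcessor` instead (or from nothing at all).

```python
# Sketch of the suggested fix (illustrative names, not lmdeploy's code).
# Stand-in for transformers.LogitsProcessor, so this sketch runs standalone:
class LogitsProcessor:
    def __call__(self, input_ids, scores):
        raise NotImplementedError


# Before: class TemperatureWarper(LogitsWarper): ...
# After:  inherit from LogitsProcessor, same __call__ signature.
class TemperatureWarper(LogitsProcessor):
    def __init__(self, temperature):
        self.temperature = temperature

    def __call__(self, input_ids, scores):
        # Plain lists stand in for torch tensors in this sketch.
        return [s / self.temperature for s in scores]
```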
Thank you, that does seem to have solved the issue. Though I do get another issue:
Sorry, after double-checking the documentation, I found that Qwen2.5-VL has not been supported yet. We will add support ASAP.
Will you support Qwen2.5-VL in TurboMind? Thank you!
Hi, any progress on this feature?
Qwen2.5-VL-72B on lmdeploy would be life-changing.
@CUHKSZzxy is working on it.
#3194 implements Qwen2.5-VL in the PyTorch engine.
After removing the class inheritance, the model's output seems weird.
@CUHKSZzxy could you help investigate this issue?
@kingwe-stack Hi, could you please provide more details about the command/code you're using and specify which model you're working with? This information will help me reproduce the issue and assist in resolving it.
Describe the bug
There is some hype around Qwen/Qwen2.5-VL-72B-Instruct due to its benchmark results, so I wanted to test it, but it needs the latest transformers. They recommend this command in their README:
So I tried it using this Docker image:
with this command:
But got this error:
I believe it is related to the deprecation of LogitsWarper in favor of LogitsProcessor: huggingface/transformers#32626
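For code that must run against both old and new transformers releases, one possible workaround is to resolve the base class at import time rather than hard-coding `LogitsWarper`. The helper below is my own sketch (the function name is not an lmdeploy or transformers API); it only assumes that older transformers exposes `LogitsWarper` while newer releases keep `LogitsProcessor`:

```python
def resolve_base_processor(transformers_module):
    """Pick whichever logits base class the installed transformers provides.

    Older releases expose LogitsWarper; newer ones removed it after merging
    its role into LogitsProcessor (huggingface/transformers#32626), so fall
    back to LogitsProcessor when LogitsWarper is gone.
    """
    base = getattr(transformers_module, "LogitsWarper", None)
    return base if base is not None else transformers_module.LogitsProcessor
```

A custom processor could then be declared as `class MyWarper(resolve_base_processor(transformers)): ...`, keeping one code path for both transformers versions.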