Qwen2.5-VL failing #192
This was fixed. The chat template was missing but it's back.
Just download it again.
I have tried deleting the model and downloading it again but I am getting the same issue unfortunately. https://huggingface.co/mlx-community/Qwen2.5-VL-3B-Instruct-3bit
I have checked and the chat template is included with the downloaded model files, so it seems that something else is causing this error.
Could you share a reproducible script? Also, please share the PyPI versions.
Same error here. Running:
Error:
This appears to be because the Qwen2.5-VL models ship their chat template in a separate `chat_template.json` file, which is not picked up when the processor is loaded. This can be temporarily worked around by copying the chat template from within the JSON file I linked (for example) and saving it where the processor expects to find it. I'll post a minimal reproducible test in a minute.
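To make that workaround concrete, here is a minimal sketch (not from the thread) that assumes the intent is to copy the template out of `chat_template.json` and embed it in `tokenizer_config.json` inside the locally downloaded model directory, so the processor finds it in the usual place; the path below is a placeholder:

```python
import json
from pathlib import Path

# Placeholder path: point this at the locally downloaded model snapshot.
model_dir = Path("/path/to/local/Qwen2.5-VL-3B-Instruct-3bit")

# Extract the chat template that ships as a separate file.
chat_template = json.loads((model_dir / "chat_template.json").read_text())["chat_template"]

# Embed it in tokenizer_config.json so the tokenizer/processor picks it up.
tok_cfg_path = model_dir / "tokenizer_config.json"
tok_cfg = json.loads(tok_cfg_path.read_text())
tok_cfg["chat_template"] = chat_template
tok_cfg_path.write_text(json.dumps(tok_cfg, indent=2))
```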
Alternately, this seems to work without having to modify downloaded files. Change the function `get_model_and_processors`:

```python
import json

# These helpers come from mlx_vlm (imports added here for completeness).
from mlx_vlm.utils import get_model_path, load, load_config


def get_model_and_processors(model_path, adapter_path):
    path = get_model_path(model_path)
    # Read the chat template that ships in a separate chat_template.json file
    with open(path / "chat_template.json") as f:
        templ = json.loads(f.read())
    templ = templ["chat_template"]
    config = load_config(model_path, trust_remote_code=True)
    # Pass the template explicitly so the processor picks up the correct one
    model, processor = load(
        model_path,
        adapter_path=adapter_path,
        lazy=False,
        trust_remote_code=True,
        chat_template=templ,
    )
    return model, processor, config
```

We explicitly load and specify the chat template when loading the processor so it picks up the correct one. Just verified working on the latest release.
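For reference, a call to the patched loader might look like this; the model name is just the one mentioned earlier in the thread, and the snippet assumes the model has already been downloaded:

```python
# Hypothetical usage of the patched get_model_and_processors above.
model, processor, config = get_model_and_processors(
    "mlx-community/Qwen2.5-VL-3B-Instruct-3bit", adapter_path=None
)
```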
That's interesting, because if you install transformers from source (v4.49.0.dev0) it works without any changes.
v4.48.1 indeed causes the issue. But if you install transformers from source it should work 👌🏽
Yes, indeed it works with transformers from source. Thanks.
Thanks to that tip this worked for me:

```bash
uv run --with 'numpy<2' \
  --with 'git+https://github.com/huggingface/transformers' \
  --with mlx-vlm \
  python -m mlx_vlm.generate \
  --model mlx-community/Qwen2.5-VL-7B-Instruct-8bit \
  --max-tokens 100 \
  --temp 0.0 \
  --prompt "Describe this image." \
  --image path-to-image.png
```

Result on my blog: https://simonwillison.net/2025/Jan/27/qwen25-vl-qwen25-vl-qwen25-vl/#qwen-vl-mlx-vlm
Most welcome! Great article 🔥🙌🏽 I would love it if we had a cookbook like that for mlx-vlm. We already have a few recipes (here) but we definitely need more.
This model is one of the few that blows up if you give it a large pic:
After digging into the error messages, it is because the processor class for Qwen2.5-VL is "Qwen2_5_VLProcessor". However, this class is not included in the current transformers release (v4.48.2); it is only on the dev branch (v4.49.0.dev0), so installing from GitHub works.
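As a quick sanity check, one way (not from this thread) to confirm whether the installed transformers build has the class is to probe for it; this assumes transformers exposes `Qwen2_5_VLProcessor` at the top level once a version with Qwen2.5-VL support is installed:

```python
import transformers

print(transformers.__version__)
# False on v4.48.x, True once a version containing Qwen2.5-VL support is installed.
print(hasattr(transformers, "Qwen2_5_VLProcessor"))
```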
Same error; installing transformers from GitHub works:
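The exact command was not captured above; installing from source presumably looks something like this, mirroring the URL used in the uv example earlier:

```bash
pip install git+https://github.com/huggingface/transformers
```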
Issue has been resolved. Closing it for now. |
I am getting this error when using the new Qwen2.5-VL models.
Command: