
V2.2 support for Custom-Converted OpenVINO Models? #213

Open
sd1471123 opened this issue Mar 5, 2025 · 9 comments

sd1471123 commented Mar 5, 2025

In version V2.2 there appears to be support for OpenVINO. Following the instructions in the OpenVINO documentation, I attempted to convert the deepseek-ai/DeepSeek-R1-Distill-Qwen-14B model to OpenVINO format. However, neither passing --weight-format fp16 nor keeping the default int8 configuration worked: the converted models cannot be used in AI Playground.

To clarify, "cannot be used" means that after specifying the model path, these models still do not appear in the model list; only the default four models are shown.

I am a Chinese user utilizing DeepSeek translation. Please forgive any translation errors that may occur.
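For reference, the conversion described above is typically done with Hugging Face's optimum-cli exporter, which is what the OpenVINO documentation points to; a minimal sketch of the command (the output directory name is illustrative) would be:

```shell
# Export the model to OpenVINO IR with 8-bit weight compression
# (use --weight-format fp16 for the half-precision variant instead).
optimum-cli export openvino \
  --model deepseek-ai/DeepSeek-R1-Distill-Qwen-14B \
  --weight-format int8 \
  DeepSeek-R1-Distill-Qwen-14B-ov
```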

sd1471123 changed the title from "OpenVINO" to "V2.2 support for Custom-Converted OpenVINO Models?" Mar 5, 2025
sd1471123 (Author):

[screenshots]

sd1471123 (Author):

[screenshots]

bobduffy (Contributor) commented Mar 5, 2025

Thanks for reporting. I've alerted the OpenVINO team and will get back with their response.
Be sure the converted model is in the openvino folder and refresh the list.

[screenshot]
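A quick way to confirm that the copied folder is a complete export before refreshing the list (the folder name is illustrative, and the file list assumes the usual optimum-cli export layout):

```shell
# Check that the model folder contains the OpenVINO IR pair and config.
for f in openvino_model.xml openvino_model.bin config.json; do
  [ -f "DeepSeek-R1-Distill-Qwen-14B-ov/$f" ] || echo "missing: $f"
done
```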


sd1471123 commented Mar 6, 2025

> Thanks for reporting. I've alerted the OpenVINO team and will get back with their response. Be sure the converted model is in the openvino folder and refresh the list.

[screenshot]

Does it have to be placed in this specific folder, unlike other models where I can set the file location in the AI Playground backend? In AI Playground my other models are saved in different folders, and I haven't encountered any issues after changing the folder location in the settings. If this folder is mandatory, I'll try it after work tonight. Thank you.


sd1471123 commented Mar 6, 2025

> Thanks for reporting. I've alerted the OpenVINO team and will get back with their response. Be sure the converted model is in the openvino folder and refresh the list.

[screenshot]

When I placed it in this folder, it was recognized! However, after I asked a question and the model was loaded into video memory, an error message appeared.

[screenshots]

bobduffy (Contributor) commented Mar 7, 2025

It appears you are out of memory; the model may be too large to fit in available VRAM. Check the OpenVINO documentation on quantizing the model.
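As a sketch of the suggestion above, assuming the model was exported with optimum-cli, re-exporting with 4-bit weight compression considerably shrinks the VRAM footprint compared to int8 (the output directory name is illustrative):

```shell
# Re-export with 4-bit weight compression to reduce memory use.
optimum-cli export openvino \
  --model deepseek-ai/DeepSeek-R1-Distill-Qwen-14B \
  --weight-format int4 \
  DeepSeek-R1-Distill-Qwen-14B-int4-ov
```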


sd1471123 commented Mar 8, 2025

> It appears you are out of memory; the model may be too large to fit in available VRAM. Check the OpenVINO documentation on quantizing the model.

I tested the DeepSeek-R1-Distill-Qwen-7B model converted to int8 with OpenVINO again. This time there was still plenty of VRAM remaining, but the same error occurred.

[screenshots]

brownplayer commented
Did you convert directly from safetensors? I don't think OpenVINO supports direct conversion of safetensors to the OpenVINO format. Do you have DeepSeek-R1 files in a format supported by OpenVINO?


sd1471123 commented Mar 9, 2025

> Did you convert directly from safetensors? I don't think OpenVINO supports direct conversion of safetensors to the OpenVINO format. Do you have DeepSeek-R1 files in a format supported by OpenVINO?

https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
