Issues: intel/ipex-llm
Docker container crash due to "No supported config format found in default_model_path"
#12954 opened Mar 7, 2025 by Airren
Please upgrade to the latest Ollama version to support splitting models between GPU and CPU.
#12950 opened Mar 7, 2025 by baoduy
RuntimeError: "qlinear_forward_xpu" not implemented for 'Byte'
#12938 opened Mar 5, 2025 by tripzero
ollama-0.5.4-ipex-llm A770 16G Deepseek-R1:14b / Deepseek-R1:32b configuration issue
#12897 opened Feb 25, 2025 by XL-Qing
Support for Transformers 4.48+ to Address Security Vulnerabilities
#12889 opened Feb 24, 2025 by hkarray
Attempting to run vLLM on CPU results in an error almost immediately.
user issue
#12873 opened Feb 23, 2025 by HumerousGorgon
llama.cpp server UR_RESULT_ERROR_OUT_OF_RESOURCES error
user issue
#12872 opened Feb 22, 2025 by easyfab