When will AMD GPUs be supported? #274
Comments
Same question here. Alternatively, it would be great if chatglm.cpp could be merged into llama.cpp — I'd really like to run it on AMD.
I don't have an AMD GPU, so I have no way to test this. When I have time, I'll gradually migrate to llama.cpp.
llama.cpp supports running on AMD — I've run llama.cpp on AMD successfully. Looking forward to chatglm.cpp support!
How about upgrading ggml to the latest version first? The latest version appears to support HIP. The current version seems to have the relevant `#define`s in the code, but the CMake files haven't been updated accordingly.
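For reference, enabling ggml's HIP (ROCm) path after such an upgrade would look roughly like the sketch below. The option names (`GGML_HIPBLAS`, `AMDGPU_TARGETS`) and compiler paths are assumptions based on how llama.cpp's ROCm build is typically configured; chatglm.cpp's CMake files may not expose these options yet, which is exactly the point of the comment above.

```shell
# Hedged sketch: building a ggml-based project with the HIP (ROCm) backend.
# Option names mirror llama.cpp's ROCm build; chatglm.cpp may differ.
# Assumes ROCm is installed under /opt/rocm and the GPU target is known
# (gfx1030 here is an example for RDNA2 cards — adjust for your hardware).
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
  cmake -B build -DGGML_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1030
cmake --build build -j
```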
How can AMD hardware be supported?