Metal failure after early March versions of server startup loading the model #6020
Comments
Give #6015 a try
@groovybits how did you compile and install llama-cpp? I suspect the binary is in […]. I've had a similar error and tracked the problem down to the compiled […]
No, that didn't fix it alone; yet this did after using your branch...
Then I switched back, and with the newest main branch in llama.cpp that seems to be all I needed. Odd, but now I can run the newest version without the complaint about that file missing. So the missing binary needs to be in the bin directory, and the header file it uses needs to be in the bin directory as well? It seems to be that way here. Thank you so much for pointing this branch out and having it heal my llama.cpp :D
Also note that the main branch in llama.cpp does copy this file into bin for me. Yet nothing copied over the header that it seems to need in the bin directory.
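For anyone hitting the same error, here is a minimal sketch of the manual fix described above. It assumes the runtime-loaded shader is `ggml-metal.metal` and the header it includes is `ggml-common.h` (both names are my assumption; the thread does not spell them out, so check the actual error message for the file it wants). Temp directories stand in for the real paths so the sketch runs anywhere; substitute your llama.cpp checkout and your bin directory (e.g. `/usr/local/bin`).

```shell
# SRC stands in for the llama.cpp source checkout,
# DST for the directory that holds the server binary.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Stand-in files; in a real checkout these already exist in the source tree.
touch "$SRC/ggml-metal.metal" "$SRC/ggml-common.h"

# Copy BOTH the shader and the header it includes next to the binary,
# since the shader is compiled at runtime and resolves its #include locally.
cp "$SRC/ggml-metal.metal" "$SRC/ggml-common.h" "$DST/"

ls "$DST"
```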
Needing a header file in /usr/local/bin seems a little odd - is something using the wrong path somewhere?
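One way to sidestep the path question entirely: if I recall correctly, llama.cpp's Metal backend honours a `GGML_METAL_PATH_RESOURCES` environment variable that overrides where it looks for its shader files (verify against `ggml-metal.m` in your checkout, since this is an assumption about your version). A hedged sketch:

```shell
# Assumed workaround: point the Metal backend at the source tree instead of
# copying shader files into the bin directory. The variable name is based on
# llama.cpp's Metal loader and should be confirmed for your revision.
export GGML_METAL_PATH_RESOURCES=/path/to/llama.cpp

# Then launch the server as usual, e.g.:
# ./server -m models/dolphin-2.7-mixtral-8x7b.Q5_K_M.gguf

echo "$GGML_METAL_PATH_RESOURCES"
```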
This issue was closed because it has been inactive for 14 days since being marked as stale.
Version: 8030da7
Running on a Mac M2 Ultra Studio with 192 GB RAM on macOS, using the model dolphin-2.7-mixtral-8x7b.Q5_K_M.gguf. This worked up until the last week or so, through version c2101a2; I haven't tracked down which commit after that one breaks it on my system. It works when I use versions from around the first week of March / end of February.
If the bug concerns the server, please try to reproduce it first using the server test scenario framework.