Doesn't work with new ggml versions #199
Yes, I hit the same problem on my machine:

```
llama_model_load: loading model from '../ggml-model-q4_0.bin' - please wait ...
llama_model_load: invalid model file '../ggml-model-q4_0.bin' (bad magic)
main: failed to load model from '../ggml-model-q4_0.bin'
```

Maybe we need a script to convert new models back to the old format.
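A quick way to confirm that the error is a format mismatch (rather than a corrupt download) is to read the file's first four bytes and compare them against the magics ggml loaders check for. The sketch below is a minimal, hedged example; the constants are taken from llama.cpp's file-format history, and the labels attached to them are my own descriptions, not official names:

```python
import struct

# Magic values seen across ggml model-file revisions (assumed from
# llama.cpp history; the descriptions are informal labels).
KNOWN_MAGICS = {
    0x67676D6C: "ggml (legacy, unversioned - what alpaca.cpp expects)",
    0x67676D66: "ggmf (versioned format)",
    0x67676A74: "ggjt (later mmap-friendly format)",
}

def check_magic(path):
    """Return the file's leading uint32 and a best-guess format label."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return magic, KNOWN_MAGICS.get(magic, "unknown magic")

if __name__ == "__main__":
    import sys
    magic, label = check_magic(sys.argv[1])
    print(f"magic = 0x{magic:08x}: {label}")
```

If the script reports anything other than the legacy magic, the model was produced by a newer conversion script and this fork's loader will refuse it with "bad magic".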
The upstream version of llama.cpp now includes options for loading Alpaca models and running them interactively. I'm not sure it's worth maintaining this as a separate fork anymore. Are there any improvements from alpaca.cpp that we should port back into the main trunk of llama.cpp?
Failed to load new ggml versions - bad magic