
Doesn't work with new ggml versions #199

Open
tarunchand opened this issue Apr 4, 2023 · 2 comments

Comments

@tarunchand

Loading models converted with newer ggml versions fails with a "bad magic" error.

@Yiyi-philosophy

Yes, I hit the same issue on my machine:

llama_model_load: loading model from '../ggml-model-q4_0.bin' - please wait ...
llama_model_load: invalid model file '../ggml-model-q4_0.bin' (bad magic)
main: failed to load model from '../ggml-model-q4_0.bin'
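To confirm this is a format-version mismatch rather than a corrupt download, you can inspect the file's first four bytes. A minimal sketch follows; the magic constants are assumptions based on llama.cpp's loader around this time ('ggml' unversioned, 'ggmf' versioned, 'ggjt' mmap-able), so treat them as illustrative:

```python
import struct

# Candidate ggml file magics (read as little-endian uint32).
# These values are assumptions taken from llama.cpp circa early 2023.
GGML_MAGICS = {
    0x67676D6C: "ggml (unversioned, old format)",
    0x67676D66: "ggmf (versioned format)",
    0x67676A74: "ggjt (mmap-able format)",
}

def check_magic(path: str) -> str:
    """Read the first 4 bytes of a model file and name its format."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return GGML_MAGICS.get(magic, f"bad magic: 0x{magic:08x}")
```

If the reported magic matches one of the known values, the file is fine and the loader is simply too old (or too new) for that format.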

Maybe we need a script to convert models back to the old ggml format.
By the way, llama.cpp is changing so fast... why are so many developers getting involved in it?

@HanClinto

Upstream llama.cpp now includes options for loading Alpaca models and running them in an interactive instruction mode.
https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca
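For reference, the invocation in that README looks roughly like this (model path and prompt file are placeholders; the `-ins` flag enables instruction mode, per the linked docs):

```shell
# Run llama.cpp in Alpaca instruction mode (paths are placeholders).
./main -m ./models/ggml-alpaca-7b-q4.bin --color -f ./prompts/alpaca.txt -ins
```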

I'm not sure that it's worth maintaining this as a separate fork anymore. Are there any improvements from alpaca.cpp that we should port back into the main trunk of llama.cpp?
