Llama 2: Error while converting model weights to run with Hugging Face #1075
Comments
Could you try running the command directly without `--llama_version 2`, as that may not be a valid argument.
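As background for why an unsupported flag would fail outright: if the conversion script's argument parser does not define `--llama_version`, argparse exits with an error rather than ignoring the flag. A minimal sketch (the argument set here is hypothetical, not the real script's):

```python
import argparse

# Hypothetical parser with only the arguments the maintainer expects;
# --llama_version is deliberately NOT registered.
parser = argparse.ArgumentParser()
parser.add_argument("--input_dir")
parser.add_argument("--model_size")
parser.add_argument("--output_dir")

try:
    parser.parse_args(["--input_dir", "x", "--llama_version", "2"])
except SystemExit:
    # argparse prints "unrecognized arguments: --llama_version 2"
    # to stderr and exits; here we just note the rejection.
    print("argparse rejected the unknown --llama_version flag")
```

If the script runs the same way with and without the flag, the parser most likely does accept it (or silently tolerates it), which matches the reporter's observation below.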
Thank you for sharing your input. However, there is no difference in the output. I had added this parameter after checking the source code of convert_llama_weights_to_hf.py. The reason for providing it was: "I thought that by giving the model version the script might work and the JSON error might go away."
I am not able to reproduce the issue on my side. Could you please provide the exact steps you followed and the entire stack trace? Thanks!
I had followed the exact steps listed here.
I'm getting the error at the last step; the traceback is shared above. Thank you for the prompt response.
Thank you. Could you check if the
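Since the reporter mentions a JSON error, one plausible culprit is a missing or corrupted `params.json` in the checkpoint directory, which the converter reads from `--input_dir`. A hedged pre-check sketch (the helper name and check are assumptions, not part of the conversion script):

```python
import json
import os

def check_params(input_dir: str) -> bool:
    """Hypothetical sanity check: verify params.json exists and parses."""
    path = os.path.join(input_dir, "params.json")
    if not os.path.isfile(path):
        print(f"missing {path}")
        return False
    try:
        with open(path) as f:
            json.load(f)
    except json.JSONDecodeError as e:
        print(f"invalid JSON in {path}: {e}")
        return False
    return True

# Example: check_params("./llama-2-7b-chat")
```

A truncated or re-encoded download commonly produces exactly this kind of failure, so re-downloading the weights is worth trying if the check fails.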
Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the FAQs and existing/past issues
Describe the bug
I'm following the steps listed here: https://ai.meta.com/blog/5-steps-to-getting-started-with-llama-2/ I've been able to complete a couple of steps from this. However, while trying to follow the "convert the model weights to run with Hugging Face" step, I get the following error.
Command:
pip install protobuf && python3 $TRANSFORM --input_dir ./llama-2-7b-chat --model_size 7B --output_dir ./llama-2-7b-chat-hf --llama_version 2
Output
Runtime Environment
llama-2-7b-chat