
Llama2 Error while converting model weights to run with Hugging Face #1075

Open
@neerajg5

Description


Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the FAQs and existing/past issues

Describe the bug

I'm following the steps listed here: https://ai.meta.com/blog/5-steps-to-getting-started-with-llama-2/. I've been able to complete a couple of the steps. However, when I try the "convert the model weights to run with Hugging Face" step, I get the following error.

Command:
pip install protobuf && python3 $TRANSFORM --input_dir ./llama-2-7b-chat --model_size 7B --output_dir ./llama-2-7b-chat-hf --llama_version 2
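For context, `$TRANSFORM` is the environment variable from the steps in Meta's blog post; judging by the traceback below, it resolves to the `convert_llama_weights_to_hf.py` script shipped inside the installed `transformers` package. A hedged sketch of how it can be set (the exact site-packages path depends on your install):

```shell
# Assumption: TRANSFORM should point at transformers' Llama conversion
# script. Locating it via the installed package avoids hard-coding a path.
TRANSFORM=$(python3 -c "import transformers, os; print(os.path.join(os.path.dirname(transformers.__file__), 'models/llama/convert_llama_weights_to_hf.py'))")
echo "$TRANSFORM"
```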

Output

Traceback (most recent call last):
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 339, in <module>
    main()
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 326, in main
    write_model(
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 94, in write_model
    params = read_json(os.path.join(input_base_path, "params.json"))
  File "/home/neeraj/.local/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 75, in read_json
    return json.load(f)
  File "/usr/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
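`Expecting value: line 1 column 1 (char 0)` from `json.load` typically means the file being parsed is empty or does not start with JSON at all, so the `params.json` inside `--input_dir` is likely empty or corrupt (for example, from an interrupted download). A minimal check, assuming the `./llama-2-7b-chat` directory from the command above:

```python
import json
import os

# Assumption: this matches the --input_dir passed to the conversion script.
input_dir = "./llama-2-7b-chat"
path = os.path.join(input_dir, "params.json")

try:
    with open(path) as f:
        params = json.load(f)
    print("params.json looks valid:", params)
except FileNotFoundError:
    print("params.json is missing - re-run the download script")
except json.JSONDecodeError as e:
    # An empty or truncated file produces exactly the error in the traceback.
    print(f"params.json is corrupt ({os.path.getsize(path)} bytes): {e}")
```

If the file turns out to be zero bytes, re-downloading the weights is the usual fix.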

Runtime Environment

  • Model: llama-2-7b-chat
  • Using via huggingface?: no
  • OS: Ubuntu 22.04.3 LTS
  • GPU VRAM:
  • Number of GPUs:
  • GPU Make: Intel Iris Xe Graphics Family

Labels

model-usage (issues related to how models are used/loaded), needs-more-information (issue is not fully clear to be acted upon)
