ValueError: The checkpoint you are trying to load has model type llava_mistral but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
#6781 · Open · dainini opened this issue on Jan 30, 2025 · 0 comments
```
llamafactory-cli train examples/train_lora/llavamed1_5_lora_dpo.yaml
```

```
[2025-01-30 21:19:56,796] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[INFO|2025-01-30 21:20:00] llamafactory.hparams.parser:355 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, compute dtype: torch.bfloat16
[INFO|configuration_utils.py:673] 2025-01-30 21:20:00,204 >> loading configuration file /hdd0/dain/models/llava-med-v1.5-mistral-7b/config.json
Traceback (most recent call last):
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1023, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
                   ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 725, in __getitem__
    raise KeyError(key)
KeyError: 'llava_mistral'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dain/.conda/envs/llama-factory/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/llamafactory/cli.py", line 112, in main
    run_exp()
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/llamafactory/train/tuner.py", line 56, in run_exp
    run_dpo(model_args, data_args, training_args, finetuning_args, callbacks)
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/llamafactory/train/dpo/workflow.py", line 43, in run_dpo
    tokenizer_module = load_tokenizer(model_args)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/llamafactory/model/loader.py", line 69, in load_tokenizer
    config = load_config(model_args)
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/llamafactory/model/loader.py", line 119, in load_config
    return AutoConfig.from_pretrained(model_args.model_name_or_path, **init_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dain/.conda/envs/llama-factory/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1025, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type `llava_mistral` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
System Info
llamafactory version: 0.9.1

Others
No response
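Since both LLaMA-Factory 0.9.1 and stock Transformers resolve checkpoints through `AutoConfig`, one generic workaround for an unrecognized custom architecture is to register the checkpoint's config class before loading. The sketch below demonstrates only the registration mechanism with a stub class; in practice the real `LlavaMistralConfig` would have to come from the LLaVA-Med codebase, and this issue does not confirm which fix (registration, a Transformers upgrade, or a converted checkpoint) the maintainers recommend:

```python
from transformers import AutoConfig, PretrainedConfig

# Stub standing in for the real config class, which would be imported
# from the LLaVA-Med codebase (an assumption, not confirmed by this issue).
class LlavaMistralConfig(PretrainedConfig):
    model_type = "llava_mistral"

# Teach AutoConfig the custom model type before any loading happens.
AutoConfig.register("llava_mistral", LlavaMistralConfig)

# AutoConfig can now resolve the custom model type.
cfg = AutoConfig.for_model("llava_mistral")
print(type(cfg).__name__)
```

Note that registration must run in the same process before `llamafactory-cli` calls `AutoConfig.from_pretrained`, so on its own it does not help a stock CLI invocation; it is mainly useful when driving the training code from your own script.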