In the original finetune.sh script, the weights were split across two arguments: model_name_or_path (the path to the language model) and pretrain_mm_mlp_adapter (the path to the projector). However, in LanguageBind/Video-LLaVA-7B, the weights of all modules are packaged together in a single checkpoint. In this case, how should finetune.sh be modified for fine-tuning?
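For what it's worth, in LLaVA-style training scripts a merged checkpoint is usually handled by pointing model_name_or_path at it and dropping pretrain_mm_mlp_adapter altogether, since the projector weights load with the rest of the model. A minimal sketch, assuming the launch command and flag names of the original finetune.sh (other arguments elided, not verified against this repo):

```shell
# Sketch only: LanguageBind/Video-LLaVA-7B bundles the language model and
# projector, so use it as model_name_or_path and omit
# --pretrain_mm_mlp_adapter; the projector is loaded from the checkpoint.
deepspeed llava/train/train_mem.py \
    --model_name_or_path LanguageBind/Video-LLaVA-7B \
    # --pretrain_mm_mlp_adapter ...   <-- removed
    ...  # remaining arguments as in the original finetune.sh
```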