Hi there,
I want to log my model's accuracy after each epoch, as well as its final accuracy at the end of training, but I cannot find a simple way to do this.
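For context, in a plain training loop I would just call `wandb.log` once per epoch, roughly like the sketch below (`train_one_epoch`, `evaluate`, `model`, and the project name are placeholders for my own code), but here the whole loop is hidden inside `train_mem.py`:

```python
import wandb

wandb.init(project="llava-finetune")  # placeholder project name

num_epochs = 10  # matches --num_train_epochs below

for epoch in range(num_epochs):
    train_one_epoch(model)  # placeholder: one pass over the training data
    acc = evaluate(model)   # placeholder: my own accuracy computation
    wandb.log({"accuracy": acc, "epoch": epoch})  # logged once per epoch

wandb.log({"final_accuracy": acc})  # final value after the last epoch
wandb.finish()
```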
I am following this tutorial.
My code is as follows:
import wandb
wandb.login()

!deepspeed LLaVA/llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed LLaVA/scripts/zero3.json \
    --model_name_or_path liuhaotian/llava-v1.5-13b \
    --version v1 \
    --data_path ./dataset/train/dataset.json \
    --image_folder ./dataset/images \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir ./checkpoints/llava-v1.5-13b-task-lora \
    --num_train_epochs 10 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
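One thing I noticed while reading the flags: with `--evaluation_strategy "no"`, the underlying Hugging Face `Trainer` never runs an evaluation loop, so as far as I can tell only the training loss reaches wandb. If I understand the `Trainer` callback API correctly, a per-epoch metric could in principle be logged with something like the sketch below (`compute_accuracy` is a placeholder for my own eval routine, and this assumes I can register a callback on the `Trainer`, which I have not found a way to do through this script):

```python
import wandb
from transformers import TrainerCallback

class AccuracyCallback(TrainerCallback):
    """Sketch: log a custom accuracy metric at the end of every epoch."""
    def on_epoch_end(self, args, state, control, model=None, **kwargs):
        acc = compute_accuracy(model)  # placeholder: my own accuracy routine
        wandb.log({"accuracy": acc, "epoch": state.epoch})
```

Since training is launched through the `deepspeed` CLI rather than my own script, I don't see where I could attach such a callback, which is part of why I'm asking here.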
I have already asked the wandb and DeepSpeed teams about this; they were unable to help and advised me to open an issue here.
Any help or advice would be appreciated.
Any updates, @haotian-liu?