About OOM During Training and Questions Regarding Attn #46
Comments
1, 2, 3: it is just an example script; the actual training script is here: https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh
Thank you very much for your reply. I'm not very familiar with the training code, but I will give it a try. Thanks again!
Does this solution work for you? I am also working on this.
@Muennighoff is loading the model across multiple GPUs supported? I tried removing torchrun and running with plain python on multiple GPUs, but I got a device mismatch error: input tensors on cuda:0 while the model is spread across different CUDA devices.
I recommend using torchrun for multiple GPUs; I haven't tested it without torchrun on multiple GPUs, but it should also work, perhaps after some small modifications.
@Muennighoff I am currently working on fine-tuning the 7B model on multiple GPUs; the 7B model doesn't fit on one 80GB GPU, so running it on parallel GPUs like your demo does not seem possible. I added device_map="auto" to use multiple GPUs, but I keep getting the "tensors on different devices" issue. Do you have any idea about that, or any recommendation for fine-tuning 7B with n × 80GB GPUs?
The same problem here, I cannot train with …
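For anyone hitting the device-mismatch error above: with device_map="auto" the model is sharded layer by layer across GPUs, and the inputs only need to sit on the device that holds the input embeddings, not on cuda:0. Below is a minimal sketch (mine, not from the repo; the checkpoint name and dtype are assumptions) for inference; for actual multi-GPU training, torchrun with the repo's training script remains the recommended route:

```python
# Sketch: load a 7B model sharded across all visible GPUs with
# device_map="auto", then move inputs to the device of the input-embedding
# layer so the forward pass does not mix devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "GritLM/GritLM-7B",           # assumed checkpoint name
    torch_dtype=torch.bfloat16,
    device_map="auto",            # shards layers across available GPUs
)
tokenizer = AutoTokenizer.from_pretrained("GritLM/GritLM-7B")

inputs = tokenizer("Hello world", return_tensors="pt")
# Inputs must live where the first layer (the embeddings) lives.
embed_device = model.get_input_embeddings().weight.device
inputs = {k: v.to(embed_device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)
```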
Thank you for your contribution! I have encountered some issues.
1. Full training
Here is my training script:
Why do I get an OOM (Out of Memory) error? My GPU is an 80GB A800, and the model is only 7B with a batch size of 1. I believe this configuration should not cause an OOM.
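A back-of-envelope estimate (my own sketch, assuming plain Adam with bf16 mixed precision and no optimizer-state sharding) suggests why full fine-tuning of a 7B model can exceed 80GB even at batch size 1: the optimizer states alone are several times the size of the weights.

```python
# Rough memory budget for full fine-tuning of a 7B-parameter model with Adam
# under bf16 mixed precision (activations and CUDA overhead excluded).
params = 7e9
weights_bf16 = params * 2      # 14 GB: bf16 model weights
grads_bf16   = params * 2      # 14 GB: bf16 gradients
master_fp32  = params * 4      # 28 GB: fp32 master copy of the weights
adam_states  = params * 4 * 2  # 56 GB: fp32 Adam first and second moments
total_gb = (weights_bf16 + grads_bf16 + master_fp32 + adam_states) / 1e9
print(f"~{total_gb:.0f} GB before activations")  # ~112 GB, well over 80 GB
```

Under these assumptions the budget is exhausted before a single activation is stored, which is why optimizer-state sharding (DeepSpeed ZeRO or FSDP) or LoRA is usually needed to fully fine-tune a 7B model on 80GB cards.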
2. LoRA training
To be able to train at all, I used the --lora option. However, the checkpoint saved after training is 24GB, while the original model was only 14GB:
I would like to know why this is the case. Additionally, I received the following warning when loading:
Some weights of the model checkpoint at /mnt/data1/zmj/embedding_model/gritlm-main/gritlm/output/7-2_lora were not used when initializing MistralForCausalLM: ['model.base_model.model.embed_tokens.weight', 'model.base_model.model.layers.0.input_layernorm.weight', 'model.base_model.model.layers.0.mlp.down_proj.weight', 'model.base_model.model.layers.0.mlp.gate_proj.weight',...]
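The unused-weights warning suggests the checkpoint was saved with the PEFT wrapper still attached, so every key carries the base_model.model. prefix and a plain MistralForCausalLM cannot match them; saving the full wrapped model instead of only the adapter weights would also help explain the 24GB file size. A possible way to recover a cleanly loadable model, assuming it really is a PEFT/LoRA checkpoint (the paths below are placeholders):

```python
# Sketch, assuming the checkpoint is a PEFT/LoRA save whose keys carry the
# 'base_model.model.' wrapper prefix. All paths are placeholders.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "/path/to/lora/checkpoint")
merged = model.merge_and_unload()          # fold the LoRA deltas into the base
merged.save_pretrained("/path/to/merged")  # loads as MistralForCausalLM cleanly
```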
3. attn
After reading the paper, I understand that you used bidirectional attention for training the embedding task. However, why does the example script you provided for the embedding task use --attn cccc?
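If I read the repo correctly, the --attn value is a four-letter code: the first two letters set the attention over the instruction and the sample during the embedding pass, the last two during the generative pass, with 'b' for bidirectional and 'c' for causal. So 'cccc' is fully causal, while 'bbcc' (bidirectional embedding, causal generation) is the paper's setting. A minimal usage sketch; the attn keyword is my assumption based on the inference wrapper:

```python
# Sketch: instantiating the repo's GritLM wrapper with an explicit attention
# code. 'bbcc' = bidirectional attention for the embedding pass, causal for
# generation; 'cccc' = causal everywhere. The attn keyword is assumed here.
from gritlm import GritLM

model = GritLM("GritLM/GritLM-7B", torch_dtype="auto", attn="bbcc")
embeddings = model.encode(["GRIT unifies embedding and generation."])
```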
I look forward to your response.