LoRA on T5 model #169
I am trying to use LoRA on a loaded checkpoint of a CodeT5 model. However, when I do, the runtime is about the same, and my results are not as good as when I fine-tune the whole model. Am I initializing the model properly?
from transformers import T5ForConditionalGeneration
from peft import LoraConfig, LoraModel

rank = 16
lora_alpha = 4
lora_dropout = 0.05
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-small")
# freeze the base model's parameters
for name, param in model.named_parameters():
    param.requires_grad = False
# apply LoRA to the attention query and value projections
lora_config = LoraConfig(inference_mode=False, r=rank, target_modules=["q", "v"], lora_alpha=lora_alpha, lora_dropout=lora_dropout)
lora_model = LoraModel(model, lora_config, "default")
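For what it's worth, here is a small sanity check of how those hyperparameters interact, using the standard LoRA formulation (W' = W + (lora_alpha / r) * B @ A). This is a pure-Python sketch of the math, not PEFT's internals, and the 512x512 projection size is a hypothetical stand-in for a small T5 variant's attention projections:

```python
def lora_scaling(lora_alpha: float, r: int) -> float:
    """Scaling factor applied to the low-rank update B @ A."""
    return lora_alpha / r

def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters LoRA adds to one d_in x d_out linear layer:
    A has shape (r, d_in), B has shape (d_out, r); the base weight stays frozen."""
    return r * d_in + d_out * r

# With the settings from the snippet above (r=16, lora_alpha=4):
print(lora_scaling(4, 16))                   # 0.25 -- alpha < r shrinks the update
# For a hypothetical 512x512 projection:
print(lora_trainable_params(512, 512, 16))   # 16384 trainable vs 262144 frozen weights
```

Note that lora_alpha=4 with r=16 gives a scaling of 0.25, which damps the learned update; alpha is often set at or above r. Also, LoRA reduces the number of trainable parameters, but the forward and backward passes still traverse the full frozen model, so per-step runtime similar to full fine-tuning is expected.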
Thank you