An error when using LoRA for the s3prl frontend #5721
Labels: Bug, bug should be fixed
Comments
Sorry for the late reply. Currently, please avoid using LoRA with the s3prl frontend. An alternative to LoRA could be the Houlsby adapter.
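For reference, the Houlsby adapter mentioned above is a small bottleneck module inserted after a frozen sub-layer, with only the adapter parameters trained. The following is a minimal PyTorch sketch; the class name, sizes, and placement are illustrative assumptions, not ESPnet's actual API.

```python
import torch
import torch.nn as nn

class HoulsbyAdapter(nn.Module):
    """Bottleneck adapter (Houlsby et al., 2019 style): down-project,
    nonlinearity, up-project, plus a residual connection.
    Sketch only; dimensions here are hypothetical."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as an
        # identity map and training begins from the frozen model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Illustrative usage: apply to the output of one (frozen) frontend layer.
layer_out = torch.randn(2, 10, 768)   # (batch, time, feature)
adapter = HoulsbyAdapter(768)
out = adapter(layer_out)
```

Because of the zero initialization, `out` equals `layer_out` before any training step, so inserting the adapter does not perturb the pretrained frontend.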
Describe the bug
When applying LoRA fine-tuning to the s3prl frontend (e.g. hubert_base), the output has no gradients. More specifically, I used only the last layer instead of the multi-layer weighted sum.
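The "no gradients" symptom is what you get whenever the frontend's output is detached from the autograd graph before it reaches the loss. The sketch below is a hypothetical, self-contained reproduction of that symptom with plain `nn.Linear` stand-ins; it is not ESPnet or s3prl code, just an illustration of how a `detach()` between the frontend and the downstream head leaves the frontend (and any LoRA weights inside it) without gradients.

```python
import torch
import torch.nn as nn

# Stand-ins (hypothetical): "frontend" plays the role of the s3prl
# frontend containing LoRA weights; "head" is the downstream model.
frontend = nn.Linear(16, 8)
head = nn.Linear(8, 4)

x = torch.randn(3, 16)
feats = frontend(x).detach()  # a detach here breaks the graph
loss = head(feats).sum()
loss.backward()

# The head still receives gradients, but nothing flows back into the
# frontend -- matching the reported behavior.
head_has_grad = head.weight.grad is not None        # True
frontend_has_grad = frontend.weight.grad is not None  # False
```

Checking `param.grad` after one backward pass like this is a quick way to confirm whether the LoRA parameters are actually connected to the loss.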
Basic environments:
- Python: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0]
- espnet: 202402
- PyTorch: 2.1.0
- Git hash: ec9760b22654dc04eeecd37e2659ebda0325a786 (Sat Mar 2 01:25:58 2024 -0500)

Environments from `torch.utils.collect_env`: e.g.,
Task information:
To Reproduce
Steps to reproduce the behavior:
`cd egs2/librispeech_100/asr1`
Error logs