
Details on LoRA #34

Open
kalida-one opened this issue Dec 14, 2023 · 0 comments
Labels
fine-tuning (Fine-tuning DISC-LawLLM), needs triage (The issue needs to be triaged by some maintainer)

Comments

@kalida-one

I read the technical report carefully and could not find the details of the LoRA training mentioned in the repository, especially regarding the learning rate: why is the full-parameter fine-tuning learning rate 5e-5, much higher than the 1e-5 used for LoRA training? I am curious what kind of performance this leads to and would appreciate a reply.

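For context, here is a minimal sketch (using Hugging Face PEFT) of where the two learning rates mentioned above would be set in a typical setup. The model name, LoRA rank/alpha, and target modules below are placeholder assumptions for illustration, not DISC-LawLLM's actual training configuration.

```python
# Hypothetical sketch only -- not DISC-LawLLM's training code.
# Model name ("gpt2"), rank, alpha, and target_modules are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: freeze the base weights and train only the low-rank adapter matrices.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per architecture
    task_type="CAUSAL_LM",
)
lora_model = get_peft_model(base, lora_cfg)

# The two learning rates this issue asks about:
lora_args = TrainingArguments(output_dir="out-lora", learning_rate=1e-5)  # LoRA run
full_args = TrainingArguments(output_dir="out-full", learning_rate=5e-5)  # full-parameter run
```

With only the low-rank adapters trainable, the effective parameter count and gradient dynamics differ from full fine-tuning, which is why the two runs are typically given separate learning rates; the specific values used for DISC-LawLLM are what this issue is asking the maintainers to document.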
Charlie-XIAO added the needs triage (The issue needs to be triaged by some maintainer) and fine-tuning (Fine-tuning DISC-LawLLM) labels on Dec 15, 2023
Charlie-XIAO changed the title from 关于LoRA模型的训练效果 (Training performance of the LoRA model) to Details on LoRA on Dec 15, 2023