A question: after full fine-tuning, can a ChatGLM2 model still be fine-tuned with LoRA? #145
Yes, that works; full fine-tuning does not prevent a later LoRA fine-tune.

You used ChatGLM2's own built-in quantization, right? I don't recommend using its built-in quantization method.

Yes, during full fine-tuning I used ChatGLM2's int8 quantization. So you're saying my fully fine-tuned model shouldn't be quantized? My machine is limited, though; without quantization it can't run the model.

Upgrade transformers to the latest version, then use the quantization method that transformers provides (under the hood it uses bitsandbytes). Make that change and give it a try.

Thanks, I'll try that first.
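As a rough illustration of the answer above, LoRA adapters can be attached to an already fully fine-tuned checkpoint with the `peft` library. This is a minimal sketch, not the project's own training script; the checkpoint path `./chatglm2-full-sft` and the hyperparameter values are placeholders you would replace with your own.

```python
# Sketch: LoRA fine-tuning on top of a fully fine-tuned ChatGLM2 checkpoint.
# "./chatglm2-full-sft" is a hypothetical path to your full-SFT output.
from transformers import AutoModel
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModel.from_pretrained("./chatglm2-full-sft", trust_remote_code=True)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # LoRA rank (example value)
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM2's fused attention projection
)

# Wrap the model; only the small LoRA matrices are trainable now.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The wrapped model can then be trained as usual; the full-fine-tuned weights stay frozen while only the adapter weights are updated.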
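The suggestion above can be sketched as follows, assuming a recent `transformers` with `bitsandbytes` installed. Instead of calling ChatGLM2's own `.quantize()` method, quantization is requested at load time; the checkpoint path is again a placeholder.

```python
# Sketch: load with transformers' built-in 8-bit quantization
# (backed by bitsandbytes) instead of ChatGLM2's own quantize() method.
from transformers import AutoModel, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModel.from_pretrained(
    "./chatglm2-full-sft",          # hypothetical path to your checkpoint
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",              # place layers on available GPU(s)
)
```

With this route the original fine-tuned weights on disk stay unquantized, and the 8-bit conversion happens only in memory at load time, which also keeps the model compatible with LoRA fine-tuning via `peft`.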