
After ChatGLM LoRA fine-tuning, quantize=8 increases both GPU memory usage and inference latency #257

Open
moon4869 opened this issue Jul 3, 2023 · 1 comment

moon4869 commented Jul 3, 2023

Without quantization, inference uses 14 GB of GPU memory; with quantize=8 it rises to 20 GB, and with quantize=4 it uses 17 GB. What could be causing this?
The GPU is an A100 80G.

@horizon86

Try torch.cuda.empty_cache()
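A possible explanation (an assumption, not confirmed in this thread): the fp16 weights are loaded first and the quantized copy is created afterwards, so PyTorch's caching allocator still holds the freed fp16 blocks in reserve, and tools like nvidia-smi count both. A minimal sketch for checking this, comparing allocated vs. reserved memory around `empty_cache()` (the `report_cuda_memory` helper is hypothetical, for illustration only):

```python
import torch

def report_cuda_memory(tag: str) -> None:
    """Print allocated vs. reserved CUDA memory in GiB.

    "allocated" is memory held by live tensors; "reserved" also includes
    blocks the caching allocator keeps for reuse. The gap between the two
    is what torch.cuda.empty_cache() returns to the driver.
    """
    if not torch.cuda.is_available():
        print(f"{tag}: CUDA not available")
        return
    alloc = torch.cuda.memory_allocated() / 1024**3
    reserved = torch.cuda.memory_reserved() / 1024**3
    print(f"{tag}: allocated={alloc:.2f} GiB, reserved={reserved:.2f} GiB")

# Hypothetical usage: call once after model.quantize(8), then release the
# cached fp16 blocks and measure again.
report_cuda_memory("after quantize")
if torch.cuda.is_available():
    torch.cuda.empty_cache()
report_cuda_memory("after empty_cache")
```

If the "reserved" number drops while "allocated" stays put, the extra usage was allocator cache rather than the quantized model itself.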
