The GPU memory usage is too high #3
torch.cuda.OutOfMemoryError: CUDA out of memory.
How can I run the pipeline on several GPUs, e.g. 4×4090?
You can try this to run it on two GPUs:
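The snippet the commenter attached was not captured in this thread. A minimal sketch of the idea, splitting a two-stage pipeline across two GPUs and moving activations at the stage boundary, is below. The `Stage` module and device choices are illustrative stand-ins, not code from this repo, and the sketch falls back to CPU so it runs anywhere:

```python
import torch
import torch.nn as nn

# Pick two devices; fall back to CPU so the sketch runs without GPUs.
dev0 = torch.device("cuda:0") if torch.cuda.device_count() >= 1 else torch.device("cpu")
dev1 = torch.device("cuda:1") if torch.cuda.device_count() >= 2 else dev0

class Stage(nn.Module):
    """Stand-in for one heavy pipeline component (e.g. the transformer)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(x))

# Place each stage on its own device.
stage_a = Stage().to(dev0)  # e.g. the ControlNet on GPU 0
stage_b = Stage().to(dev1)  # e.g. the transformer on GPU 1

def forward_split(x: torch.Tensor) -> torch.Tensor:
    # Move the activations between devices at the stage boundary.
    h = stage_a(x.to(dev0))
    return stage_b(h.to(dev1))

out = forward_split(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

Note that this only helps if the two components can actually live on different devices; the later comments in this thread discuss exactly that constraint.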
Could you kindly provide a script showing how this method works with main.py?
It does not work. The transformer needs 24 GB and the ControlNet 4 GB, and they have to be on the same GPU.
It worked; I used two 3090s to get results, but the inpainting was poor and it followed the redraw prompt badly.
#27 fixed some bugs; it now needs 28 GB of VRAM.
I can run it successfully with torchao, using about 20 GB of VRAM; the results are great.
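The commenter does not share their torchao configuration. As an illustration of the same idea (int8 weight quantization to cut VRAM), here is a minimal sketch using PyTorch's built-in dynamic quantization instead of torchao; the toy model is a stand-in for the real transformer, not code from this repo:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic

# Toy FP32 model standing in for the heavy transformer.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 64)).eval()

# Convert Linear weights to int8. Activations stay FP32 and are
# quantized on the fly, so no calibration data is needed.
qmodel = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.inference_mode():
    out = qmodel(torch.randn(1, 256))
print(out.shape)  # torch.Size([1, 64])
```

Int8 weights take roughly a quarter of the memory of FP32 ones, which is consistent with the reported drop from ~28 GB toward ~20 GB when only part of the pipeline is quantized.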
Can you share the demo with me?
How? I'm trying quantization with bitsandbytes, but it seems gradients are required for the transformer, making the int8 version unusable!
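The "gradient required" failure usually comes from running the quantized transformer with autograd still active. For pure inference you can freeze the parameters and wrap the forward pass in `torch.inference_mode()`; a minimal sketch with an illustrative stand-in module (this does not reproduce the bitsandbytes setup, only the autograd fix):

```python
import torch
import torch.nn as nn

transformer = nn.Linear(32, 32)  # stand-in for the real transformer

# Freeze every parameter so no gradient is ever requested for them.
transformer.requires_grad_(False)
transformer.eval()

# inference_mode() disables autograd tracking entirely, which both
# avoids the gradient path and saves activation memory.
with torch.inference_mode():
    out = transformer(torch.randn(4, 32))

print(out.requires_grad)  # False
```

If some wrapper in the pipeline still calls `.backward()` or builds a graph, that call has to be removed as well; freezing alone does not stop an explicit backward pass.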
About 60 GB? That is scary; can it be optimized?