
Solution to the memory issues in the official demo #27

Open
hubert0527 opened this issue Oct 31, 2024 · 1 comment

Comments

@hubert0527

There is a typo here:

"black-forest-labs/FLUX.1-dev", subfolder='transformer', torch_dytpe=torch.bfloat16

The fix
The fix is to replace torch_dytpe with torch_dtype.
Before the fix, the model silently ignores the data type specification and loads the FLUX transformer in float32, which requires an enormous amount of memory.
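The misspelling fails silently because `from_pretrained` forwards unrecognized keyword arguments rather than rejecting them. A minimal toy sketch (not the actual diffusers code) of why no error is raised:

```python
def from_pretrained(repo_id, torch_dtype=None, **kwargs):
    """Toy stand-in for a loader that accepts **kwargs.

    An unknown keyword such as the misspelled `torch_dytpe` is
    swallowed by **kwargs, so the dtype silently falls back to
    the default instead of raising a TypeError.
    """
    return torch_dtype or "float32"

# Misspelled keyword: silently ignored, weights load in float32.
print(from_pretrained("black-forest-labs/FLUX.1-dev", torch_dytpe="bfloat16"))
# → float32

# Correct keyword: the requested dtype is honored.
print(from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype="bfloat16"))
# → bfloat16
```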
After the fix, I can run inference with 28.5 GB of memory.
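Back-of-the-envelope arithmetic is consistent with this. Assuming the FLUX.1-dev transformer has roughly 12B parameters (an assumption based on its model card, not stated in this thread), the weights alone take about twice the space in float32 as in bfloat16:

```python
params = 12e9  # assumed parameter count for the FLUX.1-dev transformer

fp32_gib = params * 4 / 2**30  # float32: 4 bytes per parameter
bf16_gib = params * 2 / 2**30  # bfloat16: 2 bytes per parameter

print(f"float32 weights:  ~{fp32_gib:.1f} GiB")   # ~44.7 GiB
print(f"bfloat16 weights: ~{bf16_gib:.1f} GiB")   # ~22.4 GiB
```

The bfloat16 figure plus activations and the rest of the pipeline lines up with the ~28.5 GB observed above, while float32 weights alone would already exceed most single-GPU budgets.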

Potentially related to #3, #23, #24

Misc
As a side note for those who want more memory savings: somehow pipe.enable_xformers_memory_efficient_attention() conflicts with FLUX+ControlNet (it reports a shape mismatch error, see #9).
On the other hand, I haven't tested pipe.enable_model_cpu_offload() yet.

@aycaecemgul

Thank you! I was frustrated by this!
