There is a typo in FLUX-Controlnet-Inpainting/main.py (line 17 in 7c00862).
The fix

The fix is to replace torch_dytpe with torch_dtype.

Before the fix, the model ignores the data type specification and loads the FLUX transformer at float32, which requires an enormous amount of memory. After the fix, inference runs within 28.5 GB of memory.
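Why the typo fails silently: loaders like diffusers' from_pretrained accept arbitrary extra keyword arguments, so a misspelled torch_dytpe is swallowed without any error and the default float32 is used. A minimal stub (the function below is a hypothetical stand-in, not the real diffusers API) illustrates the mechanics:

```python
def from_pretrained_stub(model_id, torch_dtype="float32", **kwargs):
    """Hypothetical stand-in for a from_pretrained-style loader: unknown
    keyword arguments land in **kwargs and are never validated, so a
    misspelled dtype keyword is silently ignored."""
    if kwargs:
        # Nothing is raised for unrecognized keywords.
        print(f"ignored kwargs: {sorted(kwargs)}")
    return {"model": model_id, "dtype": torch_dtype}

# Typo: 'torch_dytpe' falls into **kwargs, so the model loads at float32.
broken = from_pretrained_stub("flux-transformer", torch_dytpe="bfloat16")

# Fix: 'torch_dtype' matches the named parameter and is applied.
fixed = from_pretrained_stub("flux-transformer", torch_dtype="bfloat16")

print(broken["dtype"], fixed["dtype"])  # float32 bfloat16
```

The same silent-ignore behavior is why the bug shows up only as high memory usage rather than as an exception.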
Potentially related to #3, #23, #24
Misc

As a side note for those who want more memory savings: pipe.enable_xformers_memory_efficient_attention() somehow conflicts with FLUX + ControlNet (it reports a shape mismatch error, see #9). On the other hand, I haven't tested pipe.enable_model_cpu_offload() yet.