extract LoRA from checkpoints in colab #52
Replies: 7 comments 2 replies
-
What I'm getting is a memory error, I'm sure. The memory graph goes red and then it crashes.
-
You might try the "unload model" button. It unloads the model that was loaded for image generation, increasing the amount of memory available.
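As a rough illustration of what such an unload step does, here is a minimal sketch; the helper name, the dict-of-models structure, and the torch cleanup are assumptions for illustration, not the extension's actual code:

```python
import gc

def unload_models(model_refs):
    """Drop references to loaded models so their memory can be reclaimed.

    model_refs: a dict mapping names to model objects (hypothetical structure).
    """
    for name in list(model_refs):
        model_refs[name] = None  # release the Python-side reference
    gc.collect()  # reclaim host memory held by the dropped objects
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached VRAM to the driver
    except ImportError:
        pass  # torch not installed; nothing GPU-side to free
```

The point is simply that the model weights must be dereferenced before the allocator can reuse their memory for the LoRA extraction.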
-
I have paid Colab and no memory issues, but still no luck. As of recently, SM doesn't load in Colab at all - a Python issue?
-
If this script doesn't show up in the web UI, there should be some errors in the command prompt.
-
The extension is loading in Colab again now.
-
I am getting a "duplicate lora name" error in multiple Stable Diffusion environments - on Windows, Mac, and Colab, all of them newly installed. I tried lowering the diffusers version to 0.10.2, but it didn't help. The same thing happens no matter which model I use, even with the same model for both A and B. Error message below. I saw a similar reported issue mentioning "320 modules" for the U-Net.
make LoRA start
-
For the error beginning with "Traceback (most recent call last):", try: pip install diffusers==0.14.0
-
Let's discuss this issue here because it is reported a lot.
The diffusers version and memory are the two most likely causes so far; setting the diffusers version to 0.10.2 or later may solve the problem. The memory problem may be solved by reducing the size of the model used.
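A quick way to confirm the installed diffusers version meets that 0.10.2 minimum before running the extraction; this is a minimal sketch, and the helper names are assumptions, not part of the extension:

```python
from importlib.metadata import version, PackageNotFoundError

MIN_DIFFUSERS = "0.10.2"  # minimum suggested in this thread

def version_tuple(v):
    # "0.14.0" -> (0, 14, 0); ignores any pre-release suffix for simplicity
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def diffusers_ok(installed=None, minimum=MIN_DIFFUSERS):
    """Return True if the installed diffusers version is >= the minimum."""
    if installed is None:
        try:
            installed = version("diffusers")
        except PackageNotFoundError:
            return False  # diffusers is not installed at all
    return version_tuple(installed) >= version_tuple(minimum)
```

If the check fails, reinstalling with pip install diffusers==0.14.0 (as suggested above) is one way to get a known-good version.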