webui Lora Might be causing errors in checkpoint models. #101
Hello. I cannot reproduce this issue. I would check to see that your model path is correct. If it is, then could you please post the following? (You can remove any personally identifiable information).
Here is a link to one of the problematic models: https://www.dropbox.com/sh/ttvqyfddlq0mvjl/AAAjeXguhPXSanFA2x_--4xLa?dl=0 It is based on this model: https://www.dropbox.com/sh/247hj87lcvewsb5/AADeZsqTDTAE1mI2WlsclcU7a?dl=0 which in turn is based on the original diffusers model. I'm not able to get to the yaml or log file at the moment, but maybe you will notice something here. The error occurred when loading the model for inference with inference.py. I believe it could be related, since it's a similar error message, but in my case it lists many of the model's layers.
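When a load error lists "a lot of the layers in the model" like this, it is usually a state_dict key mismatch: the checkpoint on disk contains keys the model class doesn't expect (for example, leftover LoRA adapter weights), or is missing keys the model needs. A minimal sketch of how one might diff the two key sets before loading (the key names below are illustrative, not taken from this repo):

```python
def diff_state_dicts(model_keys, ckpt_keys):
    """Compare the parameter names a model expects against the names
    actually stored in a checkpoint file.

    Returns (missing, unexpected):
      missing    - keys the model expects but the checkpoint lacks
      unexpected - keys in the checkpoint the model doesn't know about
                   (e.g. stray LoRA adapter weights)
    """
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    missing = sorted(model_keys - ckpt_keys)
    unexpected = sorted(ckpt_keys - model_keys)
    return missing, unexpected


# Toy demonstration with made-up layer names; in practice the two lists
# would come from model.state_dict().keys() and torch.load(path).keys().
missing, unexpected = diff_state_dicts(
    ["down_blocks.0.attn.weight", "down_blocks.0.attn.bias"],
    ["down_blocks.0.attn.weight", "lora.down_blocks.0.attn.weight"],
)
print("missing:", missing)
print("unexpected:", unexpected)
```

If the "unexpected" list is full of LoRA-prefixed names, the checkpoint was likely saved with the adapter weights merged into the file rather than kept separate, which would match the reporter's observation that disabling LoRA training avoids the error.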
This is a link to a config.json file that was created. Since disabling the LoRA training, I haven't had issues with that error message, so it could be a glitch in the version of the software I used. But I wanted to report it to confirm whether something is truly going on. Is anyone else able to reproduce the error message with this model? Or is there something wrong in the model's configuration that could be easily fixed so that I can use the model?
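One quick way to check whether the generated config.json is the problem is to diff it against the config of the base model it was trained from. A minimal sketch, assuming each file has already been read with json.load into a dict (the field names below are illustrative examples of diffusers-style config keys, not verified against this model):

```python
def diff_configs(base_cfg, trained_cfg):
    """Return the keys whose values differ between two config dicts,
    mapped to a (base_value, trained_value) pair. Keys present in only
    one dict show up with None on the other side."""
    all_keys = set(base_cfg) | set(trained_cfg)
    return {
        k: (base_cfg.get(k), trained_cfg.get(k))
        for k in all_keys
        if base_cfg.get(k) != trained_cfg.get(k)
    }


# Toy demonstration: a single mismatched field like this can be enough
# to make a checkpoint fail to load for inference.
base_cfg = {"sample_size": 64, "in_channels": 4, "cross_attention_dim": 768}
trained_cfg = {"sample_size": 64, "in_channels": 4, "cross_attention_dim": 1024}
print(diff_configs(base_cfg, trained_cfg))
```

If the diff is empty, the configuration is probably not the culprit and the mismatch is more likely in the saved weights themselves.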
Has anyone else had similar issues? I believe it has to do with the LoRA training, because I only notice this behavior in models created while also training the new webui LoRA. The most recent model did not use LoRA and had no such issues.