load ltx loras trained with finetrainers #6174
Conversation
This isn't the correct way of doing this. The correct way is adding an entry like: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/lora.py#L384 The even more correct way is to convert loras to the ComfyUI format, which is the format used when you use the lora save node.
Currently this doesn't work. I'll keep working on it, but I don't have time right now.
This seems to do the trick. Although, again, I won't argue that this is the right way of doing it, or whether all LTX loras look like this. Just close it if it's not admissible.
@neph1 I have the same problem. I want to use the 0.9.1 model with a lora trained by finetrainers, but it seems that isn't possible at present? How are you dealing with it now?
Do you mean you have trained with 0.9.1? They have only released the bf16 version of that, right? It's a whole different model, so any loras will need to be trained from scratch.
I trained it based on a-r-r-o-w/LTX-Video-diffusers, but I'm not sure if this lora is suitable for the 0.9.1 model.
I don't think so. They need to release the relevant transformers model, I believe.
Hi, diffusers can load the lora, but comfyui cannot — how do you run it in comfyui? Even after adding the modified code, the trained lora doesn't seem to run? @neph1
@linesword fwiw, I've uploaded my script that renames the keys so that the lora is recognized by comfyui here: https://github.com/neph1/finetrainers-ui/tree/main/scripts
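For reference, here is a hypothetical sketch of what such a rename script might do; the actual script in the linked repo may differ. The only assumption, taken from the key names shown in this PR, is that finetrainers/diffusers loras prepend "transformer." to every key:

```python
# Sketch only: rename finetrainers/diffusers lora keys so that ComfyUI's
# lora loader recognizes them, assuming the sole difference is the
# leading "transformer." segment (as described in this PR).

def rename_key(key: str) -> str:
    """Strip the 'transformer.' prefix that finetrainers/diffusers prepends."""
    return key.removeprefix("transformer.")

def rename_state_dict(state_dict: dict) -> dict:
    """Apply the key rename to a whole lora state dict."""
    return {rename_key(k): v for k, v in state_dict.items()}

if __name__ == "__main__":
    # File-level usage (requires the `safetensors` package; the paths
    # are placeholders, passed on the command line):
    import sys
    from safetensors.torch import load_file, save_file

    src, dst = sys.argv[1], sys.argv[2]
    save_file(rename_state_dict(load_file(src)), dst)
```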
I've confirmed that lora loading works with 0.9.1. Example (lora trained with 0.9.0, inferenced with 0.9.1, using comfy's built-in t2v nodes). (I'd show an animation, but webp isn't supported.) @comfyanonymous Can this be considered for merging?
Hi @neph1, thanks so much for working on this. Can you summarize the conclusion about the training and inference? If we train on 0.9.1 with finetrainers and the keys are properly mapped with your UI, why do we still need this PR?
If you don't need it, great! Maybe something has changed in either diffusers or comfyui to better align the two. I only know that when I wrote it, it was needed. I also got a PM only two days ago saying this PR was the only way they could get their lora working. So maybe it's something about versions?
No, I meant I haven't tried it yet and am wondering what the steps are. So it sounds like I still need this PR? Does it work with the latest comfyui build?
If you want to use this with the latest comfyui, I guess you need to clone my fork: https://github.com/neph1/ComfyUI/
and hope there are no conflicts :) Edit: Or just copy-paste those 6 lines into your lora.py, if you're comfortable with that.
Hi
I've been playing around with finetuning LTX-Video using finetrainers, and by default the loras won't load in ComfyUI. I noticed the naming of the keys was slightly different, with "transformer." prepended:
transformer.transformer_blocks.0.attn1.to_k.lora_A.weight
This PR makes them recognized by ComfyUI, but I have no idea whether it's a viable solution, whether it applies to loras other than these, or whether it should be fixed elsewhere.
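The mismatch described above can be illustrated in isolation. This is not the actual PR diff — just a minimal demonstration, using the key from this description, that the finetrainers name differs from the block path ComfyUI looks up only by the leading "transformer." segment:

```python
# Key as written out by finetrainers/diffusers (quoted in this PR):
finetrainers_key = "transformer.transformer_blocks.0.attn1.to_k.lora_A.weight"

def strip_transformer_prefix(key: str) -> str:
    # Drop the leading "transformer." so the remaining path matches the
    # block naming that ComfyUI's lora loader expects (per this PR's
    # description; other lora formats may differ).
    return key.removeprefix("transformer.")

print(strip_transformer_prefix(finetrainers_key))
# -> transformer_blocks.0.attn1.to_k.lora_A.weight
```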