
Add ARB for training #144

Open
Yidhar opened this issue Sep 18, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@Yidhar

Yidhar commented Sep 18, 2023

In the training environment there is no automatic clustering of images by resolution (i.e., aspect ratio buckets); I think this feature is needed. Training at a fixed resolution consumes much more memory per batch than SDXL, even though the UNet + prior of Kandinsky 2.2 should total about 2.2B parameters, slightly less than SDXL, so maybe there are some performance issues. The second problem is the "unload model" feature in Settings: LoRA fails to be properly released from video memory (and possibly other model caches as well). When I repeatedly unloaded the model and then reloaded it with LoRA to complete sampling, more and more content stayed in video memory, until I had to close the app completely to free it.
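For reference, aspect ratio bucketing groups images into a small set of target resolutions with similar pixel area, so every batch shares one tensor shape instead of cropping everything to a square. A minimal, framework-free sketch of the idea (the bucket list and helper names are illustrative, not from this project's codebase; real trainers generate buckets around a target area in multiples of 64):

```python
# Illustrative bucket list: fixed pixel budget, varying aspect ratios.
# Real trainers derive these programmatically around a target area.
BUCKETS = [(512, 2048), (640, 1536), (768, 1280), (1024, 1024),
           (1280, 768), (1536, 640), (2048, 512)]

def assign_bucket(width: int, height: int) -> tuple[int, int]:
    """Return the bucket (w, h) whose aspect ratio is closest to the image's."""
    ar = width / height
    return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - ar))

def group_by_bucket(sizes):
    """Map bucket -> list of image indices; batches are then drawn per bucket,
    so all images in a batch can be resized to one shared shape."""
    groups: dict[tuple[int, int], list[int]] = {}
    for i, (w, h) in enumerate(sizes):
        groups.setdefault(assign_bucket(w, h), []).append(i)
    return groups
```

For example, a 1920×1080 image would land in the 1280×768 bucket, while square images stay at 1024×1024, so neither gets distorted or center-cropped to a common square resolution.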

@seruva19
Owner

Aspect ratio bucketing was already on my personal feature wishlist, so sooner or later I will implement it. I cannot provide an exact timeline, though, since I don't have much time to work on this project currently.
As for training, I simply used training scripts from the diffusers' repo, so there surely must be some room for optimization.
And regarding the issues with LoRA and model unloading, I will try to fix them.

Thank you for your feedback; I appreciate the time you spent trying out this app.

@seruva19 seruva19 added the enhancement New feature or request label Sep 25, 2023
@seruva19
Owner

seruva19 commented Jan 2, 2025

This will probably be implemented as part of the tentatively planned LoRA training for KD 4.0, so I'm still not closing this issue.
