Why are the UNet parameters frozen during training for SD 1.5, but not for SDXL? The Hugging Face SDXL training script even calls unet.train().

The reason is that the SDXL script uses params_to_optimize = adapter.parameters(), so only the adapter's parameters are ever optimised and the unfrozen UNet is never updated. Calling unet.requires_grad_(False) would still reduce memory consumption, though; it looks like that was simply forgotten.

The relevant snippets:
***** huggingface train sdxl *********
vae.requires_grad_(False)
text_encoder_one.requires_grad_(False)
text_encoder_two.requires_grad_(False)
t2iadapter.train()
unet.train()
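For reference, a minimal, self-contained sketch of what that setup amounts to. The nn.Linear stand-ins replace the real diffusers models (UNet, VAE, text encoders, T2I-Adapter) purely to keep the snippet runnable, and the variable names just mirror the lines above; the params_to_optimize line is the detail that matters.

***** sketch: huggingface-style setup (illustrative) *****
import torch
from torch import nn

# Stand-ins for the real diffusers models; tiny layers keep this runnable.
vae = nn.Linear(8, 8)
text_encoder_one = nn.Linear(8, 8)
text_encoder_two = nn.Linear(8, 8)
unet = nn.Linear(8, 8)
t2iadapter = nn.Linear(8, 8)

# Freeze the parts that are never trained.
vae.requires_grad_(False)
text_encoder_one.requires_grad_(False)
text_encoder_two.requires_grad_(False)

# .train() only switches training-mode behaviour (dropout, batch norm);
# it does not decide which parameters get updated.
t2iadapter.train()
unet.train()

# Only the adapter's parameters are handed to the optimizer, so the UNet
# stays unchanged even though it was never frozen.
params_to_optimize = t2iadapter.parameters()
optimizer = torch.optim.AdamW(params_to_optimize, lr=1e-5)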
*******Tencent ARC train sd1v5 **************
model.cuda()
model.eval()  # model wraps all the sub-models: VAE, CLIP text encoder, ...
return model
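Worth noting when comparing the two snippets: .eval() and .requires_grad_(False) are not interchangeable. eval() only changes the runtime behaviour of layers such as dropout and batch norm; gradients are still computed. A small illustration (the layer is arbitrary):

***** sketch: eval() vs requires_grad_(False) *****
import torch
from torch import nn

layer = nn.Linear(4, 4)

# eval() does not freeze anything: gradients are still computed.
layer.eval()
layer(torch.randn(2, 4)).sum().backward()
print(layer.weight.grad is not None)   # True

# requires_grad_(False) is what actually stops gradient tracking.
layer.requires_grad_(False)
out = layer(torch.randn(2, 4)).sum()
print(out.requires_grad)               # False: no backward graph is built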
*******Tencent ARC train sdxl **************
vae.requires_grad_(False)
text_encoder_one.requires_grad_(False)
text_encoder_two.requires_grad_(False)  # note: the UNet is never set to requires_grad_(False), so its gradients are still computed
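And a tiny demonstration of the memory point from the answer above: autograd computes and stores gradients for any module whose parameters still require grad, whether or not the optimizer ever sees them, which is why freezing the UNet would still help even though only the adapter is optimised. The nn.Linear stand-ins are again only illustrative.

***** sketch: an unfrozen UNet still costs gradient memory *****
import torch
from torch import nn

unet = nn.Linear(16, 16)      # stand-in for the UNet
adapter = nn.Linear(16, 16)   # stand-in for the T2I-Adapter
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-5)  # only the adapter is optimised

x = torch.randn(4, 16)
unet(adapter(x)).sum().backward()
# The optimizer never touches the UNet, yet gradients were computed and stored for it.
print(unet.weight.grad is not None)   # True -> extra backward compute and .grad memory

# Freezing the UNet avoids that cost.
unet.weight.grad = None
unet.bias.grad = None
unet.requires_grad_(False)
unet(adapter(x)).sum().backward()
print(unet.weight.grad is None)       # True -> no gradient buffers kept for the UNet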