In FactorVAE's discriminator optimization, we don't want gradients from the discriminator loss to flow back into the VAE parameters, hence the detach operation here:
disentangling-vae/disvae/models/losses.py
Line 287 in f045219
However, wouldn't we need to detach the first set of latent vectors as well (i.e. latent_sample1), now that optimizer.step() has to be moved to the end (due to the in-place modification error already addressed in a closed issue in this repo)?
I ran a debugging session and indeed observed gradient changes on the VAE encoder after d_tc_loss.backward().
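For what it's worth, the leak is easy to reproduce with a minimal sketch. The toy encoder/discriminator below stand in for the real models; the shapes, loss, and labels are illustrative, not the repo's actual code — the point is only that backpropagating a discriminator loss through a non-detached latent sample populates the encoder's gradients:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins for the FactorVAE encoder and discriminator.
encoder = nn.Linear(4, 2)
discriminator = nn.Linear(2, 2)

x1, x2 = torch.randn(8, 4), torch.randn(8, 4)
latent_sample1 = encoder(x1)
latent_sample2 = encoder(x2)

ones = torch.ones(8, dtype=torch.long)
zeros = torch.zeros(8, dtype=torch.long)

# Mirrors the current code: only the second sample is detached.
d_z1 = discriminator(latent_sample1)           # NOT detached
d_z2 = discriminator(latent_sample2.detach())  # detached
d_tc_loss = 0.5 * (F.cross_entropy(d_z1, zeros) + F.cross_entropy(d_z2, ones))
d_tc_loss.backward()

# Gradient leaks into the encoder via the latent_sample1 path.
leaked = encoder.weight.grad is not None and bool(encoder.weight.grad.abs().sum() > 0)
print(leaked)  # True

# The fix discussed here: detach latent_sample1 as well.
encoder.zero_grad(set_to_none=True)
d_z1 = discriminator(latent_sample1.detach())
d_z2 = discriminator(latent_sample2.detach())
d_tc_loss = 0.5 * (F.cross_entropy(d_z1, zeros) + F.cross_entropy(d_z2, ones))
d_tc_loss.backward()

no_leak = encoder.weight.grad is None or bool(encoder.weight.grad.abs().sum() == 0)
print(no_leak)  # True
```

With both samples detached, d_tc_loss.backward() touches only the discriminator's parameters, so a deferred optimizer.step() on the VAE no longer applies the discriminator's gradients to the encoder.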