Codebook embedding does not update #14
Comments
I agree with the commenter above. It is so weird.
That part of the code is never executed! But I printed `model.codebook.embedding.weight.data` and found that it is still updated!
Actually, `ctx.needs_input_grad[0]` and `ctx.needs_input_grad[1]` are set to True and False alternately. This setting is reasonable because there are two "agents", namely the codebook and the autoencoder, which are updated w.r.t. different parts of the loss function.
I debugged the code and found that `ctx.needs_input_grad[1]` is always False rather than alternating between True and False. Therefore, even though `ctx.needs_input_grad[1]` is always False, the codebook can still be updated.
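For anyone puzzled by this, here is a minimal, self-contained sketch (not the repository's code; the tensor shapes, the 512x64 codebook, and the 0.25 weight are made up for illustration) of how a codebook can keep updating even when the straight-through path never produces a gradient for it: the codebook loss ||sg[z_e] - e||^2 is computed outside that path and reaches `embedding.weight` directly.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical shapes and names, chosen only for this illustration.
z_e = torch.randn(8, 64, requires_grad=True)   # encoder outputs
embedding = torch.nn.Embedding(512, 64)        # the codebook

# Nearest-neighbour lookup (the non-differentiable quantization step).
with torch.no_grad():
    indices = torch.cdist(z_e, embedding.weight).argmin(dim=1)
z_q = embedding(indices)

# Straight-through path for the decoder: gradients flow back to z_e only,
# so this path never produces a gradient for the codebook.
z_q_st = z_e + (z_q - z_e).detach()
recon_loss = z_q_st.pow(2).mean()              # stand-in for the reconstruction loss

# The codebook is trained by a separate loss term that bypasses the
# straight-through estimator entirely.
codebook_loss = F.mse_loss(z_q, z_e.detach())      # pulls codewords towards sg[z_e]
commitment_loss = F.mse_loss(z_e, z_q.detach())    # pulls z_e towards sg[e]

(recon_loss + codebook_loss + 0.25 * commitment_loss).backward()
print(embedding.weight.grad.abs().sum() > 0)   # tensor(True): the codebook still updates
```

So a False `needs_input_grad[1]` inside the custom backward is not contradictory with the embedding changing during training; the update just arrives through the codebook-loss term.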
I found that `ctx.needs_input_grad[1]` is `False` during training VQ-VAE. Is this correct, and does it mean the embedding of the codebook does not update during training?

pytorch-vqvae/functions.py, line 53 in 8d123c0
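For context, the line in question gates the codebook gradient on `ctx.needs_input_grad[1]`. Below is a hedged sketch of such a straight-through autograd Function (the class name `VQStraightThroughSketch` and all shapes are illustrative, not the file's exact contents). It shows why the flag stays False whenever the codebook tensor passed into `forward` is detached: autograd then never asks this Function for a codebook gradient.

```python
import torch
from torch.autograd import Function

class VQStraightThroughSketch(Function):
    """Minimal sketch of a straight-through vector-quantization Function."""

    @staticmethod
    def forward(ctx, inputs, codebook):
        indices = torch.cdist(inputs, codebook).argmin(dim=1)
        ctx.save_for_backward(indices, codebook)
        return codebook.index_select(0, indices)

    @staticmethod
    def backward(ctx, grad_output):
        grad_inputs, grad_codebook = None, None
        if ctx.needs_input_grad[0]:
            # Straight-through: copy the gradient to the encoder output.
            grad_inputs = grad_output.clone()
        if ctx.needs_input_grad[1]:
            # Only taken when the codebook tensor given to forward()
            # itself requires grad; a detached codebook keeps this False.
            indices, codebook = ctx.saved_tensors
            grad_codebook = torch.zeros_like(codebook)
            grad_codebook.index_add_(0, indices, grad_output)
        return grad_inputs, grad_codebook


z_e = torch.randn(8, 64, requires_grad=True)
embedding = torch.nn.Embedding(512, 64)

# Passing a detached codebook: needs_input_grad[1] is always False here,
# yet the embedding can still be updated through a separate lookup/loss term.
codes = VQStraightThroughSketch.apply(z_e, embedding.weight.detach())
codes.sum().backward()
print(z_e.grad is not None)            # True
print(embedding.weight.grad is None)   # True: no gradient via this path
```

If the model feeds the straight-through function a detached copy of the embedding weights and computes the codebook loss from a separate (non-detached) lookup, the observations in this thread are consistent: `needs_input_grad[1]` is always False, and the codebook is still trained.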