Unintentional decay of embeddings towards 0? #187

Open · samedii opened this issue Aug 21, 2020 · 0 comments

samedii commented Aug 21, 2020

By updating all embeddings regardless of whether they are used in the current batch, you are decaying them towards 0. Is this intended?

https://github.com/deepmind/sonnet/blob/master/sonnet/python/modules/nets/vqvae.py
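To illustrate the pattern in question, here is a simplified numpy sketch of an EMA codebook update, not sonnet's actual code (the function and variable names are illustrative). The point is that the decay multiplies every code's statistics, so codes with zero assignments in a batch still shrink, and once the Laplace-smoothing epsilon dominates their vanishing counts, the normalised embeddings themselves drift towards 0:

```python
import numpy as np

num_embeddings, embedding_dim = 8, 4
decay, epsilon = 0.99, 1e-5

def ema_update(ema_cluster_size, ema_w, encodings, flat_inputs):
    # encodings: one-hot code assignments, shape (batch, num_embeddings)
    # flat_inputs: encoder outputs, shape (batch, embedding_dim)
    # The decay is applied to *every* code, including codes with zero
    # assignments in this batch -- their statistics shrink geometrically.
    ema_cluster_size = decay * ema_cluster_size + (1 - decay) * encodings.sum(axis=0)
    ema_w = decay * ema_w + (1 - decay) * encodings.T @ flat_inputs
    # Laplace smoothing of the counts before normalising; for a code whose
    # EMA count has decayed towards 0, the epsilon term dominates and the
    # normalised embedding ema_w / count is pulled towards 0 as well.
    n = ema_cluster_size.sum()
    smoothed = (ema_cluster_size + epsilon) / (n + num_embeddings * epsilon) * n
    embeddings = ema_w / smoothed[:, None]
    return ema_cluster_size, ema_w, embeddings

# Demo: only code 0 is ever assigned; the other codes' embedding norms decay.
ema_cluster_size = np.ones(num_embeddings)
ema_w = np.random.randn(num_embeddings, embedding_dim)
encodings = np.eye(num_embeddings)[np.zeros(16, dtype=int)]  # batch uses code 0 only
flat_inputs = np.random.randn(16, embedding_dim)
for _ in range(2000):
    ema_cluster_size, ema_w, embeddings = ema_update(
        ema_cluster_size, ema_w, encodings, flat_inputs)
print(np.linalg.norm(embeddings, axis=1))  # norms of codes 1..7 are near 0
```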

I have mostly read re-implementations of your code in PyTorch, so it could be a bug on their side, but it looks like you are doing the same thing here.

I have tried removing the hidden decay and only updating the embeddings that are used, but this seems to lower perplexity during training.
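For reference, a sketch of one way to read that experiment, assuming "only update the embeddings that are used" means masking the EMA update by per-batch assignment counts (this reuses the names from the sketch above and is not the author's actual change):

```python
def masked_ema_update(ema_cluster_size, ema_w, encodings, flat_inputs):
    # Update only codes that were assigned at least once in this batch, so
    # unused codes keep their previous statistics instead of decaying.
    counts = encodings.sum(axis=0)   # per-code assignment counts this batch
    used = counts > 0                # mask of codes actually seen
    ema_cluster_size = ema_cluster_size.copy()
    ema_w = ema_w.copy()
    ema_cluster_size[used] = decay * ema_cluster_size[used] + (1 - decay) * counts[used]
    ema_w[used] = decay * ema_w[used] + (1 - decay) * (encodings.T @ flat_inputs)[used]
    embeddings = ema_w / np.maximum(ema_cluster_size, epsilon)[:, None]
    return ema_cluster_size, ema_w, embeddings
```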
