
calculation of kl_divergence #13

Open
xiyan524 opened this issue Jan 24, 2019 · 2 comments

Comments

@xiyan524

Hi, thanks for your excellent work.

I have a question about the formula for kl_divergence. In the code, the formula is:
kl_divergence = torch.ones_like(mu) + 2 * log_sigma - (mu ** 2) - (torch.exp(log_sigma) ** 2)
while I think the standard formula is:
kl_divergence = torch.ones_like(mu) + log_sigma - (mu ** 2) - torch.exp(log_sigma)

Therefore, I'm curious about this part. Could anyone provide some help?

@dangitstam
Owner

This formula is the closed-form KL divergence term in the ELBO objective. The model assumes that topic proportion vectors are distributed via a multivariate Gaussian; the closed-form term you've described penalizes the VAE's approximate posterior for straying too far from the standard normal prior.

Reading this paper may help! https://arxiv.org/abs/1312.6114
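
For what it's worth, here is a minimal sketch (not code from this repository; `mu` and `log_sigma` are stand-ins for the encoder outputs) showing that the two expressions agree if `log_sigma` is taken to be the log standard deviation rather than the log variance, which the squared exponential in the repository's version suggests:

```python
import torch

# A minimal sketch, assuming `mu` and `log_sigma` are encoder outputs and
# that `log_sigma` parameterizes log(sigma), the log standard deviation.
mu = torch.randn(4, 8)
log_sigma = torch.randn(4, 8)

# Expression as quoted from the code: 1 + 2*log(sigma) - mu^2 - sigma^2
repo_term = torch.ones_like(mu) + 2 * log_sigma - (mu ** 2) - (torch.exp(log_sigma) ** 2)

# Standard closed form from Kingma & Welling (Appendix B), written in terms
# of the log variance: 1 + log(sigma^2) - mu^2 - sigma^2
log_variance = 2 * log_sigma
standard_term = torch.ones_like(mu) + log_variance - (mu ** 2) - torch.exp(log_variance)

print(torch.allclose(repo_term, standard_term))  # True

# Either way, KL(q(z|x) || N(0, I)) per example is -0.5 times the sum
# over latent dimensions.
kl = -0.5 * repo_term.sum(dim=-1)
```

So the two expressions compute the same quantity; the difference is only whether `log_sigma` is interpreted as the log standard deviation or the log variance.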

@xiyan524
Author

Thanks a lot~
