Hi, thanks for your excellent work.
I have a question about the formula for kl_divergence. In the code, the formula is:
kl_divergence = torch.ones_like(mu) + 2 * log_sigma - (mu ** 2) - (torch.exp(log_sigma) ** 2)
while I think the standard formula is:
kl_divergence = torch.ones_like(mu) + log_sigma - (mu ** 2) - torch.exp(log_sigma)
I'm curious about this discrepancy. Could anyone help clarify?
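(For reference, the two expressions compute the same closed-form KL term under different conventions for what log_sigma stores: the first matches log_sigma = log σ, the second matches log_sigma = log σ². A minimal numerical check of that equivalence, with the variable names here being illustrative rather than taken from the repository:)

```python
import torch

torch.manual_seed(0)
mu = torch.randn(5)
sigma = torch.rand(5) + 0.1          # positive standard deviations

log_std = torch.log(sigma)           # convention: log_sigma = log(sigma)
log_var = torch.log(sigma ** 2)      # convention: log_sigma = log(sigma^2)

# First expression, evaluated under the log-std convention:
kl_first = torch.ones_like(mu) + 2 * log_std - mu ** 2 - torch.exp(log_std) ** 2
# Second ("standard") expression, evaluated under the log-variance convention:
kl_second = torch.ones_like(mu) + log_var - mu ** 2 - torch.exp(log_var)

print(torch.allclose(kl_first, kl_second))  # True: same quantity, different parameterizations
```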
This formula is the closed-form KL divergence term in the ELBO objective. The model assumes the topic-proportion vectors are distributed according to a multivariate Gaussian; this closed-form term penalizes the VAE's approximate posterior for straying too far from the standard normal prior.
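As a sketch of where this term sits in the ELBO (not the repository's actual code; the function name and the assumption that log_sigma stores log σ are mine):

```python
import torch

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), assuming log_sigma = log(std):
    #   0.5 * sum( mu^2 + sigma^2 - 1 - log(sigma^2) )
    return -0.5 * torch.sum(
        torch.ones_like(mu) + 2 * log_sigma - mu ** 2 - torch.exp(log_sigma) ** 2,
        dim=-1,
    )

# When the approximate posterior equals the prior (mu = 0, sigma = 1), the penalty is zero.
mu = torch.zeros(3, 10)
log_sigma = torch.zeros(3, 10)
print(kl_to_standard_normal(mu, log_sigma))  # tensor([0., 0., 0.])
```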