In the code, you define the discriminator loss as:
D_loss = tf.reduce_mean(D_real) - tf.reduce_mean(D_fake)
However, the generator loss is:
G_loss = -tf.reduce_mean(D_fake)
I think the generator loss should maybe be G_loss = tf.reduce_mean(D_fake), i.e. we should remove the negative sign.
According to the algorithm in the original paper, the loss is the following:
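For reference, here is a toy sketch of one training step with the sign conventions from Algorithm 1 of the WGAN paper (Arjovsky et al., 2017), as I read it. The linear critic f(x) = w*x, the shift-only generator, and all the constants are made up for illustration; only the clipping bound c = 0.01 comes from the paper. This is not the repository's TensorFlow code:

```python
import numpy as np

rng = np.random.default_rng(0)
lr = 0.05  # made-up learning rate for the toy example

# Toy linear critic f(x) = w * x and generator g(z) = theta + z.
w, theta = 0.5, -1.0
real = rng.normal(2.0, 0.1, size=64)   # stand-in for the data distribution
z = rng.normal(0.0, 0.1, size=64)      # latent noise

# Critic step (paper): gradient ASCENT on mean(f(real)) - mean(f(fake)).
fake = theta + z
w += lr * (real.mean() - fake.mean())  # d/dw of mean(w*real) - mean(w*fake)
w = np.clip(w, -0.01, 0.01)            # weight clipping with c = 0.01, as in the paper

# Generator step (paper): gradient DESCENT on -mean(f(g(z))),
# which matches the repo's G_loss = -tf.reduce_mean(D_fake).
# d/dtheta of -mean(w * (theta + z)) is -w:
theta -= lr * (-w)
```

Note the generator descends the *negative* critic score of its samples, so the minus sign in G_loss agrees with the paper's algorithm: the critic pushes real scores up and fake scores down, and the generator then pushes its fake scores back up.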
Is there any reason for specifying the loss like this and then minimizing its negative? These three options should all be equivalent, correct?
1. As currently implemented.
2. As suggested in the first post: remove the minus sign in the generator loss, and then let both networks minimize their defined losses (without any minus signs).
3. The losses as currently defined, but let the discriminator maximize D_loss (without the minus).
Or are there any practical differences between these three options?
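To see why options 1 and 3 produce the same critic update, here is a toy sketch in plain NumPy (a hypothetical linear critic f(x) = w*x with made-up data, not the repo's code) showing that one SGD step minimizing -D_loss is identical to one gradient-ascent step on D_loss:

```python
import numpy as np

real = np.array([1.0, 2.0, 3.0])  # made-up "real" scores' inputs
fake = np.array([0.5, 0.5, 0.5])  # made-up generator outputs
lr = 0.1

def d_loss_grad(w):
    # D_loss = mean(w * real) - mean(w * fake); its gradient w.r.t. w:
    return real.mean() - fake.mean()

w0 = 0.3
# Option 1 (as implemented): minimize -D_loss  ->  w := w - lr * d(-D_loss)/dw
w_min_neg = w0 - lr * (-d_loss_grad(w0))
# Option 3: maximize D_loss directly           ->  w := w + lr * dD_loss/dw
w_max = w0 + lr * d_loss_grad(w0)

assert np.isclose(w_min_neg, w_max)  # identical critic updates
```

Option 2 flips the critic's sign (it learns -f instead of f), but since the generator's objective is flipped in the same way, the two-player game is unchanged; with a sign-symmetric optimizer (plain SGD, RMSProp, or Adam) and the symmetric weight-clipping interval, the training dynamics should match as well.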
generative-models/GAN/wasserstein_gan/wgan_tensorflow.py
Lines 82 to 83 in c790d2c
D_loss = tf.reduce_mean(D_real) - tf.reduce_mean(D_fake)
G_loss = -tf.reduce_mean(D_fake)