A simple PyTorch implementation of Ian Goodfellow's original Generative Adversarial Network on the MNIST dataset.
A basic implementation of Goodfellow's original GAN architecture, along with the minimax optimization objective outlined in his 2014 paper introducing GANs.
The primary intuition here is that the discriminator tries to maximize the probability of correctly classifying real images as real and fake images as fake, while the generator tries to minimize the probability of the discriminator getting it right (i.e. to fool it).
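As a rough sketch of how that objective usually translates into PyTorch (the layer sizes, optimizers, and hyperparameters here are placeholders, not necessarily what this repo uses; the generator step uses the non-saturating loss suggested in the paper rather than the raw minimax form):

```python
import torch
import torch.nn as nn

latent_dim, img_dim, batch_size = 100, 28 * 28, 64  # placeholder sizes

# Toy fully-connected generator and discriminator
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_imgs = torch.rand(batch_size, img_dim)   # stand-in for a real MNIST batch
real_labels = torch.ones(batch_size, 1)
fake_labels = torch.zeros(batch_size, 1)

# Discriminator step: maximize log D(x) + log(1 - D(G(z)))
z = torch.randn(batch_size, latent_dim)
fake_imgs = G(z)
d_loss = bce(D(real_imgs), real_labels) + bce(D(fake_imgs.detach()), fake_labels)
opt_D.zero_grad()
d_loss.backward()
opt_D.step()

# Generator step (non-saturating form): maximize log D(G(z))
g_loss = bce(D(fake_imgs), real_labels)
opt_G.zero_grad()
g_loss.backward()
opt_G.step()
```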
Initial noise sampled from a standard normal distribution
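For reference, sampling that noise in PyTorch is a one-liner (batch size and latent dimension are placeholder values):

```python
import torch

batch_size, latent_dim = 64, 100  # placeholder sizes
z = torch.randn(batch_size, latent_dim)  # standard normal noise fed to the generator
```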
Around this point, I was a bit concerned that it might be heading into mode collapse, since almost all of the generated digits resembled '1'.
I had set the number of epochs to 100, but training was taking too long and I did not notice any major improvement after a point, so I stopped it early. I'm not sure whether it converged; I'll try plotting some learning curves to see how the losses change over epochs to get a better idea.
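A minimal sketch of what that plotting could look like, assuming per-epoch generator and discriminator losses were collected in lists during training (the function and argument names here are hypothetical):

```python
import matplotlib.pyplot as plt

def plot_learning_curves(g_losses, d_losses):
    """Plot per-epoch generator and discriminator losses (lists of floats)."""
    epochs = range(1, len(g_losses) + 1)
    plt.plot(epochs, g_losses, label="Generator loss")
    plt.plot(epochs, d_losses, label="Discriminator loss")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend()
    plt.savefig("learning_curves.png")
```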