This is a great write-up, thanks! I had a question about your explanation of DCGAN. You mentioned that if D labels a generated example incorrectly, we then show it a real example and learn by comparing how the generated one differs from the real one (i.e., some loss based on the pixel difference between the images). However, from reading up on GANs, I found that D and G play a min-max game, and these GAN papers, including DCGAN, specifically mention that "One can additionally argue that their learning process and the lack of a heuristic cost function (such as pixel-wise independent mean-square error) are attractive to representation learning."

My understanding is that D does not learn by comparing real and fake images pixel-by-pixel, but by back-propagating its classification loss. D receives real and generated samples and, for each, outputs a probability p that the sample is real. Its loss on a real sample is -log(p) and on a fake sample is -log(1 - p), so D is penalized whenever it assigns probability mass to the wrong class, not only when it is "incorrect" outright. These losses are back-propagated through D. G is updated whenever D scores generated samples: in the original minimax formulation G minimizes log(1 - p), and in practice G usually minimizes the non-saturating loss -log(p) instead, so G is penalized most when D confidently labels its samples as fake. G is updated on every generated batch, not only when D classifies correctly.
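To make the point concrete, here is a minimal sketch of those two loss functions (helper names `d_loss`/`g_loss` are mine, not from the paper), using the standard cross-entropy form and the non-saturating generator loss:

```python
import math

def d_loss(p_real, p_fake):
    # Discriminator loss for one real and one fake sample:
    # minimize -[log D(x) + log(1 - D(G(z)))].
    # p_real = D's output on a real sample, p_fake = D's output on a fake one.
    return -(math.log(p_real) + math.log(1.0 - p_fake))

def g_loss(p_fake):
    # Non-saturating generator loss: minimize -log D(G(z)).
    # Smallest when D is fooled (p_fake near 1), largest when D
    # confidently rejects the sample (p_fake near 0).
    return -math.log(p_fake)
```

Note that both losses are nonzero even when D's classification is "correct"; a confident-but-right D still gets a small gradient, which is what drives the continuous min-max game rather than an update-only-on-errors scheme.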

Machine Learning is Fun Part 7: Abusing Generative Adversarial Networks to Make 8-bit Pixel Art

Adam Geitgey
