Deep Convolutional GANs

Vishal Sinha
Published in Analytics Vidhya · 3 min read · Sep 20, 2019

This article covers the Deep Convolutional Generative Adversarial Network (DCGAN) and a PyTorch implementation of the paper "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", used here to generate fake images from random noise. The paper is one of the best for understanding the concepts and workings of GANs.

Summary

  1. The discriminator is trained on real images, which are labelled as real.
  2. Random noise is passed to the generator to produce images; these generated images are then passed to the discriminator and labelled as fake.
  3. The two steps above train the discriminator.
  4. Meanwhile, the generator treats the images it produces as real, and in this way the generator is trained.
  5. This clash between the generator and the discriminator is called a minimax game.
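The steps above can be sketched as a single training iteration. This is a minimal illustration of the minimax scheme using tiny stand-in networks (the shapes, batch size, and learning rate here are placeholders, not the article's actual models):

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; the real DCGAN models are convolutional.
G = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 16), nn.Tanh())
D = nn.Sequential(nn.Linear(16, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

real = torch.rand(8, 16)                 # stand-in for a batch of real images
noise = torch.randn(8, 100)
ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

# Steps 1-3: train the discriminator -- real images labelled 1, fakes labelled 0
fake = G(noise)
d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Step 4: train the generator -- it wants its fakes to be labelled real (1)
g_loss = bce(D(fake), ones)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Detaching `fake` in the discriminator step keeps the discriminator update from flowing gradients back into the generator.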

Network Architecture

Generator


In this network, random noise of shape 1 × 1 × 100 (100 channels) is fed as the input. The input is then passed through four blocks of CT-BN-R (ConvTranspose-BatchNorm-ReLU), each with different parameters. The final block applies a transpose convolution followed by a tanh activation.

Generator Network

This is the PyTorch implementation of the generator architecture described above. A transpose convolution upsamples the feature map in every block, making this a simple architecture that is easy to implement: random noise of shape 1 × 1 × 100 is upsampled into a 64 × 64 × 3 image.
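A sketch of such a generator, with channel widths following the DCGAN paper's standard configuration (the `ngf` base width of 64 is the paper's default, not something stated in this article):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Four ConvTranspose-BatchNorm-ReLU blocks, then ConvTranspose + tanh."""
    def __init__(self, nz=100, ngf=64, nc=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # 1x1  -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 8x8  -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),           # 32x32 -> 64x64
            nn.Tanh(),                                                   # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(1, 100, 1, 1)   # 1 x 1 spatial noise with 100 channels
img = Generator()(z)            # shape (1, 3, 64, 64)
```

Each stride-2 transpose convolution doubles the spatial size, which is how 1 × 1 noise grows into a 64 × 64 image.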

Discriminator

Discriminator Network

This is the PyTorch implementation of the discriminator as given in the reference paper. In the first block, the image is passed through only a convolution and a leaky ReLU; it then passes through three Conv-BatchNorm-LeakyReLU blocks. The final block applies a convolution followed by a sigmoid. Dropout is also added so that the discriminator does not dominate the generator.
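A sketch of that discriminator. The channel widths mirror the generator; the dropout rate of 0.3 is an assumption for illustration, since the article does not specify one:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Conv + LeakyReLU, three Conv-BatchNorm-LeakyReLU blocks, then Conv + sigmoid."""
    def __init__(self, nc=3, ndf=64, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),                   # 64 -> 32
            nn.LeakyReLU(0.2, inplace=True), nn.Dropout(p_drop),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),              # 32 -> 16
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True), nn.Dropout(p_drop),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),          # 16 -> 8
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True), nn.Dropout(p_drop),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),          # 8 -> 4
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True), nn.Dropout(p_drop),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),                # 4 -> 1
            nn.Sigmoid(),                                               # probability of "real"
        )

    def forward(self, x):
        return self.net(x).view(-1, 1)

score = Discriminator()(torch.randn(2, 3, 64, 64))   # shape (2, 1), values in [0, 1]
```

The stride-2 convolutions invert the generator's upsampling, collapsing a 64 × 64 image to a single real/fake probability.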

Loss Function

Adversarial Loss(Discriminator Loss)

Adversarial Loss

Real images are passed into the discriminator and its output is called disc out; the cross entropy between the real label and disc out is calculated. Then noise is fed into the generator, the generated images are passed into the discriminator, and its output is called fake out; the cross entropy between the fake label and fake out is calculated. The sum of these two losses is called the adversarial loss.
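In code, that loss looks like the following. The `disc_out` and `fake_out` values here are made-up scores standing in for actual discriminator outputs:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

# Hypothetical discriminator scores (would come from D(real) and D(G(noise)))
disc_out = torch.tensor([[0.9], [0.8], [0.7], [0.95]])   # scores for real images
fake_out = torch.tensor([[0.2], [0.1], [0.3], [0.05]])   # scores for generated images

real_label = torch.ones(4, 1)    # real images should score 1
fake_label = torch.zeros(4, 1)   # generated images should score 0

# Adversarial (discriminator) loss: push disc_out toward 1 and fake_out toward 0
d_loss = bce(disc_out, real_label) + bce(fake_out, fake_label)
```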

Generator Loss

For this loss, the generated images are treated as real, so the cross entropy between fake out and the real label is calculated. The images that were considered fake in the adversarial loss are now considered real in the generator loss. This is why the setup is often called a minimax game or zero-sum game.
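The generator side reuses the same fake scores but flips the target label. Again, `fake_out` is a made-up stand-in for actual discriminator output on generated images:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

# Hypothetical discriminator scores for generated images, i.e. D(G(noise))
fake_out = torch.tensor([[0.2], [0.1], [0.3], [0.05]])
real_label = torch.ones(4, 1)

# Generator loss: the same fake_out is now compared against the REAL label --
# the generator wins when the discriminator is fooled into outputting 1
g_loss = bce(fake_out, real_label)
```

Note the zero-sum structure: scores that lower the discriminator's loss raise the generator's, and vice versa.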

For optimization, the Adam optimizer is used with a learning rate of 0.0002.
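Setting this up in PyTorch is a one-liner. The `betas=(0.5, 0.999)` value follows the DCGAN paper's recommendation rather than anything stated in this article, and the linear layer is just a stand-in model:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)   # stand-in for the generator or discriminator

# Adam with the article's learning rate; beta1=0.5 is the DCGAN paper's choice
opt = torch.optim.Adam(model.parameters(), lr=0.0002, betas=(0.5, 0.999))
```

In practice, the generator and discriminator each get their own optimizer instance.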

For the full code, kindly visit the GitHub page linked in the Implementation section below.

Reference

DCGAN Paper

Implementation

Github

