The First GAN

The First Generative Adversarial Nets, 2014

Cilia Madani
Dec 21, 2022
Photo by Steve Johnson on Unsplash

Generative Adversarial Networks [paper] are a framework introduced in 2014 for training generative models via an adversarial process, an idea inspired by game theory.

Main parts

More specifically, GANs set up a zero-sum game between two players:

  1. Generator: captures the distribution of the training data and generates new samples from it.
  2. Discriminator: distinguishes the real data from the fake data produced by the generator.

A zero-sum game means that one player’s gain is the other player’s loss. Here, the generator gains by misleading the discriminator, and the discriminator gains by not being misled.

Training

In the paper, the generator G and the discriminator D are both multilayer perceptrons trained against each other using backpropagation. We start by defining a prior p(z) on input noise variables z. The generator draws a noise sample z from p(z) and maps it to data space via G(z). The discriminator then takes a point x in data space and outputs a single scalar D(x), which represents the probability that x came from the real data rather than from the generator. D is trained to maximize the probability of assigning the correct label (real, generated) to both kinds of samples, and simultaneously, G is trained to minimize

log(1 − D(G(z))).
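Putting both objectives together, the paper frames training as a two-player minimax game over a value function V(D, G):

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(G(z))\big)\big]
```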

Algorithm: Minibatch stochastic gradient descent

🔨🔧 Hyperparameters

  • n: number of training iterations
  • k: number of discriminator steps per training iteration (k = 1 in the paper)
  • m: minibatch size
  • Optimizer: momentum

For n iterations do:
    For k steps do:
        • Sample a minibatch of m noise samples from the noise prior p(z)
        • Sample a minibatch of m examples from the data distribution
        • Update the discriminator by ascending its stochastic gradient
    End For
    • Sample a minibatch of m noise samples from the noise prior p(z)
    • Update the generator by descending its stochastic gradient
End For
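To make the loop concrete, here is a minimal PyTorch sketch of the same procedure on toy 1-D data. The network sizes, learning rate, iteration count, and the Gaussian “real” data are my own illustrative assumptions, not values from the paper; only the structure of the updates (k discriminator steps, then one generator step, with momentum SGD) follows the algorithm above.

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: "real" data is 1-D samples from N(4, 1).
# Dimensions, learning rate, and iteration counts are illustrative
# assumptions, not values from the paper.
noise_dim, data_dim, m = 8, 1, 64     # m = minibatch size
n_iterations, k = 2000, 1             # k = 1 as in the paper

G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(),
                  nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                  nn.Linear(32, 1), nn.Sigmoid())  # D(x) in (0, 1)

opt_G = torch.optim.SGD(G.parameters(), lr=0.01, momentum=0.9)
opt_D = torch.optim.SGD(D.parameters(), lr=0.01, momentum=0.9)

def real_batch():
    return torch.randn(m, data_dim) + 4.0  # samples from the "data" distribution

def noise_batch():
    return torch.randn(m, noise_dim)       # samples from the noise prior p(z)

eps = 1e-8  # numerical safety inside the logs

for _ in range(n_iterations):
    # k discriminator steps: ascend log D(x) + log(1 - D(G(z)))
    for _ in range(k):
        x, z = real_batch(), noise_batch()
        d_gain = torch.log(D(x) + eps).mean() + \
                 torch.log(1 - D(G(z).detach()) + eps).mean()
        opt_D.zero_grad()
        (-d_gain).backward()  # minimizing the negative = ascending the gain
        opt_D.step()

    # one generator step: descend log(1 - D(G(z)))
    z = noise_batch()
    g_loss = torch.log(1 - D(G(z)) + eps).mean()
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

As a practical note, the paper also suggests training G to maximize log D(G(z)) instead of minimizing log(1 − D(G(z))), because the latter saturates early in training when D confidently rejects the generator’s samples.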

The goal of training this framework is to reach the global optimum of the minimax game, which is attained when the generator’s distribution matches the data distribution. At that point the generator produces samples indistinguishable from the real data, so the discriminator can no longer tell the two classes (real, fake) apart and assigns a probability of 0.5 to every input.
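The paper makes this precise: for a fixed generator, the optimal discriminator has a closed form, and substituting a perfectly matched generator distribution p_g = p_data into it yields exactly the 1/2 above:

```latex
D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)},
\qquad p_g = p_{\text{data}} \;\Rightarrow\; D^*_G(x) = \tfrac{1}{2}
```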

In the next articles:

  • Conditional GANs
  • DCGANs
  • StyleGANs
  • CycleGANs

