Introduction to the GAN of the Week

Alexander Osipenko
Published in Cindicator
3 min read · Aug 15, 2018

GAN of the Week is a series of notes about Generative Models, including GANs and Autoencoders. Every week I’ll review a new model to help you keep up with these rapidly developing types of Neural Networks.

This week's GAN of the Week is a simple GAN.

For the sake of introduction, let's build the simplest GAN possible using Keras and TensorFlow. But first…

What is a GAN, anyway?

GANs (generative adversarial networks) were introduced by Ian Goodfellow in 2014, and they have become more and more popular over time. At a high level, it's a very cool concept in which two neural networks compete with each other. One of them takes random noise and tries to generate something similar to the input data; this is the so-called Generator. The other takes the fake data produced by the Generator together with real input data and learns to distinguish between them; this is the so-called Discriminator. During training, the Generator becomes more and more skilful at generating fake data, and the Discriminator becomes better at classifying real and fake data. In the process, the networks extract important features from the data.

GAN workflow

Some of the math behind it

The Generator and the Discriminator are trained separately, but as parts of the same neural network.

We run k steps of Discriminator training, and in each step the parameters move in the direction that reduces the cross-entropy.
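The equation embedded in the original post doesn't survive here; following Goodfellow et al. (2014), this k-step Discriminator update can be written as ascending the stochastic gradient (which is equivalent to minimizing the cross-entropy between the Discriminator's predictions and the real/fake labels):

$$\nabla_{\theta_d} \, \frac{1}{m} \sum_{i=1}^{m} \left[ \log D\left(x^{(i)}\right) + \log\left(1 - D\left(G\left(z^{(i)}\right)\right)\right) \right]$$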

where D is the Discriminator, G is the Generator, x is the input data, and z is a sample of random noise from some normal distribution p(z).

Then we update the Generator's parameters in the direction that increases the logarithm of the probability that the Discriminator will mistake fake data for real.
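Again reconstructing the missing equation from Goodfellow et al. (2014), in the non-saturating form used in practice this Generator update ascends:

$$\nabla_{\theta_g} \, \frac{1}{m} \sum_{i=1}^{m} \log D\left(G\left(z^{(i)}\right)\right)$$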

where D is the Discriminator, G is the Generator, and z is a sample of random noise from some normal distribution p(z).

What can we do with GANs?

Currently, GANs are very popular for image generation and real-time video processing, but the general idea is still very fresh, so new use cases come out every week. In this series, we will try to explore them!

Practical part

So, as promised, let's build the simplest GAN to generate images. We will use the MNIST dataset (because it feels like there are not enough examples with the MNIST dataset yet). We will try much more interesting datasets in the future, I promise.
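The code was embedded as a gist in the original post and doesn't survive here. As a sketch of the data preparation such a script would typically start with: the MNIST images are flattened and rescaled from [0, 255] to [-1, 1] so they match a tanh activation on the Generator's output (the `preprocess` helper name is my own):

```python
import numpy as np

def preprocess(images):
    """Flatten 28x28 uint8 MNIST images and rescale pixels to [-1, 1].

    The [-1, 1] range matches the tanh activation commonly used on the
    Generator's output layer.
    """
    images = images.reshape(len(images), 28 * 28).astype("float32")
    return images / 127.5 - 1.0

# In the full script, the data would come from Keras, e.g.:
# (x_train, _), _ = tensorflow.keras.datasets.mnist.load_data()
# x_train = preprocess(x_train)
```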

Training process
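The training loop itself was also an embedded gist. Here is a minimal, self-contained sketch of the classic Keras pattern it describes, alternating a Discriminator step and a Generator step: the layer sizes, optimizer settings, and the `train_step` helper are my own illustrative choices, deliberately shrunk so the example runs quickly.

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input, LeakyReLU
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam

LATENT_DIM = 16     # size of the noise vector (100 is a common choice)
IMG_DIM = 28 * 28   # flattened MNIST image

# Generator: random noise -> fake flattened image in [-1, 1]
generator = Sequential([
    Input(shape=(LATENT_DIM,)),
    Dense(32), LeakyReLU(0.2),
    Dense(IMG_DIM, activation="tanh"),
])

# Discriminator: flattened image -> probability that it is real
discriminator = Sequential([
    Input(shape=(IMG_DIM,)),
    Dense(32), LeakyReLU(0.2),
    Dense(1, activation="sigmoid"),
])
discriminator.compile(loss="binary_crossentropy", optimizer=Adam(2e-4))

# Stacked model for the Generator update: with the Discriminator frozen,
# training this model end-to-end only adjusts the Generator's weights.
discriminator.trainable = False
z = Input(shape=(LATENT_DIM,))
stacked = Model(z, discriminator(generator(z)))
stacked.compile(loss="binary_crossentropy", optimizer=Adam(2e-4))

def train_step(real_batch):
    """One alternating update: Discriminator first, then Generator."""
    n = len(real_batch)
    noise = np.random.normal(size=(n, LATENT_DIM))
    fake_batch = generator.predict(noise, verbose=0)

    # Discriminator step: push real images toward 1, fakes toward 0
    discriminator.trainable = True
    d_loss_real = discriminator.train_on_batch(real_batch, np.ones((n, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_batch, np.zeros((n, 1)))
    discriminator.trainable = False

    # Generator step: try to make the frozen Discriminator output 1 on fakes
    noise = np.random.normal(size=(n, LATENT_DIM))
    g_loss = stacked.train_on_batch(noise, np.ones((n, 1)))
    return 0.5 * (d_loss_real + d_loss_fake), g_loss
```

Looping `train_step` over shuffled batches of preprocessed MNIST images for a few thousand steps, and periodically saving samples from `generator.predict`, is all it takes to watch the Generator improve.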

Results

As a result, I made a classic gif. You can see how the model starts from random noise and gradually learns to generate images that look like handwritten digits.

References:

  1. Generative Adversarial Networks

And what have you tried with GANs?
