Day 23 of 100DaysofML

Charan Soneji · Published in 100DaysofMLcode · Jul 9, 2020

Generative Adversarial Networks, or GANs. This is one of the commonly asked interview questions for individuals applying for a Masters in ML, and since I got familiar with the term only recently, I thought of putting it out there.

So what are GANs?
GANs, or Generative Adversarial Networks, are unsupervised learning models that learn from the data available to them and generate results that are very similar to the training data they were given. However, the model builds this output itself instead of blindly returning the training data as output.
Think of the MNIST dataset (where we give the model different handwritten digits); at the end of training our GAN, the model will be able to produce its own handwritten digits (for each label 1, 2, 3, 4, 5, …). A very simple workflow of a GAN is shown in the diagram below:

GANs for MNIST

So GANs consist of neural networks, but here’s the thing: they don’t contain just one NN, they contain two. One of the models is called the Generator and the other one is called the Discriminator. The Generator takes in random noise and creates samples of data out of it, whereas the Discriminator decides whether a sample was taken from the real data (which was given as input) or produced by the Generator. This is set up like a binary classification problem using a sigmoid output.
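To make the two networks concrete, here is a minimal sketch, assuming TensorFlow/Keras, flattened 28x28 MNIST images, and a 100-dimensional noise vector (all of these are illustrative choices, not something fixed by the GAN framework):

```python
# A minimal sketch (assuming TensorFlow/Keras, MNIST-sized 28x28 images,
# and a 100-dimensional noise vector; all illustrative choices).
from tensorflow.keras import layers, models

NOISE_DIM = 100  # size of the random noise vector fed to the Generator

def build_generator():
    # Maps random noise to a flattened 28x28 "fake" image.
    return models.Sequential([
        layers.Dense(256, activation="relu", input_shape=(NOISE_DIM,)),
        layers.Dense(512, activation="relu"),
        layers.Dense(28 * 28, activation="tanh"),  # pixel values in [-1, 1]
    ])

def build_discriminator():
    # Binary classifier: real (1) vs. generated/fake (0), sigmoid output.
    return models.Sequential([
        layers.Dense(512, activation="relu", input_shape=(28 * 28,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
```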

How exactly do GANs work?
Have a look at the diagram below:

GAN working

Starting from the training data, our generator model is used to prepare a prediction, which is then given to our discriminator network. After this step, binary classification is applied to determine whether each sample is real or fake. The entire GAN can be summarized in one formula:
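This is the minimax value function from the original GAN paper (Goodfellow et al., 2014), where D is the Discriminator, G the Generator, x a real sample and z a noise vector:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```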

How do you train a GAN?
Training usually happens in 2 phases, and this is where I feel there is a disadvantage to the GAN setup. Here is why. In the first phase, we train the discriminator and freeze the generator, which means the generator is marked as non-trainable and only does a forward pass with no backward pass. Since backpropagation is not applied to it, the frozen network cannot improve during that phase. Here’s the catch.

In the second phase of training, we train the generator and freeze the discriminator. In this phase, we take the results from the first phase and use them to improve on the previous state so the generator can fool the discriminator better.
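In Keras-style code, this freezing is often done by marking the discriminator as non-trainable before stacking it behind the generator. The snippet below is only a sketch continuing the hypothetical models above; `build_generator` and `build_discriminator` are the helpers from the earlier example.

```python
# Sketch of the two-phase setup (Keras, continuing the models defined above).
from tensorflow.keras import models

generator = build_generator()
discriminator = build_discriminator()

# Phase 1: the discriminator is trained on its own, so it is compiled
# while still trainable.
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Phase 2: freeze the discriminator and stack it behind the generator,
# so training the combined model updates only the generator's weights.
discriminator.trainable = False
combined = models.Sequential([generator, discriminator])
combined.compile(optimizer="adam", loss="binary_crossentropy")
```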

So, finally, I shall summarize the key steps for approaching a problem using a GAN.

  1. Define the problem: You need to be able to define the problem based on the requirements given to you and collect all the relevant data that will help you train your model.
  2. Choose your architecture: You need to choose your GAN architecture so you understand how your GAN is going to work.
  3. Train the Discriminator on real data: Here, we take real samples from the training data (true and original, not produced by the Generator) and pass them to our Discriminator network.
  4. Generate fake inputs with the Generator: Here, we feed random noise to the Generator so it produces the fake samples used in the next step.
  5. Train the Discriminator on fake data: We now pass these fake samples to our Discriminator so it learns to classify them as fake.
  6. Train the Generator with the output of the Discriminator: After the real and fake data have been classified, we retrain our Generator using the Discriminator’s feedback so that its fakes become harder to tell apart from the real data. A minimal loop combining steps 3 to 6 is sketched after this list.
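Putting steps 3 to 6 together, an illustrative training loop might look like the sketch below. It assumes the `generator`, `discriminator`, and `combined` models from the earlier snippets, plus an `x_train` array of flattened, normalized MNIST images; the batch size and noise distribution are assumptions, not requirements.

```python
# Illustrative training loop following steps 3-6 (hyperparameters are assumptions).
import numpy as np

BATCH_SIZE = 64

def train_step(x_train):
    # Step 3: train the Discriminator on a batch of real images (label = 1).
    real_images = x_train[np.random.randint(0, x_train.shape[0], BATCH_SIZE)]
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((BATCH_SIZE, 1)))

    # Step 4: generate fake images by feeding random noise to the Generator.
    noise = np.random.normal(0, 1, (BATCH_SIZE, NOISE_DIM))
    fake_images = generator.predict(noise, verbose=0)

    # Step 5: train the Discriminator on the fake batch (label = 0).
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((BATCH_SIZE, 1)))

    # Step 6: train the Generator through the combined model; the Discriminator
    # is frozen there, and the fakes are labelled "real" (1) so the Generator
    # learns to fool it.
    noise = np.random.normal(0, 1, (BATCH_SIZE, NOISE_DIM))
    g_loss = combined.train_on_batch(noise, np.ones((BATCH_SIZE, 1)))
    return d_loss_real, d_loss_fake, g_loss
```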

Now, the implementation of a GAN can be quite overwhelming and confusing, but you can understand it using a number of tutorials out there. To write this blog, I used a bit of help from Edureka tutorials as well as a few other videos I found quite useful on YouTube. The videos are mentioned below:

This explanation is really good. I do recommend y’all give it a shot. That’s it for today. Keep Learning.

Cheers.
