GANs with MEMEs

Harikrishnareddy
4 min read · Mar 14, 2022
Source: memegenerator.net

Generative Adversarial Networks (GANs)

The idea here is that we don't explicitly model the density or the underlying distribution. Instead, we try to generate new instances that are similar to the data, improving over iterations. This lets us sample from very complex distributions that cannot be learned or modeled directly; instead, we build an approximation of the input data distribution.

To achieve this, GANs build a generative model out of two neural networks, a Generator (G) and a Discriminator (D), which compete with each other.

Generator (G) - turns noise into an imitation of the data and tries to trick the discriminator.

Discriminator (D) - tries to tell real data apart from the fakes created by the generator.
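To make this concrete, here is a minimal sketch of the two networks in TensorFlow/Keras. The layer sizes, the 100-dimensional noise vector, and the flattened 28×28 image shape are illustrative assumptions, not fixed by any particular paper:

```python
import tensorflow as tf

NOISE_DIM = 100  # assumed size of the input noise vector

# Generator: turns a noise vector into a flattened 28x28 "image"
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(NOISE_DIM,)),
    tf.keras.layers.Dense(28 * 28, activation="tanh"),  # pixel values in [-1, 1]
])

# Discriminator: maps a flattened image to the probability that it is real
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(28 * 28,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(real)
])

z = tf.random.normal([16, NOISE_DIM])  # a batch of 16 noise vectors
fake_images = generator(z)             # G turns noise into imitations
scores = discriminator(fake_images)    # D scores them, shape (16, 1)
```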

Intuition behind GANs

G starts from noise and tries to create an imitation of the real data. The discriminator tries to predict what's real and what's fake by outputting the probability of real vs. not real.

Now G takes feedback on where the real data lies as input to train, and it tries to improve over the iterative back-and-forth between discriminator and generator.

Over the iterations, it gets harder and harder for the discriminator to effectively distinguish what's real from what's fake.

How do we train GANs?

The generator tries to synthesize fake instances that fool the discriminator, while the discriminator tries to identify the fakes. To actually train, we need to define a loss function: one that sets up competing, adversarial objectives for the generator and the discriminator at each iteration. The global optimum, i.e. the best possible outcome, would mean that the generator perfectly reproduces the true data distribution. When the generator of a GAN synthesizes new instances, it is effectively learning a transformation from a distribution of noise to the target data distribution.
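Written out, this competition is the minimax objective from the original GAN paper (Goodfellow et al., 2014), where D(x) is the discriminator's probability that x is real and G(z) is the generator's output from noise z:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right] +
  \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```

At the global optimum, the generator's distribution matches the data distribution, and the best the discriminator can do is output D(x) = 1/2 everywhere.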

The loss term is based on cross-entropy between the discriminator's predictions and the real/fake labels. In GANs we have two neural networks, and each one has its own loss function.

In the case of the discriminator loss,

maximize the probability that real data is classified as real and that fake data is identified as fake.

In the case of the generator loss,

the generator will minimize the probability that its fakes are identified as fake, because it can't affect D(x) (the discriminator's probability on real data).
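As a sketch, both losses can be written with binary cross-entropy; this follows the common TensorFlow formulation, where real_output and fake_output are the discriminator's scores on a real and a generated batch:

```python
import tensorflow as tf

# Binary cross-entropy on the discriminator's sigmoid outputs
bce = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(real_output, fake_output):
    # D wants real samples scored as 1 and fakes scored as 0
    real_loss = bce(tf.ones_like(real_output), real_output)
    fake_loss = bce(tf.zeros_like(fake_output), fake_output)
    return real_loss + fake_loss

def generator_loss(fake_output):
    # G wants its fakes scored as 1; it cannot touch D's scores on real data
    return bce(tf.ones_like(fake_output), fake_output)
```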

GAN Extension

An extension of the GAN architecture is the idea of conditioning, which imposes some additional structure on the types of output that can be synthesized.

To train a conditional GAN, train both networks simultaneously to maximize the performance of both (a minimal training-step sketch follows the list):

  1. Train the generator to generate data that “fools” the discriminator.
  2. Train the discriminator to distinguish between real and generated data.
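Here is what one such alternating update could look like in TensorFlow. The conditioning label is omitted for brevity, the optimizers and learning rates are assumptions, and generator, discriminator, and the two loss functions are the sketches from above:

```python
gen_opt = tf.keras.optimizers.Adam(1e-4)
disc_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    z = tf.random.normal([tf.shape(real_images)[0], NOISE_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(z, training=True)
        real_output = discriminator(real_images, training=True)
        fake_output = discriminator(fake_images, training=True)
        g_loss = generator_loss(fake_output)                   # 1. fool D
        d_loss = discriminator_loss(real_output, fake_output)  # 2. tell real from fake
    gen_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    disc_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
```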

Difference between GAN and Conditional GAN

In a plain GAN, there is no control over the modes of the data to be generated. The conditional GAN changes that by adding the label y as an additional input to the generator, in the hope that images corresponding to that label are generated. We also add the labels to the discriminator's input so it can distinguish real images better.
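As a sketch of the conditioning itself, one common variant (an assumption here, several others exist) is to one-hot encode the label y and concatenate it with the generator's noise input and with the discriminator's image input; both networks' first layers must then accept the enlarged input size:

```python
NUM_CLASSES = 10  # assumed number of label classes

labels = tf.constant([3, 7, 1])          # example class labels y
y = tf.one_hot(labels, NUM_CLASSES)      # shape (3, NUM_CLASSES)

# Conditional generator input: noise concatenated with the label
z = tf.random.normal([3, NOISE_DIM])
gen_input = tf.concat([z, y], axis=1)    # shape (3, NOISE_DIM + NUM_CLASSES)

# Conditional discriminator input: flattened image plus the same label
images = tf.random.normal([3, 28 * 28])  # stand-in for real or generated images
disc_input = tf.concat([images, y], axis=1)
```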

Beyond paired translation there is unpaired image-to-image translation, which can be achieved with CycleGAN.

This works by introducing a cyclic relationship in the loss function, where we map back and forth between domains X and Y in the system. The idea is to go from a particular data manifold to another data manifold and back again.
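A sketch of that cyclic term, following the CycleGAN paper's L1 formulation (G maps X→Y, F maps Y→X; the networks themselves and the weight lam are assumed to be set up elsewhere):

```python
def cycle_consistency_loss(real_x, real_y, G, F, lam=10.0):
    # X -> Y -> back to X should reconstruct the original x
    cycled_x = F(G(real_x))
    # Y -> X -> back to Y should reconstruct the original y
    cycled_y = G(F(real_y))
    loss = (tf.reduce_mean(tf.abs(real_x - cycled_x)) +
            tf.reduce_mean(tf.abs(real_y - cycled_y)))
    return lam * loss  # added on top of the adversarial losses in both directions
```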

A few of the top ideas in GANs

  1. The idea of Progressive GANs
  2. An architecture improvement called StyleGAN (a combination of progressive growing and style transfer) … and many more…

And to end, a final meme.

Source: imgflip.com

Looking for the code? It is available at Tensorflow.org, Google Developers…

For any suggestions or doubts, you can reach out to me here/there.
