Generative Adversarial Network (Neural Network)

Priyansh Agarwal
5 min read · Mar 22, 2023


  • By Priyansh Agarwal, Enrollment no: 22116073

The first question that pops into our mind is “What is a Neural Network? It looks complicated.” But that is not the case. In simple terms, I would say that a neural network is just like our brain. The way our brain learns over time to perceive different things like a mobile phone or an earphone is the same thing we are trying to do with a machine: we first train it, just as our brains were trained when we were probably a few months old. In computers everything works on commands called algorithms, so a neural network also works on an algorithm that is designed to recognise data/patterns and match them with previously learned data, giving an accurate answer to the problem.

Now the question which comes is “How does a neural network actually work?” Just as we have billions of neurons that help us process information, a machine has nodes (aka artificial neurons) which are interconnected, just like our neurons, across many layers (simple models have 3–4 layers, and complex ones have many more), and which help generate accurate output. A neural network learns from a training dataset, where inputs and the corresponding outputs are given. Each connection is assigned a weight so that the network can give the most accurate answer. So, basically, a weight sets the strength of a node’s signal, which can be related to the synapses between neurons in the brain, whose strength depends upon the amount of signal received. The bias adds an extra value that shifts the output, helping the network give an accurate answer. There are three kinds of layers, i.e. the input layer, the hidden layers and the output layer. Let’s take an example to understand this: the input layer can be associated with what our eyes do whenever we see something (i.e. it acts as a receptor); the hidden layers can be associated with the various neural activities in our brain because of which we are able to process the image (and the number of hidden layers can vary); and the output layer is associated with the image we finally perceive.
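To make the idea of nodes, weights and biases concrete, here is a tiny sketch of a forward pass through one hidden layer in numpy. All the layer sizes and values are made up just for illustration; a real network would be much larger and trained on data.

```python
import numpy as np

# Minimal sketch of a forward pass: input -> hidden layer -> output.
# Sizes (4 inputs, 3 hidden nodes, 1 output) are arbitrary choices.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x = rng.normal(size=(4,))       # input layer: 4 features (our "receptor")
W1 = rng.normal(size=(3, 4))    # weights from input to 3 hidden nodes
b1 = np.zeros(3)                # biases of the hidden layer
W2 = rng.normal(size=(1, 3))    # weights from hidden layer to output
b2 = np.zeros(1)                # bias of the output node

h = sigmoid(W1 @ x + b1)        # hidden activations ("neural activity")
y = sigmoid(W2 @ h + b2)        # final output, a value between 0 and 1

print(y.shape)
```

Each node simply takes a weighted sum of its inputs, adds its bias, and passes the result through an activation function like the sigmoid above.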

Moving forward, let’s learn how the weights and biases are decided upon. Initially these weights are randomly assigned, so the network’s performance is quite horrible. The performance is measured through a function called the cost function: the higher the value of the cost function, the worse the performance of the algorithm. The technique of backpropagation helps us propagate the error backwards and find the error contributed by each of the previous layers. To decrease the cost function, we use its gradient (just as we use the derivative to find a minimum in single-variable calculus) and step the weights in the direction that reduces it towards a minimum. As the number of weights is very large even for very basic neural networks, we represent everything in the form of vectors and then calculate the gradient, which makes our work much more organised and easy. In this way we get the required weights and biases.
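The “move opposite to the derivative” idea can be sketched on a single-variable cost function, just like the calculus analogy above. The cost here, (w − 3)², is an invented toy whose minimum is obviously at w = 3; the learning rate and starting point are arbitrary.

```python
# Toy gradient descent on cost(w) = (w - 3)^2, minimum at w = 3.

def cost(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)   # derivative of the cost with respect to w

w = 10.0                      # randomly assigned starting "weight"
lr = 0.1                      # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)         # step opposite to the gradient

print(round(w, 3))  # converges close to 3.0
```

Real networks do exactly this, only with the gradient computed over millions of weights at once via backpropagation, which is why everything is kept in vector form.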

After having a clear idea about neural networks and how they work, let’s concentrate on the Generative Adversarial Network (GAN). First of all, let us understand the meaning of each word in GAN. Generative refers to the network’s ability to generate new data, such as images. Adversarial refers to the training process of the network, which involves two models (generator and discriminator) competing with each other. Network is used because it is also a neural network. Basically, a GAN has two neural components, i.e. the generator and the discriminator. I would like to explain the working mechanism of the two using an example. Let us say there is a cafe owner (generator) who has just heard that Cafe Coffee Day’s (CCD) coffee is good, so he wants to replicate the coffee in his cafe so that the sales from his store increase. But the students (discriminator) who go there have already tasted the coffee from CCD, so they find out that it is fake, and they give feedback to the cafe owner to make his coffee better. Over time, after receiving multiple rounds of feedback, his coffee tastes like the coffee from CCD, and now the students are not able to distinguish between the two coffees.

In a GAN, what happens is that the generator creates fake data and tries to fool the discriminator. The discriminator’s job is to check whether the given data is real or fake and give an output. In this way both the generator and the discriminator learn simultaneously in the training process and try to improve each time. So from the diagram above we can see that the generator creates a fake sample, while the discriminator already has the real sample. The discriminator tries to distinguish between the two samples and gives an output of 1 (real) or 0 (fake). In this process both of them learn and try to become better at their respective jobs.
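That flow can be sketched in numpy for one adversarial “round”: noise goes into the generator, the discriminator scores real and fake samples between 0 and 1, and each side gets its own loss. All shapes, distributions and parameter values here are invented for illustration; real GANs use deep networks and a framework such as PyTorch.

```python
import numpy as np

# One GAN round: generator makes fakes, discriminator scores real vs fake.

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Real data: samples from a distribution the generator should imitate.
real = rng.normal(loc=4.0, scale=0.5, size=(8, 1))

# Generator: a single linear layer mapping random noise z to a fake sample.
Wg, bg = rng.normal(size=(1, 1)), np.zeros(1)
z = rng.normal(size=(8, 1))           # noise input
fake = z @ Wg.T + bg                  # generator's fake samples

# Discriminator: logistic regression giving P(sample is real), in (0, 1).
Wd, bd = rng.normal(size=(1, 1)), np.zeros(1)
d_real = sigmoid(real @ Wd.T + bd)    # scores for real samples
d_fake = sigmoid(fake @ Wd.T + bd)    # scores for fake samples

# Discriminator loss: push d_real towards 1 and d_fake towards 0.
d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
# Generator loss: fool the discriminator, i.e. push d_fake towards 1.
g_loss = -np.mean(np.log(d_fake))

print(d_loss > 0, g_loss > 0)
```

Training alternates between these two updates: the discriminator descends on `d_loss`, the generator on `g_loss`, and over many rounds the fakes become hard to distinguish from the real samples.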

There exist other types of deep learning models as well, so why use a GAN? GANs are becoming popular because of their ability to generate realistic synthetic data. The advantages a GAN offers over other models include flexibility: it can be used to generate a wide range of outputs such as text, music and images. Unlike many other deep learning models, a GAN can also learn from unlabelled data, i.e. via unsupervised learning, which makes it useful for data synthesis. A GAN uses adversarial training to improve the quality of generated samples, and this process helps both the generator and the discriminator work efficiently.

There are varied applications of the GAN model in today’s world. In Google Maps, we see a hybrid map which can be created using a trained GAN model. GAN models have recently been used in text-to-speech conversion technology. They are also being used for anomaly detection in fields like cyber security, and to generate high-resolution images from low-resolution images. Disadvantages include that a GAN is difficult to train: since there are two networks competing against each other, the training process can be slow. Moreover, a GAN requires a larger dataset so that it can generate accurately synthesised data, and it generally isn’t efficient with a small dataset.

So, I would conclude by saying that although GANs have some disadvantages, in the long run they are going to become an essential tool for data generation and analysis, and they are going to have wide-scale applications because of their capability to generate data similar to real samples.
