Segmentation Using Generative Adversarial Networks

Chirag Bajaj
Sep 6, 2019 · 4 min read


To understand a scene, each piece of visual information has to be associated with an entity while taking spatial context into account. Several related challenges have emerged around truly understanding the actions in an image or a video: keypoint detection, action recognition, video captioning, visual question answering, and so on. Better scene comprehension helps in many fields. For example, an autonomous car needs to delineate the roadsides with high precision in order to drive itself. In robotics, a production machine that has to grab, turn, and assemble two different pieces needs to know the exact shape of each object.

In this blog post, we’ll discuss why segmentation is useful and, in particular, how image segmentation can be done using Generative Adversarial Networks instead of widely used techniques like Mask R-CNN, U-Net, etc.

Generative Adversarial Network (GAN)

GANs have shown great results in many generative tasks that replicate rich real-world content such as images, human language, and music. The framework is inspired by game theory: two models, a generator and a discriminator, compete with each other and make each other stronger in the process.

The architecture of a generative adversarial network.

GAN consists of two models:

  • A generator G learns to produce samples that look as if they came from the real dataset, and is optimized to fool the discriminator.
  • A discriminator D estimates the probability of a given sample coming from the real dataset. It works as a critic and is optimized to tell the fake samples from the real ones.
Loss Function of GAN.
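For reference, the minimax objective from the original GAN paper, where D tries to assign high probability to real samples x and low probability to generated samples G(z), while G tries to fool D:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```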

Image-to-Image Translation

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image. It can be applied to a wide range of applications, such as collection style transfer, object transfiguration, season transfer, and photo enhancement.

The task here is to get accurate segmentation from left image to right image.


pix2pix is one of the most famous and extensively used GAN architectures for image-to-image translation tasks.

pix2pix is a conditional adversarial network designed as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that would traditionally require very different loss formulations. This approach is effective for segmentation, and we no longer have to hand-engineer the mapping function.
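Concretely, the pix2pix paper combines a conditional GAN loss with an L1 reconstruction term, where x is the input image and y the target:

```latex
\mathcal{L}_{cGAN}(G, D) =
  \mathbb{E}_{x,y}\big[\log D(x, y)\big]
+ \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]

\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\big[\lVert y - G(x) \rVert_1\big]

G^{*} = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)
```

The L1 term keeps the output close to the ground truth, while the adversarial term pushes it toward realistic, sharp images.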


The generator’s job is to take an input image and perform a transform to produce the target image. Its encoder-decoder architecture consists of:

encoder: C64-C128-C256-C512-C512-C512-C512-C512

decoder: CD512-CD512-CD512-C512-C256-C128-C64

An example input could be a black-and-white image, with the desired output a colorized version of it. The structure of the generator is called an “encoder-decoder,” and in pix2pix the encoder-decoder looks more or less like this:
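As a minimal PyTorch sketch of the encoder half above (the decoder mirrors it with transposed convolutions), each Ck block is a 4×4 stride-2 convolution with BatchNorm and LeakyReLU(0.2), as in the pix2pix paper:

```python
import torch
import torch.nn as nn

def C(in_ch, out_ch, batchnorm=True):
    """Ck block: 4x4 stride-2 convolution, optional BatchNorm, LeakyReLU(0.2)."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
    if batchnorm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)

# encoder: C64-C128-C256-C512-C512-C512-C512-C512
widths = [64, 128, 256, 512, 512, 512, 512, 512]
blocks, in_ch = [], 3
for i, out_ch in enumerate(widths):
    blocks.append(C(in_ch, out_ch, batchnorm=(i > 0)))  # no BatchNorm on C64
    in_ch = out_ch
encoder = nn.Sequential(*blocks).eval()  # eval: BatchNorm needs >1 value per channel in train mode

x = torch.randn(1, 3, 256, 256)  # one 256x256 RGB input image
z = encoder(x)
print(tuple(z.shape))  # (1, 512, 1, 1): eight halving convolutions shrink 256 to 1
```

Each stride-2 convolution halves the spatial resolution, so eight blocks compress a 256×256 image down to a 1×1×512 bottleneck before the decoder expands it back.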


The discriminator’s job is to take two images, an input image and an unknown image (either the real target or the generator’s output), and decide whether the unknown image was produced by the generator. The 70×70 PatchGAN discriminator architecture from the pix2pix paper is:

C64-C128-C256-C512
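A minimal PyTorch sketch of this 70×70 PatchGAN (layer widths and 4×4 convolutions as in the pix2pix paper; the stride-1 final layers and padding are the common implementation choice):

```python
import torch
import torch.nn as nn

def C(in_ch, out_ch, stride=2, batchnorm=True):
    """Ck block: 4x4 convolution, optional BatchNorm, LeakyReLU(0.2)."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1)]
    if batchnorm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)

# 70x70 PatchGAN: C64-C128-C256-C512, then a 1-channel convolution.
# The input is the conditioning image concatenated with the real/fake image (3+3 channels).
discriminator = nn.Sequential(
    C(6, 64, batchnorm=False),   # 256 -> 128, no BatchNorm on the first layer
    C(64, 128),                  # 128 -> 64
    C(128, 256),                 # 64 -> 32
    C(256, 512, stride=1),       # 32 -> 31
    nn.Conv2d(512, 1, 4, 1, 1),  # 31 -> 30: one logit per 70x70 receptive-field patch
)

pair = torch.cat([torch.randn(1, 3, 256, 256),    # input image
                  torch.randn(1, 3, 256, 256)],   # unknown image (target or generated)
                 dim=1)
patch_logits = discriminator(pair)
print(tuple(patch_logits.shape))  # (1, 1, 30, 30)
```

Instead of one real/fake score per image, the PatchGAN outputs a 30×30 grid of scores, each judging a 70×70 patch of the input pair; the losses are averaged over all patches.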



The following hyperparameters were used during training.
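As a reference point only, these are the defaults reported in the pix2pix paper, written out as a config dict; they are stand-ins and may differ from the exact values used for this run:

```python
# Hypothetical stand-in: pix2pix paper defaults, not necessarily this run's values.
hyperparams = {
    "optimizer": "Adam",
    "learning_rate": 2e-4,
    "beta1": 0.5,        # Adam momentum term used in the paper
    "batch_size": 1,
    "lambda_L1": 100,    # weight on the L1 term in the generator objective
}
print(hyperparams)
```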



Input : Output : Target

With more training and better hyperparameter tuning, the results can be made even more accurate.

Intel Student Ambassadors

Stories from Intel Student Ambassadors working on innovative use cases and projects around Artificial Intelligence, Machine Learning & Deep Learning. Learn More Here:

Chirag Bajaj

Written by

Trying to make neural nets look cool.

