GAN for Photo Editing

Generative Adversarial Networks

Pranoy Radhakrishnan
BuzzRobot
2 min read · Dec 3, 2017

GANs learn a generative model by training one network, the “discriminator,” to distinguish between real and generated data, while simultaneously training a second network, the “generator,” to transform a noise vector into samples which the discriminator cannot distinguish from real data.
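To make this setup concrete, here is a minimal sketch of that training loop, assuming PyTorch. The network sizes, hyperparameters, and stand-in data loader are all illustrative, not a prescription.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 28 * 28

# Generator: maps a noise vector z to a flattened image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: maps an image to the probability that it is real.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

# Stand-in for a real image loader (batches of flattened images in [-1, 1]).
data_loader = [torch.rand(64, img_dim) * 2 - 1 for _ in range(100)]

for real in data_loader:
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train D to label real images 1 and generated images 0.
    fake = G(torch.randn(batch, latent_dim)).detach()  # freeze G for this step
    loss_D = bce(D(real), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train G so that D labels its samples as real.
    loss_G = bce(D(G(torch.randn(batch, latent_dim))), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```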

What can we do with GANs?

If a user has an image of a person with light skin, dark hair, and a widow’s peak, they can paint a dark color on the forehead, and the system will automatically add hair in the requested area.

Similarly, given a photo of a person with a closed-mouth smile, the user can produce a toothy grin by painting bright white over the subject’s mouth.

Let’s look at some examples.

Interactive image generation

The user uses brush tools to generate an image from scratch and then keeps adding scribbles to refine the result. A trained GAN then generates the realistic images that most closely match those strokes.
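One common way to implement this is to search the generator’s latent space for the code whose output best matches the user’s strokes, penalizing mismatch only where the user actually painted. A rough sketch, reusing G and latent_dim from the first snippet; the scribble and mask tensors are stand-ins:

```python
import torch

# Stand-ins for the user's canvas: `scribble` holds the painted colors,
# `mask` is 1 where the user actually painted.
scribble = torch.zeros(1, img_dim)
mask = torch.zeros(1, img_dim)
mask[:, :100] = 1.0

# Optimize a latent code so that G(z) matches the scribbles where painted.
z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    loss = ((G(z) - scribble) ** 2 * mask).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

edited = G(z).detach()  # realistic image closest to the user's strokes
```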

Image-to-image transformation

Clockwise from top left: Unedited image, female version, added smile, hotness filter.

We can train the generator to produce smiles by making smiling faces the real distribution, while the discriminator learns to distinguish real smiles from the generator’s output.
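A common trick for this kind of attribute edit, though not necessarily the method behind the figure above, is latent vector arithmetic: average the latent codes of smiling and non-smiling samples, and move an image’s code along their difference. A sketch with stand-in tensors, again reusing G and latent_dim:

```python
import torch

# Stand-ins: latent codes recovered for labeled smiling / neutral faces,
# plus the code of the image we want to edit.
z_smiling = torch.randn(32, latent_dim)
z_neutral = torch.randn(32, latent_dim)
z_source = torch.randn(latent_dim)

# The difference of the two means points "toward smiling" in latent space.
smile_direction = z_smiling.mean(0) - z_neutral.mean(0)

alpha = 1.0  # edit strength (illustrative)
smiling_face = G((z_source + alpha * smile_direction).unsqueeze(0))
```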

Generative image transformation

An interesting outcome of the editing process is the sequence of intermediate generated images that can be seen as a new kind of image morphing called “generative transformation”.

The source on the left is transformed to have the shape and color of the one on the right.

The source on the left is transformed to have shape of the one on the right.
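If both endpoint images have been projected into the latent space (as in the scribble-matching sketch above), the morphing sequence falls out of simple interpolation between their codes. A sketch, reusing G and latent_dim from the first snippet:

```python
import torch

def morph(G, z_src, z_dst, steps=8):
    """Decode a sequence of frames between two latent codes."""
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_src + t * z_dst  # interpolate in latent space
        frames.append(G(z).detach())
    return frames

frames = morph(G, torch.randn(1, latent_dim), torch.randn(1, latent_dim))
```

For Gaussian latents, spherical interpolation is often preferred over linear, but the linear version shows the idea.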
