Transforming the World Into Paintings with CycleGAN

Implementing a CycleGAN in Keras and TensorFlow 2.0

Sebastian Theiler
Analytics Vidhya


Some final results

This article assumes you already have a strong conceptual understanding of how CycleGAN works. If you are new to CycleGAN, or need a quick refresher, I would recommend reading my previous article, which details the intuition and math powering CycleGAN, here.

All the code in this article will be available on my GitHub here.

In summary, CycleGAN is a technique for translating images from one domain to another. It is similar to pix2pix, with the key improvement that it does not require a paired training dataset. This means we can give CycleGAN any set of images as a source and any set of images as a style target, and it will learn the translation between the two.

For example, we could provide landscape photos as the source and Monet paintings as the target, and CycleGAN would learn to convert landscapes into Monet-style paintings, and vice versa.

LEFT: Claudio Testa on Unsplash | RIGHT: Source

pix2pix, by contrast, would require that each landscape photo in the training set be paired with a painting of that exact same landscape.
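
What makes this unpaired setup possible is cycle consistency: an image translated to the other domain and back should come out looking like the original. As a minimal sketch (not the implementation built later in this article), the cycle-consistency term could look like this in TensorFlow, with `cycled_image` standing in for the round-trip output F(G(x)) or G(F(y)):

```python
import tensorflow as tf

# Minimal sketch of the cycle-consistency loss: penalize the L1 difference
# between an image and its round-trip reconstruction through both generators.
def cycle_consistency_loss(real_image, cycled_image, lambda_cycle=10.0):
    # lambda_cycle weights this term relative to the adversarial losses;
    # 10.0 is the value used in the original CycleGAN paper.
    return lambda_cycle * tf.reduce_mean(tf.abs(real_image - cycled_image))
```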

Luckily, many CycleGAN datasets, including monet2photo, are already available for easy use in the TensorFlow Datasets (tfds) collection. If you…
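
As a rough sketch of how that loading step might look (the preprocessing below is a common convention, not necessarily the exact pipeline used later in this article), the monet2photo config can be pulled straight from tfds:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# Load the monet2photo config of the TFDS "cycle_gan" collection.
# Splits follow the original naming: trainA holds Monet paintings,
# trainB holds landscape photographs (plus testA/testB).
dataset, info = tfds.load("cycle_gan/monet2photo", with_info=True)
train_monet, train_photo = dataset["trainA"], dataset["trainB"]

def preprocess(example):
    # Resize to 256x256 and scale pixels to [-1, 1], the usual range for
    # generators with tanh outputs.
    image = tf.image.resize(example["image"], (256, 256))
    return tf.cast(image, tf.float32) / 127.5 - 1.0

train_monet = train_monet.map(preprocess).shuffle(1000).batch(1)
train_photo = train_photo.map(preprocess).shuffle(1000).batch(1)
```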
