
10 Lessons I Learned Training GANs for One Year

Training Generative Adversarial Networks is hard: let’s make it easier

Marco Pasini
Towards Data Science
10 min read · Jul 28, 2019


Introduction

A year ago I decided to begin my journey into the world of Generative Adversarial Networks, or GANs. They have intrigued me ever since I first became interested in Deep Learning, mainly because of the incredible results they can produce. When I think of the term Artificial Intelligence, GAN is one of the first words that comes to mind.

Faces generated by GANs (StyleGAN)

But only when I started training them for the first time did I discover the other face of this interesting kind of algorithm: GANs are incredibly difficult to train. Yes, I knew this beforehand, from papers and from people who had tried before me, but I had always assumed they were exaggerating an otherwise small, easy-to-overcome problem.

I was wrong.

As soon as I tried to generate something other than the traditional MNIST example, I ran into the severe instability problem that affects GANs, one that becomes extremely frustrating as the hours spent searching for a solution add up.
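To make that concrete, here is a minimal sketch of a standard GAN training loop on MNIST. The architecture and hyperparameters are illustrative choices, not the exact setup from my experiments; the instability typically shows up here as oscillating losses or a discriminator loss that collapses to zero while the generator stops improving.

```python
# Minimal sketch of a standard (non-saturating) GAN training loop on MNIST.
# Architectures and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_dim = 100

# Simple fully connected generator and discriminator (assumed architectures).
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
).to(device)
D = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

data = datasets.MNIST(
    "data", train=True, download=True,
    transform=transforms.Compose([transforms.ToTensor(),
                                  transforms.Normalize([0.5], [0.5])]),
)
loader = DataLoader(data, batch_size=128, shuffle=True)

for epoch in range(5):
    for real, _ in loader:
        real = real.view(real.size(0), -1).to(device)
        batch = real.size(0)
        ones = torch.ones(batch, 1, device=device)
        zeros = torch.zeros(batch, 1, device=device)

        # Discriminator step: push real images toward 1, generated toward 0.
        z = torch.randn(batch, latent_dim, device=device)
        fake = G(z).detach()
        loss_d = bce(D(real), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to make D classify fresh fakes as real.
        z = torch.randn(batch, latent_dim, device=device)
        loss_g = bce(D(G(z)), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    print(f"epoch {epoch}: loss_d={loss_d.item():.3f}  loss_g={loss_g.item():.3f}")
```

The delicate part is the alternation between the two updates: because each network's objective depends on the other's current state, small imbalances compound over time instead of averaging out, which is exactly the behavior that makes moving beyond MNIST so painful.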
