Dreaming up imaginary landscapes with Runway ML & StyleGAN

A no-code and step-by-step account of training a Generative Adversarial Network (GAN) to generate places that don’t exist

Nadia Piet
AIxDESIGN
5 min read · Jul 29, 2020


PSA: It is a key value for us at AIxDESIGN to open-source our work and research. The forced paywalls here have led us to stop using Medium, so while you can still read the article below, future writings & resources will be published on other platforms. Learn more at aixdesign.co or come hang with us on any of our other channels. Hope to see you there 👋

Both Nadia Domide’s and my (yes, we’re both called Nadia, haha) day-to-day work is focused on Artificial Intelligence and Design. Intrigued by creative AI practices, we decided not to leave this excitement to client requests and started experimenting ourselves.

In this article, we’ll walk you through the entire process of how we created our first generative AI visuals using Runway ML.

Runway: ML for Artists

Runway ML is a free tool that makes machine learning accessible to artists and creatives. It offers the option to (re-)train your own models, such as StyleGAN, which we were most eager to try out. Around the time we were having these talks, Runway ML put out an open call for their residency program, and we decided to apply. We weren’t chosen, but by the time we finished writing our application we were so excited that we decided to do the project anyway.

These places do not exist(.com)

After some brainstorming and discussions, different streams of inspiration started colliding. We were really inspired by thispersondoesnotexist.com and its parody spinoffs, such as thisartworkdoesnotexist.com & thisworddoesnotexist.com — so we asked ourselves: Can we create places that don’t exist? Can AI dream up realistic representations of unreal places? Can we generate imaginary landscapes? Long story short: yes, we can.

The complete step-by-step process of generating visuals by (re-)training a StyleGAN in Runway ML

1. Data collection

One of the most important parts of any ML project is collecting the training data. We curated a dataset of approximately 3000 images from Google Earth View. (These images were once displayed on the world’s largest billboard to bring a bit of zen to New York’s hectic Times Square during the holidays.) They are both stunning and already well-curated, which made them an easy (AKA lazy) dataset to obtain.
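If you’d rather script the collection step yourself, a minimal Python sketch for downloading a list of image URLs could look like this. The urls.txt filename and dataset folder are our own assumptions; you’d fill the file with the Earth View image links you’ve gathered.

```python
import pathlib
import requests

# hypothetical file listing one Earth View image URL per line
urls = pathlib.Path("urls.txt").read_text().splitlines()

out_dir = pathlib.Path("dataset")
out_dir.mkdir(exist_ok=True)

for i, url in enumerate(urls):
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail loudly on broken links
    (out_dir / f"earthview_{i:04d}.jpg").write_bytes(response.content)
```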

2. Picking the right pre-trained model to start with

Runway ML currently offers an easy way to do image synthesis by using StyleGAN to generate photorealistic images. We chose to start with their available pre-trained model, called Landscapes (see image below).

Setup of StyleGAN model training in Runway ML: the interface where we clicked the button that starts training the model

Doing transfer learning from the pre-trained StyleGAN model allowed us to train in far less time than starting from scratch. We preprocessed the images by center-cropping them into squares, opted for 3000 training steps, and hit the exciting purple ‘Start Training’ button.
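For the curious, a minimal Pillow sketch of that square-and-center step might look like this. The folder names and the 1024×1024 output size are our own assumptions (1024 is a common StyleGAN resolution):

```python
import pathlib
from PIL import Image

src = pathlib.Path("dataset")    # assumed folder of raw images
dst = pathlib.Path("processed")  # assumed output folder
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    img = Image.open(path)
    # center-crop to the largest square that fits
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # resize to a StyleGAN-friendly square resolution
    img = img.resize((1024, 1024), Image.LANCZOS)
    img.save(dst / path.name)
```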

3. Training on our dataset

We trained the model for a couple of hours. During this training time, we could observe how the FID (Fréchet Inception Distance) score was changing (see image below). This metric measures how similar the generated images are to the real ones in the dataset: a low value indicates that the generated data and the real data share similar characteristics.
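Under the hood, FID fits a Gaussian to Inception-network features of the real and generated images and measures the distance between the two. A minimal NumPy/SciPy sketch of that distance, assuming the feature means and covariances have already been computed, looks roughly like this:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu_r, sigma_r, mu_g, sigma_g):
    """Fréchet distance between two Gaussians (real vs. generated features)."""
    diff = mu_r - mu_g
    # matrix square root of the product of the two covariance matrices
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

# The statistics come from Inception features of each image set, e.g.:
# mu_r, sigma_r = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
```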

Training in progress: the Runway ML interface at step 200 of 3000, with the FID score shown on the right

4. Generating outputs from the latent space

Ta-da! Once training was complete, we could use the model to generate new images and videos from random points in the latent space.

Latent space walk of our imaginary landscapes

What is a latent space? It is a magical, multi-dimensional hidden space filled with points that have no meaning on their own. The beauty of this space is that the generative model learns to map these points to output images. A latent space walk is simply a series of images showing a smooth transition between two or more generated images.

Runway gives you the option to export images and a video of a so-called latent space walk.
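To make the idea concrete, here is a minimal NumPy sketch of the interpolation behind such a walk. The generator itself is left out, and the 512-dimensional latent vector is StyleGAN’s default; this is an illustration, not Runway’s actual export code:

```python
import numpy as np

def latent_walk(z_start, z_end, steps=60):
    """Return evenly spaced points on the line between two latent vectors."""
    return [(1 - t) * z_start + t * z_end for t in np.linspace(0.0, 1.0, steps)]

# StyleGAN samples its latents from a 512-dimensional Gaussian
z_a = np.random.randn(512)
z_b = np.random.randn(512)
frames = latent_walk(z_a, z_b)  # feed each point to the generator for one frame
```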

Join us in our collective imagination

We’ve generated 120 of these images. Astonished by their beauty, and with our brains itching to spin stories about them, we wondered what to do with these fictional places. Might these landscapes have value when put in the hands of people? How can you create memories of places you have never been to (and never will)?

If you like the idea of generating imaginary landscapes & assigning meaning to places that don’t exist, please join us in this small experiment of collective imagination.

We’ll start with a shot of our fictional historical landmarks, Dusty Blue meets Sandy View and Coral Candy Pickle Grove, pictured in the images below.

Have you ever paid a visit to the Dusty Blue meets Sandy View or the Coral Candy Pickle Grove?

Big thanks to Runway ML and everyone who has contributed to the open-source tools that made this experiment possible. Now go out and craft your own StyleGAN models and imaginaries. Thank you for reading with us!

About AIxDesign
AIxDesign is a place to unite practitioners and evolve practices at the intersection of AI/ML and design. Currently, we are organizing monthly virtual events, sharing content, exploring collaborative projects, and looking to develop fruitful partnerships.

To stay in the loop, follow us on Instagram or LinkedIn, or subscribe to our monthly newsletter to capture it all in your inbox. You can now also find us at aixdesign.co.


Nadia Piet
AIxDESIGN

Designer & researcher focused on AI/ML, data, digital culture & the human condition mediated through computing