NoArtist: generating art with artificial intelligence. Can AI create a piece of art?
A new approach to create art
The word GANism was first used in 2017 by François Chollet, a software engineer at Google: “GANism (the specific look and feel of seemingly GAN-generated images) may yet become a significant modern art trend”. One year later, the collective Obvious sold an AI-generated painting for $432,500 at a Christie’s auction in New York. Two days ago, an auction at Sotheby’s featured Memories of Passersby I by Mario Klingemann, a machine installation that uses neural networks to generate an infinite stream of portraits. This movement relies on Generative Adversarial Networks, described below, and explores the possibilities opened up by such a powerful algorithm.
NoArtist was created in late 2018 by two friends from Polytechnique (a French engineering school) and me. It was born from the desire to combine two passions we had in common: artificial intelligence and art.
The aim is to create new pieces of art inspired by real paintings, without replicating them. The idea is not to replace the artist; on the contrary, we are convinced that collaboration is key, and we work with renowned artists to explore new creative processes.
It is also very important to us to raise people’s awareness about the reality of artificial intelligence, demystify AI algorithms and be as transparent as possible about how the technology works.
So far, we have produced two main collections: one on the theme of portraits, and another on the Cubism movement. Below is an example of our work; you can see more on our website.
How can an AI create artwork?
We focus here on a recent class of “generative” algorithms, which try to generate new images from a dataset, rather than merely replicating or modifying existing ones.
Generative Adversarial Network (GAN)
The original idea comes from Ian Goodfellow, who published a paper in 2014 presenting new results achieved using not one but two neural networks. The idea is to make a generator and a discriminator compete. Let’s follow the example of creating a portrait. We need to provide the algorithm with as many pictures as possible that share common features (centred faces in this case). Then begins an iterative process that may last a long time, depending on the size of the dataset, the structure of the neural networks and the computing power available:
- The generator will try to produce new images, transforming a random vector into an image.
- The discriminator is given images from the reference dataset as well as generated images, and tries to decide, for each picture, where it comes from: generated or real.
- Feedback is then given to both networks, helping them get better at the game for the next iteration.
This process, if kept stable, allows the generator to produce better and better images. The algorithm theoretically stops when the discriminator can no longer tell a reference image from a generated one (although in practice this point remains difficult to reach).
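The loop above can be sketched with a toy one-dimensional example. Here the “images” are single numbers drawn from a Gaussian, and the generator and discriminator are reduced to hypothetical two-parameter models (a minimal sketch, nothing like a real image GAN), but the alternating update structure is exactly the one described:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D stand-ins for the two networks:
# generator g(z) = a*z + b, discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0                      # generator parameters
w, c = 0.1, 0.0                      # discriminator parameters
lr = 0.05
target_mu, target_sigma = 4.0, 1.0   # the "reference dataset" distribution

for step in range(500):
    # Real samples from the reference distribution, fake samples from g.
    real = rng.normal(target_mu, target_sigma, size=32)
    z = rng.normal(size=32)
    fake = a * z + b

    # --- Discriminator update: push d(real) -> 1 and d(fake) -> 0 ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push d(fake) -> 1 (fool the discriminator) ---
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean((d_fake - 1) * w * z)   # chain rule through g
    grad_b = np.mean((d_fake - 1) * w)
    a -= lr * grad_a
    b -= lr * grad_b

print(f"generator mean after training: {b:.2f} (target {target_mu})")
```

As the discriminator gets better at separating the two distributions, its feedback pushes the generator’s output toward the reference data, mirroring the iteration described above.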
Several types of networks can be used within this framework. With NoArtist, we’ve used a modified version of this algorithm: the Deep Convolutional GAN (DCGAN). It uses symmetric Convolutional Neural Networks (CNNs) for the generator and the discriminator. The version described in the paper can be improved somewhat by adding a few layers and tuning the parameters in order to increase the resolution of the generated images (64x64 in the paper).
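The DCGAN generator upsamples a latent vector through a stack of transposed convolutions. A quick sanity check when adding layers is to compute the spatial size after each one; the sketch below uses the common kernel-4 / stride-2 / padding-1 setting (an assumption for illustration, not necessarily our exact configuration) to reach the paper’s 64x64 output:

```python
def conv_transpose_out(size, kernel=4, stride=2, padding=1):
    """Output spatial size of a 2-D transposed convolution layer."""
    return (size - 1) * stride - 2 * padding + kernel

# DCGAN-style generator: project the latent vector to a 4x4 feature map,
# then double the resolution with each transposed-conv layer.
size = 4
sizes = [size]
for _ in range(4):  # four upsampling layers: 4 -> 8 -> 16 -> 32 -> 64
    size = conv_transpose_out(size)
    sizes.append(size)

print(sizes)  # [4, 8, 16, 32, 64]
```

Adding one more such layer is the arithmetic behind pushing the output from 64x64 to 128x128.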
Inspired by the DCGAN, the Progressive GAN proposed by Nvidia builds the generator and discriminator progressively. It first creates 4x4 images, then produces bigger and bigger images during training (8x8, 16x16, … up to 1024x1024) while continuing to train the lower layers. This requires substantial computing power that is not accessible to everyone. The results achieved are stunning.
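The resolution schedule that this progressive growing follows is simply a doubling sequence, which can be written out explicitly (a trivial sketch of the schedule only, not of the fade-in mechanics):

```python
def progressive_schedule(start=4, final=1024):
    """Resolutions at which new layers are introduced during training."""
    res, schedule = start, []
    while res <= final:
        schedule.append(res)
        res *= 2
    return schedule

print(progressive_schedule())  # [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

Each new resolution adds a layer pair to the generator and discriminator while the earlier, lower-resolution layers keep training.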
With NoArtist, we did not pursue this category of GAN, for two main reasons. The first is practical: Nvidia trained their algorithm on 8 Tesla V100 GPUs for 4 days, with batches of 64 HQ images, hardware that is not easily available.
The second reason relates to our approach. Keeping the technique stable requires a large amount of data: more than 200k images for the results in the paper. We, on the contrary, want to focus on a specific type of painting, which prevents us from using that many images. We therefore use on average between 1k and 3k royalty-free images.
Creative Adversarial Network (CAN)
The idea behind this generative algorithm is to produce images that are artistic, yet cannot be fitted into any existing style.
To that end, the discriminator has a new task. It must first, as before, classify images as art or not art. This time, however, if an image is classified as art, the discriminator also tries to classify it into an existing art style (the dataset is now labelled).
The aim of the generator is now to produce images that are considered art, but that confuse the discriminator regarding their style. This produces very interesting results, and we are currently working on similar experiments at NoArtist. It encourages new styles: the general codes of pieces of art are kept, while some creativity is added to the process.
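The “confuse the discriminator about style” objective is usually expressed as a cross-entropy between the discriminator’s predicted style distribution and the uniform distribution: the loss is lowest when no single style stands out. A minimal sketch of that term (illustrative probabilities only):

```python
import numpy as np

def style_ambiguity_loss(style_probs):
    """Cross-entropy between the discriminator's predicted style
    distribution and the uniform distribution over the known styles.
    It is minimal (log K) when every style is equally likely."""
    k = len(style_probs)
    uniform = np.full(k, 1.0 / k)
    return -np.sum(uniform * np.log(style_probs + 1e-12))

confident = np.array([0.94, 0.02, 0.02, 0.02])  # clearly one known style
ambiguous = np.array([0.25, 0.25, 0.25, 0.25])  # style cannot be pinned down

print(style_ambiguity_loss(confident))  # high: the generator is penalised
print(style_ambiguity_loss(ambiguous))  # low: log(4), the minimum
```

The generator is rewarded for images the discriminator accepts as art while producing a flat, ambiguous style prediction like the second vector.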
This was an overview of three techniques derived from the original paper on Generative Adversarial Networks (2014). There are other variants, most of them listed here.
Thank you for reading this article. If you have any ideas or remarks about our project, we would love to hear them! You can contact us at firstname.lastname@example.org, follow us on Instagram (@noartist_collection) or visit our website www.noartist.io.