My Collaboration with Painting Robots and a Graffiti Artist

Pindar Van Arman
Published in TensorFlow · Apr 18, 2018

I am an AI Artist and it is an interesting time for my art.

Specifically, I build creative painting robot systems. These robots are programmed to collaborate with me using a variety of artificial intelligence techniques, deep learning, and feedback loops. We have been working together for almost 15 years. In this time we have created over a thousand canvases, each artwork painted one brush stroke at a time.

Though I have been sharing my work this entire time, the art world does not yet fully consider it to be art. My robots have been called over-engineered printers and little more than complex Photoshop filters. Fellow artists have told me that they did not know whether to be impressed or disgusted by my machines. But in the last couple of years, things have been gradually changing. People are beginning to realize just how creative AI has become.

An interesting moment in the recognition of this new genre occurred just recently. Famed New York art critic Jerry Saltz reviewed several AI-generated images. He roasted just about everything he looked at, which included some important work by AICAN, Mario Klingemann, and Google Deep Dream. When he got to one of my paintings, he looked at it and said that it “doesn’t look like a computer made it” before concluding, “That doesn’t make it any good.”

I would have preferred a kinder review, but loved it nonetheless. The fact that Jerry Saltz even took the time to look at AI art was an important moment for those of us in the genre. As I have mentioned, most of the art world does not even consider our work to be art. At least now some accept it as bad art. That’s progress.

A good analogy that I often turn to is that the AI genre is probably where graffiti art was right before the turn of the century. Street art was some of the most interesting art out there, but it was completely ignored by the art world. In a similar manner, it is obvious to me that today’s AI artists are the avant-garde. It should therefore only be a matter of time before the public realizes it.

Being both a part of the new AI art movement and a fan of street art, I am excited to reveal a collaboration that I have been working on with Bristol-based graffiti artist 3D (aka Robert Del Naja of Massive Attack). 3D has been spray painting walls, canvases, and just about anything he can get his hands on since the early ’80s. As for his street cred, Banksy is quoted as saying that he “copied 3D from Massive Attack.” Beyond painting, 3D’s work with Massive Attack often explores new media with innovative and experimental interactive performances. It was our interest in each other’s work that brought us together to see if we could combine AI, graffiti, and interactive performance art.

Our collaboration began about six months ago as we brainstormed ideas. Work began in earnest a couple of months ago when we started experimenting with some of those ideas by applying CNNs, GANs, and many of my own AI algorithms to his artwork. I have long been working on teaching my painting robots to imitate my own artistic process with computationally creative code. 3D and I are now exploring whether we can capture parts of his artistic process as well.

Execution started simply enough, with a look at the patterns behind 3D’s paintings. We started creating mash-ups in an implementation of Gatys, Ecker, and Bethge’s A Neural Algorithm of Artistic Style, commonly called Style Transfer. Style Transfer is a popular technique that uses a convolutional neural network (CNN) to take two input images and combine the contours of one with the colors and textures of the other. A breakdown of the CNN can be seen in Gatys’ graphic below.

Style Transfer Algorithm from Image Style Transfer Using Convolutional Neural Networks by Gatys, Ecker, and Bethge

In the following example from their paper, you can see a photo of the Neckarfront in Tübingen rendered in the styles of Van Gogh and Munch.

From A Neural Algorithm of Artistic Style by Gatys, Ecker, and Bethge

You can learn more about how to implement this algorithm either directly from their paper or from one of the many implementations on GitHub. Two TensorFlow projects that I found useful for getting started were log0’s tutorial and Google Magenta’s Jupyter Notebook.
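If you just want to experiment, one quick way in is the pretrained arbitrary stylization model that Magenta publishes on TensorFlow Hub. The sketch below is a minimal example of that route; it is one of several public options, and not the exact implementation we used.

```python
# Minimal sketch: fast arbitrary style transfer with Magenta's pretrained
# model on TensorFlow Hub (one of several public options, not necessarily
# the implementation used in this project).
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Load an image, scale it to [0, 1], and add a batch dimension."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3,
                             dtype=tf.float32)
    scale = max_dim / max(img.shape[0], img.shape[1])
    size = (int(img.shape[0] * scale), int(img.shape[1] * scale))
    return tf.image.resize(img, size)[tf.newaxis, ...]  # (1, H, W, 3)

stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("content.jpg")  # placeholder file names
style = load_image("style.jpg")
stylized = stylize(content, style)[0]  # first output is the stylized batch
tf.keras.utils.save_img("stylized.png", stylized[0])
```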

While the intent of Style Transfer is to combine separate content and style images, 3D and I experimented with using his artwork as both the content and the style. We created the following grid, where seven of his paintings were combined with one another. It was interesting to see what worked and what didn’t, and which parts of each painting’s imagery became dominant as they were combined.
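For the curious, the grid itself is trivial to script once a stylization module like the one above is loaded; the sketch below shows the pairwise loop (the file names are placeholders).

```python
# Sketch of the pairwise experiment: every painting rendered in the style
# of every other, reusing load_image() and stylize() from the snippet above.
paintings = [load_image(f"3d_painting_{i}.jpg") for i in range(7)]

for i, content in enumerate(paintings):
    for j, style in enumerate(paintings):
        result = stylize(content, style)[0]
        tf.keras.utils.save_img(f"grid_{i}_{j}.png", result[0])
```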

As cool as these looked, we were both left underwhelmed by the symbolic and emotional aspects of the mash-ups. We felt the art needed to be meaningful. All that was really being combined was color and texture, not symbolism or context. So we thought about it some more, and 3D came up with the idea of using the CNNs to paint portraits of historical figures who made significant contributions to printmaking. A couple of people came to mind as we bounced ideas back and forth before 3D suggested Martin Luther. At first I thought he was talking about Martin Luther King Jr., which left me confused. But when I realized he was talking about the author of The 95 Theses, it made more sense. Not sure if 3D realized I was confused, but I think I played it off well and he didn’t suspect anything. We tried applying CNNs to Martin Luther’s famous historic portrait and got the following results.

The results were nothing all that great, but I made a couple of paintings from them to test things. I also tried having my robots paint a couple of other new media figures like Mark Zuckerberg.

Things still were not gelling though. Good paintings, but nothing great. Then 3D and I decided to experiment with some different approaches.

I showed him some faces being created by a Generative Adversarial Network (GAN) based on the often-cited work led by Ian Goodfellow. For anyone interested in making their own GAN, many TensorFlow implementations exist. The lesson I learned to make one from was Udacity’s DLND Face Generation project.
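To give a sense of what such a network looks like, below is a compact DCGAN-style generator and discriminator pair in TensorFlow/Keras, written in the spirit of that Udacity project. It is my own illustrative sketch, not that project’s code, and the layer sizes are assumptions.

```python
# A minimal DCGAN-style generator/discriminator pair for 64x64 face images.
# Illustrative only; layer sizes and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100

def build_generator():
    """Map a random latent vector to a 64x64 RGB image in [-1, 1]."""
    return tf.keras.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(4 * 4 * 512),
        layers.Reshape((4, 4, 512)),
        layers.Conv2DTranspose(256, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same",
                               activation="tanh"),  # 4->8->16->32->64
    ])

def build_discriminator():
    """Score an image as real (positive logit) or generated."""
    return tf.keras.Sequential([
        layers.Input(shape=(64, 64, 3)),
        layers.Conv2D(64, 5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2D(256, 5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),  # raw logit; pair with from_logits=True losses
    ])

generator = build_generator()
faces = generator(tf.random.normal((16, LATENT_DIM)))  # 16 sample faces
```

The two networks are trained adversarially: the discriminator learns to tell real face photos from generated ones, while the generator learns to fool it.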

While the most recent GANs are capable of producing remarkably high resolution faces, I was not as interested in those. I showed 3D how the neatest part of face generation occurs near the beginning of training. I was fascinated by the moment when faces first begin to emerge from nothing. I also showed him the grid of faces that I have come to recognize as a common visualization of GAN output.

We got to talking about how, as a polyptych, the grid recalled a common Warhol trope of repeating images, except that something was different. Warhol was all about mass-produced art and how repeated images looked interesting next to one another. But these images were even cooler, because they represented a new kind of mass production. These faces were mass-produced imagery made by neural networks, where each image was unique.

I started having my GANs generate tens of thousands of faces. But I didn’t want the faces in too much detail. I liked how they looked before they resolved into clear images. It reminded me of how my own imagination works when I try to picture something in my mind. My imagination is foggy and nondescript. So I implemented the Viola-Jones face detection algorithm with OpenCV to stop the GAN as soon as faces began to be recognizable. From there I sent the nondescript faces into a Style Transfer with several of 3D’s paintings to see which would best render them.
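The early-stopping idea is simple to sketch with OpenCV’s stock Haar cascade. In the snippet below, training halts once a handful of sampled images contain a detectable face; the thresholds are illustrative, not the exact values I used.

```python
# Sketch: stop GAN training once Viola-Jones starts finding faces in samples.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def faces_emerging(images, min_hits=4):
    """Return True once enough generated images contain a detectable face."""
    hits = 0
    for img in images:  # img in [-1, 1], shape (64, 64, 3)
        gray = cv2.cvtColor(((img + 1.0) * 127.5).astype(np.uint8),
                            cv2.COLOR_RGB2GRAY)
        if len(detector.detectMultiScale(gray, 1.1, 3)) > 0:
            hits += 1
    return hits >= min_hits

# Inside the training loop (sketch):
#     samples = generator(fixed_noise).numpy()
#     if faces_emerging(samples):
#         break  # stop while the faces are still foggy and nondescript
```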

3D’s Beirut (Column 2) was the most interesting, so I chose that one and put it into the artificially creative process that I have been developing over the past fifteen years. A simplified outline of this process can be seen in the graphic below.

As just described, my robots would begin by having the GAN imagine faces. I then ran the Viola-Jones face detection algorithm on the GAN images until it began detecting faces. This would stop the GAN right as the general outlines of faces emerged. Then I applied Style Transfer to the faces to render them in the style of 3D’s Beirut. With this image in its memory, my robots started painting. The brushstroke geometry was drawn from my historic database, which contains the strokes of hundreds of paintings, including Picassos, Van Goghs, and my own work. Feedback loops refined the image as the robot tried to paint the faces on 11"x14" canvases. All told, dozens of AI algorithms, multiple deep learning neural networks, and feedback loops at all levels started pumping out face after face after face.
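Tying the earlier snippets together, the whole pipeline reduces to a few stages. In the sketch below, generate_until_faces and paint_with_feedback are hypothetical stand-ins for the robot-side code, which is not public.

```python
# High-level sketch of the pipeline; helper names are hypothetical.
def create_face_painting(generator, stylize, beirut_style):
    # 1. Let the GAN imagine a face, stopped early by Viola-Jones
    #    (see faces_emerging above).
    face = generate_until_faces(generator)       # hypothetical helper
    # 2. Render the foggy face in the style of 3D's Beirut.
    target = stylize(face, beirut_style)[0]
    # 3. The robot selects brushstroke geometries from the historic stroke
    #    database and refines the canvas with feedback until it matches.
    return paint_with_feedback(target)           # hypothetical helper
```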

The First Sparks of Artificial Creativity, Pindar Van Arman & Robert Del Naja, Acrylic on Canvas, 95"x60", 2018

Thirty-two original faces later, the process arrived at the polyptych above, which I am calling The First Sparks of Artificial Creativity.

An interesting aspect of all these paintings is that, despite how transformative the new faces are compared to the original painting, the artistic DNA is maintained in those seemingly random red highlights. It was interesting to see these artifacts survive the multiple layers of AI that the image was put through.

Beyond these originals, I have also continued to create more artifacts with something I am calling Robotic Editions. In Robotic Editions, my robots repeat the algorithm that executed the brushstrokes to make approximate replicas of the original paintings. While similar, each Robotic Edition is unique due to natural variation in how the brushstrokes are applied. 128 Robotic Editions of this piece, painted on 16"x20" Stonehenge paper, can be seen below.

Limited Series Robotic Edition of 128 for Views 2018
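To illustrate the idea (and only the idea; the actual robot code is far more involved), an edition can be thought of as a replay of the recorded stroke sequence with small perturbations. The stroke structure and jitter magnitude below are assumptions.

```python
# Illustrative sketch of a Robotic Edition: replay recorded brushstrokes
# with small random perturbations so each edition comes out unique.
import random

def replay_edition(strokes, jitter=0.5):
    """Yield a perturbed copy of each recorded brushstroke."""
    for stroke in strokes:  # stroke: dict with a 'path' of (x, y) points
        perturbed = dict(stroke)
        perturbed["path"] = [
            (x + random.uniform(-jitter, jitter),
             y + random.uniform(-jitter, jitter))
            for (x, y) in stroke["path"]
        ]
        yield perturbed
```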

As can be seen, deep learning has made an entirely new kind of artistic reproduction possible. These are unique prints. It will be interesting to see how fellow artists take to and adapt to the new possibilities presented by AI. 3D and I have already thought about dozens of variations ourselves, including swapping out Beirut for another one of his paintings, or training the face-generating GANs on faces found in his art. Beyond that, this can be done with any artist’s work. There are really unlimited possibilities.

It has been a fascinating collaboration to date. I am looking forward to working with 3D and my robots to further develop many of the ideas that we have discussed. Though this explanation may appear to express a lot of artificial creativity, it only goes into our art on a very shallow level. We are always talking and wondering about how much deeper we can actually go.

Pindar Van Arman
@vanarman
cloudpainter.com
