Pindar Van Arman
Feb 17, 2018

From Printing to Painting:
Computationally Creative Robots

My early painting robots were simple. They would dip a brush in paint and then drag the brush from point to point. They drew lines and filled areas in with color, like the connect-the-dots and paint-by-numbers exercises we did as kids. Their paintings were charming, and in the beginning I had fun exploring robot-themed art painted by a robot.

Early work by my first painting robot, R2-D2 Series, 2005

I showed off some of our paintings to a friend and he got excited and told me that he had the perfect name for my new invention. He told me I should call it “The Printer.”

His joke bothered me because he was absolutely right. While I had fancied myself the creator of a painting robot, it did little more than operate like a bad plotter printer that broke constantly and made a horrible mess with each painting. I am not even sure calling it a printer was fair to printers.

But the idea that it was just a printer stuck with me, and I have spent more than a decade obsessed with making my robots better than ordinary printers. They had to be painters, and even more than that, they had to paint with artistic style.

Incomplete Portrait, Pindar Van Arman, 2005–10, Acrylic on Canvas, 18"x24"

One of the first improvements was to install cameras so the robots could watch themselves work. I added them after multiple paintings like this one failed. In the portrait on the left, you can see where the brush fell off before the robot finished filling in the background with black. Instead of noticing, it just went through the motions of painting for several hours and never completed the background. It was like a printer that runs out of ink but keeps printing anyway. The only way I could think to correct this was to give my robots eyes, so I mounted live cameras on them and programmed them to watch their own progress. They now got feedback on how each painting was going and would change how they painted based on it. To me, this made them more than an ordinary printer.
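The loop itself is simple in principle: photograph the canvas, compare it to the plan, and keep working until the two agree. Here is a minimal sketch of that idea (the capture_canvas and repaint helpers are hypothetical placeholders, not my actual control code):

```python
import numpy as np

def region_matches_plan(canvas_photo, target, region_mask, tolerance=30.0):
    """Compare what the camera sees to what the robot intended to paint.

    canvas_photo, target: HxWx3 uint8 arrays of the canvas and the plan.
    region_mask: boolean HxW array marking the area just painted.
    """
    error = np.abs(canvas_photo.astype(float) - target.astype(float))
    return error[region_mask].mean() < tolerance

# Feedback loop: photograph the canvas after each pass and keep repainting
# a region until the camera agrees it matches the plan.
# (capture_canvas() and repaint() are hypothetical robot-control calls.)
#
# while not region_matches_plan(capture_canvas(), target, region):
#     repaint(region)
```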

The fact that my robots could react to how they were painting added a whole new layer of complexity. Now that they could react, I had to teach them how to react. This led to many years of learning about and implementing various AI algorithms. One of the first I implemented was k-means clustering, which I used to teach my robots to see paint in terms of both its color and its location in the painting (R, G, B, X, & Y). This gave them a painterly disposition that lent itself to mixing colors more effectively. I implemented a number of other algorithms alongside k-means clustering, including MAXNETs, neural nets, Hough line detection, and the Viola-Jones facial recognition algorithm, among others.
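To give a rough idea of what that looks like, here is a minimal sketch of color-and-position clustering using scikit-learn (the cluster count and spatial weighting here are illustrative, not the values my robots use):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_color_regions(image, n_clusters=12, spatial_weight=0.5):
    """Cluster pixels on color AND position, so nearby pixels with similar
    colors fall into the same paintable region."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # One feature vector per pixel: (R, G, B, X, Y), everything normalized
    # so position influences, but does not dominate, the clustering.
    features = np.column_stack([
        image.reshape(-1, 3) / 255.0,
        spatial_weight * xs.reshape(-1, 1) / w,
        spatial_weight * ys.reshape(-1, 1) / h,
    ])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    labels = km.labels_.reshape(h, w)           # which region each pixel belongs to
    palette = km.cluster_centers_[:, :3] * 255  # average color of each region
    return labels, palette
```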

Ray, Pindar Van Arman, 2005–10, AI on Canvas, 18"x24"

I also began experimenting with more ambitious AI, such as some early attempts at artificial creativity. I found that by using facial recognition algorithms, my robots could at least be aware of some of the content they were painting. With this contextual understanding, they could generate their own unique compositions while also sticking to a painting’s main theme. In the case of Ray’s portrait, it experimented with the composition while being careful not to distort or obscure the face. I had a lot of fun exploring abstract portraiture with a focus on being as creative as possible while also maintaining a likeness to the subject being painted.
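A rough sketch of that kind of content awareness, using OpenCV's implementation of the Viola-Jones detector (the simple bounding-box masking here is illustrative, not my actual composition logic):

```python
import cv2
import numpy as np

def face_protection_mask(image_bgr):
    """Detect faces with a Viola-Jones (Haar cascade) classifier and return a
    mask marking regions the composition experiments should leave intact."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    mask = np.zeros(gray.shape, dtype=bool)
    for (x, y, w, h) in faces:
        mask[y:y + h, x:x + w] = True  # protect everything inside the face box
    return mask
```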

At this point my painting robots were far more than printers. But something was still missing. The more capabilities I added, the more I realized that the benchmark of comparing them to a printer was too low. So I raised my goals. I switched from trying to make them better than a printer, to trying to make them better than me.

To achieve this, I devised a way to teach them to learn from human artists. I built an interface that let me control my robot’s paint brushes by swiping my finger across a touchscreen.

Portrait of Queen Elizabeth II, Pindar Van Arman, 2014, AI on Canvas, 18"x24"

Then I would paint along with my robots, which I had programmed to pay attention to what I was doing and follow my lead. In essence, I was teaching them to imitate me.

This project soon grew well beyond my own art. I teamed up with a friend to open this interface up to the internet. We made it so that hundreds of people could simultaneously paint with my robots in a project called CrowdPainter.

Timelapse Crowdsource Art (NSFW)

Our work on this was shortlisted in Google’s Dev-Art competition, though nothing was as rewarding as the insanely interesting art that resulted when hundreds of anonymous users simultaneously tried to paint on the same machine. This occurred around the same time that Twitch Plays Pokémon was popular, and it had a similar effect. Each painting was a crowd locked in an intense battle for control of the robot, sometimes to beautiful effect, but more often to absolute disaster.

While the CrowdPainter project was not AI-related, it was doing something important that I didn’t appreciate at the time: it was collecting brush stroke data. Millions upon millions of strokes from human participants around the world were being captured and stored in my databases. I started using this data to help train my robots to paint more naturally. This was done mostly by imitation, but it was the moment that my robots found their artistic style.
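To illustrate what imitation from stroke data can look like, here is a minimal sketch (the record format and the single context feature are hypothetical, not the actual CrowdPainter schema):

```python
# Hypothetical stroke record: each swipe across the touchscreen stored as a
# polyline plus a little context about the canvas where it was made.
stroke_library = [
    {
        "points": [(0.12, 0.30), (0.18, 0.34), (0.25, 0.33)],  # normalized x, y
        "pressure": [0.4, 0.7, 0.5],
        "color": (180, 40, 25),
        "local_brightness": 0.62,
    },
    # ... millions more recorded from human painters
]

def most_similar_stroke(local_brightness, library):
    """Naive imitation: reuse the recorded human stroke whose context
    (here, just local canvas brightness) best matches the current situation."""
    return min(library, key=lambda s: abs(s["local_brightness"] - local_brightness))
```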

St. Peters, Pindar Van Arman, 2013, AI on Canvas, 24"x18"

Our paintings were now far from printouts. No two ever came out the same. Furthermore, with as much AI as went into each, a butterfly effect was occurring where small deviations at the beginning of a painting would cascade into large changes by the end. I couldn’t help but feel this was similar to my own creative process, where each brushstroke depended not only on where I was trying to get, but also on the artistic effects of all the previous brushstrokes leading up to that point.

I was pleased with where I had gotten with my robots. While they were not creative in their own right, they were an amazing tool for my own art and very good at following my artistic direction. I had trained multiple robotic painting assistants and I loved the work that they were doing for me.

Progression of painting quality from the very first in 2005 to Portrait of Elle Reeve in 2017.

I continued using my robots as assistants for a number of years, convinced that the AI couldn’t get much more creative. I believed that while AI did cool things, creativity was uniquely human, and therefore AI would never be more than a tool for us to use. Then I heard about AlphaGo beating Lee Sedol at Go and read reports about how some of its moves were being described as “creative.” What did this mean?

I started looking into how AlphaGo worked and found deep learning, which I soon realized was just complex neural networks. With that came the realization that these complex neural networks were finally becoming powerful enough to be useful. The more I looked around, the more remarkable applications I found, including some interesting work being done with Convolutional Neural Networks (CNNs) and style transfer. I set out to learn how to build CNNs with TensorFlow and incorporated them into the process my robots were using. The results were dramatic and appeared to be creative. As I experimented more and more with deep learning and saw the results of using it with my robots, I began questioning my belief that only humans could be creative.
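For a sense of what that looks like in code, here is a minimal style transfer sketch using a pre-trained model from TensorFlow Hub (the image paths are placeholders, and my robots’ actual pipeline is considerably more involved):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained arbitrary style transfer model published by Magenta on TF Hub.
stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

def load_image(path):
    """Read an image as a float32 tensor in [0, 1] with a batch dimension."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

content = load_image("portrait_photo.jpg")    # what to paint
style = load_image("reference_painting.jpg")  # how to paint it
stylized = stylize(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized.png", stylized[0])
```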

New York Magazine Art Critic Jerry Saltz reviewing Portrait of Elle Reeve

I am not the only one surprised by the results. New York Magazine art critic Jerry Saltz recently reviewed one of my CNN-assisted portraits, and I was pleased to hear him say, “It doesn’t look like a computer made it.” Never mind the fact that the next thing he said was “That doesn’t make it any good.” The portrait looked creative enough to him that he would not have known it was generated by AI had he not been told so.

Looking creative and being creative are not the same thing, of course. For example, it was me who decided to do the portrait of Elle Reeve. It was me who fed several photos of her into my algorithms, which then picked a favorite, cropped and edited it, and painted it on a stretched canvas. It all began with photographs that I selected and a subject that I chose. That limits the creative potential of any painting made as part of such a process.

One of my favorite robot artists, Harold Cohen, once complained that there were two types of representational AI artists: those that worked from photographs, and those that lied about not working from photographs. His point was that unless the machine was coming up with its own imagery, it wasn’t really being creative; it was just a generative photo filter. I mostly agreed with him, though I also realized that artists often work from photos and that doesn’t make their artwork any less artistic. So it didn’t really bother me if machines did the same, as long as the changes they made to the original photo were substantial.

I held onto this view until just recently, when I discovered Generative Adversarial Networks, or GANs. I once again turned to TensorFlow to create a face-generating GAN and incorporated it into the creative process of my painting robots. As I watched my robots pull faces out of random noise, I realized they no longer needed to work from photographs. With GANs, they could imagine unique faces of their own.
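The generator half of such a network can be sketched quite compactly. The architecture below is a generic DCGAN-style example in Keras, not my actual model, and it only produces faces after being trained against a discriminator on a dataset of face photos:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    """DCGAN-style generator: maps a random noise vector to a 64x64 RGB image."""
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 256, input_shape=(latent_dim,)),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same"),  # 16x16
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),   # 32x32
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                               activation="tanh"),                  # 64x64x3
    ])

# Once trained, "imagining" a new face is just a matter of sampling noise:
generator = build_generator()
noise = tf.random.normal([1, 100])
imagined_face = generator(noise, training=False)  # pixel values in [-1, 1]
```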

This has all led to my most recent portrait series, called The First Sparks of Artificial Creativity. These were painted by my most recent painting robot project, CloudPainter.

First Sparks of Artificial Creativity, Pindar Van Arman, 2018, AI on Canvas, 95"x59"

These paintings represent years of exploring artificial creativity and trying to do everything I could to differentiate my painting robots from printers. Each of the 32 paintings in this image was imagined and painted with a wide variety of AI and feedback loops. My robots imagined the faces from nothing. They then interpreted the faces in the style of multiple artists, both living and dead, including myself. And finally, they painted them with feedback loops paying attention to each stroke and constantly making adjustments at all levels of the AI.

Don’t look like faces to you? Give my robots a break. They are new to this whole imagination thing.

Pindar Van Arman
cloudpainter.com