Launching Art through Human-Robot Collaboration

Erin Cole
Published in STEAM Stories
Nov 9, 2018 · 5 min read

Just a couple of weeks ago, a painting sold at auction for $432,500. Nothing out of the ordinary there. Except one thing… It was created by an AI.

Edmond de Belamy. Photo from https://qz.com/quartzy/1437876/ai-generated-portrait-of-edmond-de-belamy-sold-for-432500/

The painting, called Edmond de Belamy, from La Famille de Belamy, was the first AI-generated painting sold at auction, and it’s been making waves in the art world. We’ve been using machines like the camera and Photoshop to make and alter our art for ages, but what does it mean when a machine becomes the artist? This very question might have been the key to Edmond’s shocking success; though initially valued at less than $10k, the novelty of machine-as-painter sparked curiosity in several auction-goers, making it a coveted piece despite the fact that many consider it a peculiar, if not bad, painting.

What happens now? Should we add “artist” to the list of jobs taken over by machines? Probably not, but the role of technology in art-making does nevertheless seem to be getting more complicated. To better understand our relationship with technology when it comes to creating art, it might be helpful to think of these art-machines as existing on a spectrum. On one end, we have tools that are entirely controlled by human input — with a camera, a human sets up the frame and pushes a button, and, with editing software, a human decides what changes to apply to their images. On the other end of the spectrum, we have autonomous robots that, though influenced by their creators’ designs, create without human input — such as the AI from Obvious Art that just became several hundred thousand dollars richer.

Tools on the “human-controlled” end of the spectrum are more or less available to the average person. These once-novel innovations have become an entry point into art for many people by changing how we approach the art-making process. Now, taking a photo is as easy as whipping out a smartphone. You can make a digital painting without buying brushes, canvases, chemicals, or a studio. Through these simplified processes, countless individuals have found a passion for photography and have pushed themselves to make more art.

Conversely, AIs that generate original works aren’t so easy to come by, and their contribution to art isn’t experienced as widely.

As our world becomes increasingly tech-savvy and interested in the role of AI in our everyday lives, it’s worth considering how we can incorporate this new tech into our artistic processes. Here, we’ll focus on tools that fall in the middle of our spectrum: robots and AIs that augment rather than replace our existing art-making processes. These mid-spectrum tools have a greater potential to be accessible to a wide variety of people.

What might such a tool look like? This past year, I’ve been thinking a lot about this question. I’ve always loved art and have cranked out drawings ever since I could hold a crayon, but for many of my friends, the thought of even starting a drawing brings nothing but stress, or disappointment that their product isn’t “good” or doesn’t look like what’s in their heads. So, I started thinking about how to encourage people to make art without this stress, similar to how a phone camera takes the stress out of photography and gives artistic agency to a vast number of people.

I began thinking about how a robot could augment creativity in drawing. Thus, Melvin was born! Melvin is a Robot Aided Drawing device, or RAD for short. I designed Melvin to take some of the onus of making “good” art off the user while making the actual mark-making process a bit more fun and experimental, so people who wouldn’t consider themselves visual artists might have a reason to get excited about drawing. Here’s how it works: the human moves their hand toward and away from an ultrasonic sensor, which causes servo motors to rotate and move the drawing arms across the page. To make the process more interactive, the human can also move the anchor point of one of the drawing arms.
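To make that loop concrete, here’s a minimal sketch of the sensing-to-motion idea in Python. It assumes a Raspberry Pi with the gpiozero library, a single servo, and made-up pin numbers; Melvin’s real build may use different hardware, and the distance-to-angle mapping here is purely illustrative.

```python
# A toy, single-arm version of Melvin's control loop.
# Assumed hardware: Raspberry Pi, HC-SR04 ultrasonic sensor, hobby servo.
from time import sleep
from gpiozero import DistanceSensor, AngularServo

# Pin numbers are placeholders; wire to whatever GPIO pins you actually use.
sensor = DistanceSensor(echo=24, trigger=23, max_distance=0.5)  # meters
arm = AngularServo(17, min_angle=-90, max_angle=90)

def distance_to_angle(distance_m, max_m=0.5):
    """Map how far the hand is from the sensor onto a servo angle."""
    fraction = min(distance_m / max_m, 1.0)   # 0.0 (hand close) .. 1.0 (hand far)
    return -90 + fraction * 180               # sweep the servo's full range

while True:
    # Hand moves toward or away from the sensor; the drawing arm sweeps in response.
    arm.angle = distance_to_angle(sensor.distance)
    sleep(0.05)
```

The real Melvin uses two linked drawing arms and a movable anchor point, so its marks are far less predictable than this one-servo sketch, which is part of the fun.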

While it’s nearly impossible to make coherent images with Melvin, I noticed people had an immediate interest in playing with it. My friends, who generally shy away from artmaking, suddenly wanted to see what they could make with Melvin! This suggests that even simple robots can help people jump into new fields of art and have fun in the process.

Overhead view of Melvin. Photo by Erin Cole.

There are other tools out there with a similar goal: to enhance, or encourage, our ability to create through human-robot collaboration. For instance, two other students at Brown University, Matt Cooper and Martha Edwards (’18), have been working on Bairon, a poetry AI that offers suggestions for how the human poet might add to their poem based on what they’ve already written. Martha and Matt felt that pure AI poetry is novel but notoriously bad; a collaboration between robots and humans, however, could combine the strengths of both parties: the AI’s ability to generate new and unexpected phrases could inspire the human poet, who would take the AI’s suggestions and mold them into a meaningful piece.

An example screen from Bairon. Photo by Matt Cooper.
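Bairon’s actual model belongs to Matt and Martha, but the back-and-forth it enables is easy to sketch. The toy example below stands in for the AI with a simple word-level Markov chain (an assumption for illustration, not Bairon’s method) so the suggest-then-edit loop is visible: the program proposes a phrase based on the poem so far, and the human decides whether to keep, rework, or discard it.

```python
import random
from collections import defaultdict

# Not Bairon's real model: a tiny word-level Markov chain playing the
# "suggesting" role, so the human poet can accept, tweak, or reject lines.
def train(corpus_text):
    words = corpus_text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def suggest_phrase(model, poem_so_far, length=6):
    words = poem_so_far.split()
    current = words[-1] if words else random.choice(list(model))
    phrase = []
    for _ in range(length):
        options = model.get(current)
        if not options:                          # dead end: jump somewhere new
            current = random.choice(list(model))
            options = model[current]
        current = random.choice(options)
        phrase.append(current)
    return " ".join(phrase)

# A made-up scrap of "training" text; Bairon would be trained on real poetry.
corpus = (
    "the harbor lights were low and the night was long "
    "and the night was a harbor of low lights and long quiet"
)
model = train(corpus)
draft = "the harbor lights were"
print("AI suggests:", suggest_phrase(model, draft))
# The human decides what, if anything, makes it into the poem.
```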

Similarly, Google Magenta’s Piano Genie project extrapolates input from only 8 buttons to notes on an entire 88-key piano. The Magenta team describes Piano Genie as a way to make music-making and composition more accessible to novice musicians by simplifying the prior knowledge required: instead of needing to use the full range of a piano, the user can establish general relationships between their 8 buttons, which the AI translates into more complex musical structures. This gives the user a way to improvise new music rather than only learning pre-existing pieces.
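Piano Genie’s real mapping is learned by a neural network, so the snippet below is only a rough heuristic stand-in for the same idea: the 8 buttons don’t name fixed notes, they describe contour. Pressing a higher-numbered button than last time nudges the melody up, a lower one nudges it down, and everything stays clamped to the 88-key MIDI range.

```python
# A heuristic stand-in for the 8-button -> 88-key idea.
# (Piano Genie's actual mapping is learned; this just preserves contour.)
PIANO_LOW, PIANO_HIGH = 21, 108   # MIDI note numbers spanning an 88-key piano

class EightButtonKeyboard:
    def __init__(self, start_note=60):          # start near middle C
        self.note = start_note
        self.last_button = None

    def press(self, button):                     # button is 0..7
        if self.last_button is not None:
            # Higher button than last press -> step up; lower -> step down.
            # Step size grows with how far apart the two presses are.
            self.note += 2 * (button - self.last_button)
        self.last_button = button
        self.note = max(PIANO_LOW, min(PIANO_HIGH, self.note))
        return self.note

kb = EightButtonKeyboard()
for b in [3, 5, 7, 6, 4, 0, 2, 3]:
    print(kb.press(b), end=" ")   # prints a rough melodic contour as MIDI notes
```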

Projects like these are exciting and hold great promise for getting people involved in art forms they might not be familiar with. They demonstrate the ways humans can find inspiration, explore new fields, and grow as artists by working with machines. Additionally, they ensure that the artist’s tastes and understanding of social and personal context aren’t lost for the sake of technological novelty.

That said, these sorts of mid-spectrum machines are still few and far between. Fortunately, there’s currently a DIY element to many of these projects; there’s a lot of documentation online for building your own versions of these tools. But even this open-source approach presents barriers to accessibility. First of all, you have to know that such documentation exists. From there, you need access to the right software to run the code or the right materials to construct the project. However, the more we explore these partly intelligent, partly collaborative projects, the more insights we’ll have, and the more commonplace they’ll become. Who knows! One day we might all be drawing with the help of our very own Melvin, or writing with Bairon, or jamming out with Piano Genie.
