Original artwork generated by AI developed at the Rutgers Art and Artificial Intelligence Laboratory.

The Relationship Between Art and AI

Jennifer Aue
Published in IBM Design · May 15, 2018


Art is how we’ll imagine new ways to use AI that could help us survive, even evolve.

Why do humans create art?

Psychologically speaking, we do it for a few reasons: to evoke an emotional response, to recall past events and emotions, to communicate, and to educate.

In short, art is something we create to understand who we are. We struggle for a way to express the inexpressible, to communicate beyond words, to tap into a moment of clarity that captures a feeling, puts that feeling inside of others, and affects another person's perspective.

Up until 30 or 40 thousand years ago, humans didn't have time for this kind of introspection. We spent the entirety of our lives hunting, fighting, staying warm, staying safe.

But then things changed. A new kind of human learned to gather food. No longer burdened by constant hunting, Cro-Magnon tribes settled down. They sat and watched their fires cast long shadows on the walls of deep caves, and with this first pause, thought about something more than survival.

They picked up burned sticks and began to draw.

Chauvet Cave in Ardèche, France, contains some of the best-preserved figurative cave paintings in the world, dating back some 32,000 years to the Upper Paleolithic, during the last Ice Age.

They couldn't have done this without the cognitive abilities their ancestors had developed through hunting: making and using tools, developing memory, forming language, expressing themselves, and recognizing patterns in the world around them that allowed them to survive.

These are the same abilities we are now trying to emulate with machines: memory, language, understanding, reasoning, learning, expression, pattern recognition.

These are the core components of AI.

AI as an Impersonator

We began using AI to create art by first teaching it to understand and replicate our own art. The technique is called style transfer: it uses deep neural networks to identify the stylistic elements of one image and apply them to another, replicating, recreating, and blending styles of artwork. No artistic or coding experience required.

Whether it’s applied to paintings, photography, video, or music, the concept is the same: choose a piece of artwork whose style you want to recreate, then let the algorithm apply that style to a different image. Or, choose several styles of art and let the AI produce mash-ups that incrementally blend styles together.
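Under the hood, the classic approach (Gatys et al., 2015) works by optimization: start from the content photo and iteratively adjust its pixels so that its deep-network features still match the content while its feature correlations (Gram matrices) match the style image. Here's a minimal PyTorch sketch of that idea; the file names, layer choices, and loss weight are illustrative assumptions on my part, not any particular tool's actual implementation:

```python
# A minimal sketch of neural style transfer (Gatys et al., 2015) in PyTorch.
# File names, layer picks, and the loss weight are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def load_image(path, size=256):
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x, layers=(0, 5, 10, 19, 28)):   # conv1_1 .. conv5_1 in VGG19
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    # Style lives in feature correlations, summarized by the Gram matrix.
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

content = load_image("content.jpg")            # the photo to repaint
style = load_image("style.jpg")                # the artwork to borrow style from
target = content.clone().requires_grad_(True)  # the image we optimize

style_grams = [gram(f) for f in features(style)]
content_feats = features(content)
opt = torch.optim.Adam([target], lr=0.02)

for step in range(300):
    opt.zero_grad()
    t_feats = features(target)
    content_loss = F.mse_loss(t_feats[-1], content_feats[-1])
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(t_feats, style_grams))
    (content_loss + 1e4 * style_loss).backward()
    opt.step()
```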

In the example below from Google AI, you can see the four original pieces of artwork they chose, one in each corner. The grid of images between these corners represents degrees of blending one image into another and applying the resulting style to a new photo. The center image is the result of an equal mix of all four pieces.

Google AI’s Style Transfer model. Four original pieces of artwork (one in each corner) being combined in gradients of proportions to a new photograph.
The original photograph of Tübingen to which Google applied the blended styles.

This use of AI to impersonate and remix artwork has had varying degrees of "artistic" success, from the Dinosaur x Flower mash-up by Chris Rodley that went viral to the more common psychedelic-looking examples you can find all over Reddit.

“You don’t press a button and something looks amazing.

I’ve learned the quirks and the personality traits of the algorithm.”

Chris Rodley, on using the style transfer program DeepArt.io

Dinosaur x Flower, by Chris Rodley

There are even some stunning results of style transfer that begin to feel like original art in their own right, like the image below by Reddit user vic8760, who combined a Neoclassical portrait of Napoleon with a High Renaissance painting of a crowd scene.

Napoleon Bonaparte A2 by Reddit user vic8760

Style transfer can also be applied to video and music, particularly musical genres with more mathematical, predictable composition. Bach and math rock are good examples of music that's consistently structured and follows patterns, making it fairly replicable by AI.

2001: A Picasso Odyssey blended a sequence of shots from 2001: A Space Odyssey with Picasso’s painting style.
An original composition by Sony Computer Science Laboratory's AI, DeepBach.
An original composition of metal by the AI, Databots.
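That claim about structure is easy to demonstrate at toy scale. Even a first-order Markov chain, which simply counts which note tends to follow which, will generate plausible continuations of strongly patterned material. Systems like DeepBach use deep networks rather than this toy model, but they are betting on the same learnable regularity. The training melody below is invented purely for illustration:

```python
# Toy demonstration: music with consistent note-to-note patterns can be
# imitated even by a first-order Markov chain. The melody is made up.
import random
from collections import defaultdict

melody = ["C", "E", "G", "E", "C", "E", "G", "C", "D", "F", "A", "F", "D", "C"]

transitions = defaultdict(list)
for current, following in zip(melody, melody[1:]):
    transitions[current].append(following)   # record follower frequencies

def generate(start="C", length=16):
    note, output = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])  # sample proportionally to counts
        output.append(note)
    return output

print(" ".join(generate()))
```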

Some style transfer tools are taking things a step further, giving artists a degree of control over the way the mash-ups happen, like Beat Blender from Google's Project Magenta. They've built an interactive demo you can use to generate two-dimensional palettes of drum beats and draw paths through a grid of space, identical in concept to Google AI's style transfer grid (see above), to create evolving beats. The four corners can be edited manually, replaced with presets, or sampled from the grid to regenerate a new palette.
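Conceptually, Beat Blender and the style grid above do the same thing: represent each corner example as a vector (a latent code), blend the four vectors bilinearly across the grid, and decode each blend back into a beat or an image. Here's a sketch of that blending; the encode/decode steps are hypothetical stand-ins for a trained model such as Magenta's MusicVAE:

```python
# Bilinear blending of four corner latent vectors over an n-by-n grid.
# In a real system, corners come from encoding examples with a trained model.
import numpy as np

def blend_grid(tl, tr, bl, br, n=9):
    grid = []
    for i in range(n):
        v = i / (n - 1)                      # top-to-bottom blend factor
        row = []
        for j in range(n):
            u = j / (n - 1)                  # left-to-right blend factor
            top = (1 - u) * tl + u * tr
            bottom = (1 - u) * bl + u * br
            row.append((1 - v) * top + v * bottom)
        grid.append(row)
    return grid

# Demo with random stand-in latents; real corners would be encode(beat).
corners = [np.random.randn(64) for _ in range(4)]
palette = blend_grid(*corners)
center = palette[4][4]   # the center cell mixes all four corners equally
```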

Another form of imitation similar to style transfer is image-to-image translation, which can convincingly change the appearance of a photo or video, allowing users to edit the image's context, such as the time of day, season, or weather.

Nvidia’s image-to-image translator
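Nvidia's exact recipe isn't spelled out here, but the general idea behind unpaired translation of this kind is well illustrated by CycleGAN, a related and widely used method (not necessarily the one Nvidia used): train one generator per direction and add a cycle-consistency loss, so that translating an image to the other domain and back must reproduce the original. A sketch of that loss term, with the generator networks and image batches as hypothetical placeholders:

```python
# Cycle-consistency loss from CycleGAN-style unpaired translation.
# g_day2night / g_night2day are hypothetical generator networks.
import torch.nn.functional as F

def cycle_consistency_loss(g_day2night, g_night2day, day, night, lam=10.0):
    rec_day = g_night2day(g_day2night(day))       # day -> night -> day
    rec_night = g_day2night(g_night2day(night))   # night -> day -> night
    # Round trips must reproduce the inputs, so the generators learn to
    # change style (time of day) while preserving content (the scene).
    return lam * (F.l1_loss(rec_day, day) + F.l1_loss(rec_night, night))
```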

While imitation is interesting, even commercially valuable, it’s not in the true spirit of art. It’s simply reflecting back to us what we’ve already said.

If we want AI to help us say something new, we have to use it as more than a supercharged Xerox machine.

AI as a Collaborator

The next step beyond imitation is developing a collaborative relationship between artist and AI.

Amper is a simple example of evolving imitation into collaboration. This online app allows the user to select instruments, rhythms, styles and tempos to “collaboratively” generate new music.

Amper demo from 2017

NSynth Super is another example of how AI can generate new music and sounds for the musician to work with. It’s a program that uses a neural network to understand the characteristics of sounds, then create completely new tones using the acoustic qualities of the original sounds — so you could get a sound that’s part bassoon and part electric guitar all at once.

Using the dials, musicians can select the source sounds they want to explore, then navigate the new sounds that combine the acoustic qualities of the four source sounds by dragging their finger across the touchscreen.

Demo of NSynth Super from 2018

Without AI as a collaborator to create new blends of old sounds, we would never be able to hear the tones you just listened to in this video.
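The trick behind those in-between tones is, once again, interpolation in a learned latent space, this time over audio: encode two source notes with the trained autoencoder (NSynth's is WaveNet-based), mix the latent vectors, and decode the mixture. A conceptual sketch, with encode and decode as hypothetical stand-ins:

```python
# Conceptual sketch of NSynth-style timbre blending. encode/decode are
# hypothetical stand-ins for the trained WaveNet autoencoder.
def blend_timbres(encode, decode, sound_a, sound_b, mix=0.5):
    z_a = encode(sound_a)                  # e.g., a bassoon note
    z_b = encode(sound_b)                  # e.g., an electric guitar note
    z_mix = (1 - mix) * z_a + mix * z_b    # part bassoon, part guitar
    return decode(z_mix)                   # a new tone between the two
```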

Artists are also beginning to more actively influence and massage the results of the artwork they create with their machine learning algorithms, changing their relationship with the AI to be more of an ideation partner, rather than simply a tool for making new tools.

Mario Klingemann calls himself a “neurographer” because he builds art-generating software by feeding photos, video, and line drawings into code borrowed from machine learning research. His manipulation of the code results in his “deteriorating tech meets Francis Bacon-esque” portraits and abstracts.

A photographer goes out into the world and frames good spots, I go inside these neural networks, which are like their own multidimensional worlds, and say ‘Tell me how it looks at this coordinate, now how about over here?’

Mario Klingemann

Portrait created with AI by Mario Klingemann

How an algorithm is constructed to generate artistic output is becoming an art form in and of itself.

A few short years ago, algoraves began popping up around the world. An algorave could be described as your typical electronic dance music rave, but with a giant projection of the live code the DJ is writing to create the music you’re dancing to.

Source Festival Algorave, 2017

Similarly, Trevor Paglen has long used his art to comment powerfully on the very technologies he creates it with, revealing how they're shaping the state of humanity. In "Sight Machine," he created a live mapping of AI-generated personality and sentiment insights about the members of the Kronos Quartet as they gave a live performance.

By making the conclusions in his algorithm transparent and immediate, it becomes humorously, but painfully, apparent that we're using technology to create snap judgments about people based on superficial appearances.

AI doesn't just collaborate by processing images and sounds through math equations. It can also inform and inspire artists who want to discover new insights, connections, or patterns across a large set of data points, like the trending emotions of the entire world, for example.

Alex Da Kid used Watson’s emotional insights to develop ‘heartbreak’ as the concept for his first song, ‘Not Easy,’ and explored musical expressions of heartbreak by working with Watson Beat. Alex then collaborated with X Ambassadors to write the song’s foundation, and lastly added genre-crossing artists Elle King and Wiz Khalifa to bring their own personal touches to the track. The result was an audience-driven song launching us all into the future of music.

AI as a Creator

This is where we’re truly stepping into the unknown.

We've seen a very small glimpse of what AI as a creator might look like with the work being done at the Rutgers Art and Artificial Intelligence Laboratory in New Jersey. Researchers there have created an AI system for art generation that does not involve a human artist in the creative process, but does involve human creative products in the learning process. The outcome of their system is original artwork that they're testing "Turing-style" against human art. The lab's director, Ahmed Elgammal, explains:

If we teach the machine about art and art styles and force it to generate novel images that do not follow established styles, what would it generate? Would it generate something that is aesthetically appealing to humans? Would that be considered “art”?

We asked our human subjects to rate the degree they find the works of art created by our AI to be intentional, having visual structure, communicative, and inspirational. The goal was to judge whether the AI generated images could be considered art. We hypothesized that human subjects would rate art created by human artists higher on our scales. To our surprise, results showed that human subjects rated the images generated by the AI higher than those created by real artists!

Human subjects tested by the Rutgers AI Lab considered these AI-generated images to be the most like real art
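The lab published this approach as the Creative Adversarial Network (CAN): the generator receives the usual adversarial "does this look like art?" signal from a discriminator, plus a second signal rewarding images that a style classifier cannot confidently place in any established style. A sketch of that style-ambiguity term; the classifier and its logits are hypothetical placeholders:

```python
# Style-ambiguity term in the spirit of CAN (Elgammal et al., 2017):
# push generated images toward maximal uncertainty over known style classes.
import torch
import torch.nn.functional as F

def style_ambiguity_loss(style_logits):
    """style_logits: (batch, K) classifier scores over K established art styles."""
    k = style_logits.shape[-1]
    log_probs = F.log_softmax(style_logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / k)
    # Cross-entropy against a uniform target is smallest when the classifier
    # cannot tell which established style the image belongs to.
    return -(uniform * log_probs).sum(dim=-1).mean()
```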

So can AI create original artwork? It’s possible, yes. But the results are still completely oriented around what we as humans consider to be art.

Art + AI tomorrow

What art has taught us about AI so far

Art is how we explore who we are and who we want to become as AI changes the picture of daily life.

Our whole notion of what defines something as “art” is going to change.

Art can help us more deeply understand what we want to communicate.

It can change what we communicate by virtue of how we create it and who we create it with.

The questions we need art to help answer

We need art to imagine what AI can become, and to understand its impact on who we are becoming.

What does a relationship with a machine look like?
What does it mean to us? To them?

Will having the world’s knowledge at our fingertips change what art communicates and how it connects?

Could our collaboration with AI lead to new kinds of art we’ve never before imagined?

Could it change how we understand each other?
Across boundaries? Even across time?

Is it changing our culture? Is it creating its own?

The possibilities we need art to help us understand

The Red Robot in the Blue Underground Cave by evanlai

There's no denying that, having given machines the same abilities that inspired us to create art (memory, language, expression, understanding, reasoning, learning), they may one day decide to make art of their own.

When the first AI "caveman" picks up a burnt stick to make art, why will it do it?

What will it be trying to understand about itself?

Communicate about its culture?

Express about its…emotions?

What it creates may be so foreign that these metaphors are too human-centric to describe that moment — which I find equal parts terrifying and exhilarating.

I just hope we're mindful enough to remember that this was once us, and what it meant.

Art is always the beginning.

Jennifer Sukis is a Watson AI Practices Design Principal at IBM based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
