First posted on May 16, 2019
A case study on the sale of Portrait of Edmond Belamy at Christie’s, and a place for AI in the art world.
In October 2018, Obvious, a French trio, put an algorithm-generated painting up on the Christie’s auction block. People like to focus on the fact that it sold for $432,500.00, more than 40 times the estimated hammer price, and Christie’s certainly likes to lean into the idea of “signalling the arrival of AI art on the world auction stage” with a splash, but far more curious and more worthy of deeper consideration are the human associations with this spectacle.
“The shadows of the demons of complexity awaken by family are haunting me. Everything was so simple back then.”
- Obvious, 2018, Portrait of Edmond Belamy.
At the time, Obvious were a group of 25-year-old tech and business school graduates, none of them art students. They worked to spread sensationalized press releases overplaying the function of AI as an automaton’s consciousness, capable of full authorship and creative license. One of its members stated in hindsight that this was one of the stupid things they did to get a foot in the door and to grab the attention of the notoriously aloof and insider art world. They played the system and operated within the art world by the art world’s rules. They were attractive young men who addressed hot topics of the day and gained some press notoriety and a small collector base before approaching a legacy auction house that was itself desperate to re-position within the technologically advancing digital world.
But beyond all this marketing is the code behind the algorithm, code that was actually authored by Robbie Barrat, a 17-year-old kid from West Virginia. It was code Barrat wrote in response to a bet with his high school programming club classmates, who were not convinced that technology could create things as well as humans. Within a week he wrote a neural network that could rap, and he posted that code to GitHub.
Barrat taught himself how to code and built his programs using only open-source software. He used Python to develop a generative adversarial network (GAN), a standard AI algorithm first designed by Ian Goodfellow, from whom Edmond Belamy (from the French bel ami, “good friend”) most certainly got his name, to rap like Kanye.
GANs operate with two dueling networks. One, the generator, is responsible for discovering patterns in a dataset and producing imitations, while the other, the discriminator, judges the passability of those imitations against the originals. So when Barrat fed in 6,000 lines of Kanye West’s lyrics, the generator tried to understand what rap lyrics were composed of by parsing through the examples, looking for defining characteristics and common structures, and generated its own samples based on this understanding. Then the discriminator network would judge whether each new sample could pass as something that Yeezy spat out himself.
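The dueling-network dynamic can be sketched in miniature. The toy below is not Barrat’s code: it stands in a one-dimensional bell curve for the lyrics or paintings, a one-parameter-pair line for the generator, and a logistic regression for the discriminator, with gradients worked out by hand. The structure of the loop, though, is the same as in any GAN: the discriminator learns to tell real samples from fakes, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: stand-in for the training set (lyrics, paintings),
# here just samples from a normal distribution centered at 4.
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: maps noise z ~ N(0, 1) to a*z + b; a and b are learned.
a, b = 1.0, 0.0
# Discriminator: logistic regression sigmoid(w*x + c); w and c are learned.
w, c = 0.1, 0.0

lr, batch = 0.02, 128
for step in range(3000):
    # --- Discriminator update: push d(real) toward 1, d(fake) toward 0 ---
    z = rng.normal(size=batch)
    x_real = sample_real(batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * -np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c -= lr * -np.mean((1 - d_real) - d_fake)

    # --- Generator update: push d(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dx = -(1 - d_fake) * w   # gradient of -log d(fake) w.r.t. the fake sample
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, the generator's output distribution should have drifted
# toward the real data's distribution (mean near 4).
fake = a * rng.normal(size=10000) + b
```

When this adversarial loop settles, the generator produces samples the discriminator can no longer reliably distinguish from the real ones, which is exactly the “passability” judgment described above, just at the scale of a single number rather than an image.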
Barrat went on to feed through thousands of images of painted landscapes scraped from the internet to see what his program would create and qualify as good enough.
What does a teenage boy growing up on a rural farm in West Virginia know about art? Maybe a lot, maybe not. But it was his logic, his hedging and guiding of program flow to determine when an output reached its benchmark of passability, that created Edmond Belamy. Contrary to sci-fi movies and Obvious press releases, the algorithmic logic and testing of AI operates less like the start of technologically sentient creativity and more like the surrogate reasoning of its author. Barrat had to create methods and systems to determine what a rap lyric or landscape painting was, and then had to define at what precise point something would go from not a lyric to a lyric, from not a painting to a painting. It would be difficult to quantify the existence of something even in ordinary conversation, let alone in code.
The human bias in art and technology is something the public likes to brush aside and ignore. In 2017, researchers from Rutgers University worked with Facebook to build an AI that would create new images from more than 80,000 paintings from the 15th-20th centuries. The results remarkably passed muster with test groups from Amazon’s Mechanical Turk — a marketplace that crowdsources and brokers human intelligence — who largely believed the images to be manmade originals. But who were those test groups? They were samples of people who may not have any interest or visual discernment in art, let alone be motivated to define or redefine what Art is.
Established AI artists like Mario Klingemann saw the Christie’s auction sale as an irrelevant gimmick. Presenting an image inkjet-printed on canvas, produced from ready-made code, in an extravagant gilt frame does not make it more than an image from ready-made code. Artists in this field saw the work as extremely rudimentary, made without any effort to refine the output image.
“To me, [Portrait of Edmond Belamy] is dilettante’s work, the equivalent of a five year old’s scribbling that only parents can appreciate, but I guess for people who have never seen something like this before, it might appear novel and different.”
- Mario Klingemann, 5908/79530 Self Portraits.
In the end, the insiders of tech saw the insiders of art make faulty human judgements about a subject they don’t wholly understand, when so often the insiders of art look down on the public for not wholly understanding their own perception of elevated culture. Returning to what Walter Benjamin theorized in 1935 about the loss of “aura”: even without a true time and context, without personal skill or authorship, the Portrait of Edmond Belamy did seem to elicit enough aura, with the validation of Christie’s and the backing of the art market, to expand the idea of art to include a place for algorithms.