In 2015 Willem de Kooning’s “Interchange” sold for $300 million, a price which set a new record. In the very same private sale, billionaire Ken Griffin also purchased Jackson Pollock’s “Number 17A” for $200 million.
According to Dublin-based research firm Arts Economics, the global art market supported $63 billion in sales last year, down from $68 billion the year before.
One semester of study at Parsons School of Design will run you about $20,000. The average Parsons graduate carries just over $34,000 in student loan debt.
Replicas of Marcel Duchamp’s iconic ready-made urinal sculpture “Fountain” have an estimated value of up to $2.5 million. The price of an American-Standard urinal: $105.99.
As a market society we have many different modalities in which to evaluate art: it can be a commodity, an investment, an industry. Of course, monetary value isn’t the only way we judge the worth of a work of art. We can discuss whether a piece of art is beautiful or ugly, felicitous or floundering, and talk about the ways in which it has advanced a line of inquiry in new directions.
But how do we square our subjective impressions of art with the objective, data-driven thinking that governs our economy? Labs researcher in residence Seth Kranzler took up these questions as part of his month-long investigation, examining how cutting-edge data technologies can help clarify our intuitive understanding of the value of art. Focusing specifically on generative art, Kranzler set out to develop a neural network that would generate not only a work of art, but also a review of that artwork.
Art can be leveraged as a financial asset in a number of ways. The finance group Athena helps collectors use their pieces as collateral in a variety of transactions. Enterprises such as Athena’s depend on the ability to estimate the value of a work, and one of the best ways to gauge the amount that a piece can command is by looking to critical reviews as a proxy for expert opinion. In a financial landscape that is increasingly determined by algorithms, and an artistic scene that is more interested than ever in the generative possibilities of software, Kranzler’s project seeks to close the loop — what if the reviews themselves were also generated by computers?
Generative art, for the unfamiliar, refers to an approach to making art that relies on a rigorous adherence to a pre-ordained process. This kind of work is sometimes also called “procedural art” or “algorithmic art.” While generative art need not be digital (notable early examples include surrealist parlor games as well as the work of John Cage, Sol LeWitt, and the French group Oulipo), computers are especially well-suited to execute processes as part of an autonomous system.
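To make the idea concrete, here is a minimal sketch (not from Kranzler’s project — the function name and character palette are invented for illustration): a generative piece can be nothing more than a fixed procedure plus a starting condition, executed without further intervention by the artist.

```python
import random

def generative_piece(seed, width=16, height=8, palette=" .:#"):
    """Execute a fixed procedure: fill a grid by drawing characters
    from a palette with a seeded random number generator. The piece
    is fully determined by the rule plus the seed -- the artist
    chooses the process, not the individual marks."""
    rng = random.Random(seed)
    rows = []
    for _ in range(height):
        rows.append("".join(rng.choice(palette) for _ in range(width)))
    return "\n".join(rows)

print(generative_piece(42))
```

Running the procedure twice with the same seed yields the identical “artwork,” which is exactly the quality — rigorous, repeatable process — that distinguishes generative work from improvised mark-making.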
Generative art today has a relationship with computers that is more intimate than ever. With the recent explosion in computational power and increasingly sophisticated methods for producing and parsing data, artists and researchers are confronted with a glut of raw materials and strange new tools to make use of — not to mention the gargantuan task of trying to make sense of it.
Kranzler conceived his project as a critique of generative art, and a provocation to prevailing trends in contemporary generative art, especially generative art that employs machine learning. “A lot of machine learning art concerns itself with large datasets, where the dataset informs what is made,” he said. Kranzler’s concern is that machine learning art produces work that is technically overdetermined but lacking a critical dimension, which he calls “intention.”
“I see artists being more impressed with the absurdity of [what is produced.] It’s a style, but it is becoming the default style of a lot of generative projects.”
It is not a coincidence that generative art developed in tandem with industrial modes of production. Just as the advent of photography prompted painters to abandon the project of producing life-like paintings, so did the introduction of other new technologies push artistic discourse further and further down the road to abstraction, while at the same time providing artists with new tools to explore. Our economy has progressed from the assembly lines of industrial capitalism to the assembly code underlying the algorithms that power high-frequency trading, each incremental step of the way generating new possibilities for understanding the world as data.
In this respect generative art is the latest chapter in the epic story of the complex interactions between technology, artists, and the world they seek to represent. But in this episode the means of representation have become so abstract — both in art and in commerce — that it can be extremely challenging for audiences to know how to evaluate them. We are now in the moment when even the linear narrative of how we arrived here begins to falter and crack like an old fresco. “We’ve reached the last leg of it,” said Kranzler.
“Part of what I want to explore is the idea that you can project so much onto a piece of art,” he said. “When art is highly generative and removed from the intentions of the artist, it puts the full weight of the art into the interpretation of the viewer.”
Kranzler’s project is a playful way of nudging the conversation back towards the intention of the artists working in this space. “There is a sense that data has more influence over the work than the artist does… [This kind of art] doesn’t start with someone saying ‘I’m looking to create this.’ I wanted to criticize how process-centric a lot of this work can be.”
Kranzler began training a neural network on a corpus of both images and text, so that the network would arrive at its own way of making associations between them. “I thought with a sufficient corpus of paintings, the computer could understand some abstract correlation between the vocabulary used to describe the paintings and the paintings themselves,” he said.
Kranzler used Ryan Kiros’s Neural Storyteller (“a recurrent neural network for telling little stories about images”). The algorithm works by mapping Microsoft COCO captions onto images to establish a baseline that the network can use to interpret the contents of an image, while simultaneously training a parallel embedding on a corpus of text. The ideal result is an algorithm that is able to look at an image and generate text about its contents according to a certain style. In this case Kranzler’s algorithm would be looking at art and generating text in the style of an art review.
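Kiros describes the core move as “style shifting”: an encoded caption vector is translated from caption space toward the target corpus by simple vector arithmetic — subtract the mean of the caption embeddings, add the mean of the style-corpus embeddings. A toy sketch of that step (the function name and the tiny hand-made vectors below are illustrative, not code from the actual repository):

```python
def style_shift(thought, caption_mean, style_mean):
    """Neural Storyteller-style 'style shifting' in embedding space:
    F(x) = x - mean(caption vectors) + mean(style-corpus vectors).
    Here vectors are plain Python lists for illustration; the real
    system uses high-dimensional skip-thought vectors."""
    return [x - c + s for x, c, s in zip(thought, caption_mean, style_mean)]

# Toy 3-dimensional example: an "image caption" vector is nudged away
# from generic caption space and toward an "art review" style space.
caption_vec = [1.0, 2.0, 3.0]
caption_mean = [1.0, 1.0, 1.0]   # hypothetical mean of COCO caption vectors
review_mean = [0.0, 3.0, -1.0]   # hypothetical mean of art-review vectors

shifted = style_shift(caption_vec, caption_mean, review_mean)
print(shifted)  # [0.0, 4.0, 1.0]
```

The shifted vector is then decoded by a language model trained on the style corpus, which is what lets the same image yield a caption, a story, or — in Kranzler’s case — an art review.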
For the images, Kranzler trained his network on a corpus of modernist paintings. For the parallel text-embedding, Kranzler used The Guardian’s Art Critical, Guy Debord’s Society of the Spectacle, John Berger’s Ways of Seeing, Robert Hughes’s Nothing If Not Critical, and all of the art descriptions available on MoMA’s website.
This is an ambitious project, and Kranzler is still working to perfect the results. “The whole category [of generative digital art] is in its infancy, and things are really developing. But I want to express a critical voice and accelerate [this conversation.]”
Photo/Graphic Credits: Joelle Fleurantin, David Huerta and Fletcher Bach
Our team consists of engineers and mathematicians, story-tellers and data artists. We interrogate big datasets to uncover hidden trends, make animations that set beautiful geometries in motion, and train machine-learning algorithms to hew insights from raw numbers. Our tools allow us to examine the details of our economy and our world with extreme precision, and to simplify complex information accurately. We are dedicated to finding exciting new ways of helping people see the insights beyond the rating. Learn more at http://dbrslabs.com/ — Amelia Winger-Bearskin, Director