Big Data, Small Communities: Navigating the World of Computer-Generated Art

Sam Moghadam
Mar 24, 2020

In 1984, Warner Books published a peculiar volume of poetry and prose by the author Racter, called The Policeman’s Beard is Half Constructed. If “Racter” sounds like a strange pen name, that’s because it isn’t a person, but a computer program — what might be called an artificial intelligence, or AI — created by William Chamberlain and Thomas Etter. Laying claim to being the first-ever example of a fully computer-generated book, The Policeman’s Beard is Half Constructed falls somewhere between abstract poetry and a ponderous stream of consciousness. In his introduction to the book, Chamberlain wrote:

“Computers are supposed to compute. They are designed to accomplish in seconds (or microseconds) what humans would require years or centuries of concerted calculation effort to achieve. They are tools we employ to get certain jobs done. Bearing this in mind, the question arises: Why have a computer talk endlessly and in perfect English about nothing? Why arrange it so that no one can have prior knowledge of what it is going to say?”

On a rainy Thursday evening in March, somewhere between 40 and 50 people communed in a basement in the Chelsea neighborhood of Manhattan to take part in an event that straddled the line between literary reading and technology conference. Sitting between the stairway entrance and a wall of homemade arcade-game cabinets, the audience, seemingly made up of twenty- and thirty-something New Yorkers, laughed, cheered, and drank craft beer as a series of software-savvy writers and artists presented short talks and works-in-progress. It wasn’t a huge group, but it was enough to fill the available seating and then some; latecomers had to stand in the back or by the stairs. There was a distinct impression that many of the attendees and organizers already knew each other, or at least knew each other’s faces.

The poet Allison Parrish demonstrated a custom artificial intelligence system that can, among other things, combine input words into novel portmanteaus. Katy Gero, a PhD student in computer science at Columbia University, revealed an online thesaurus she built to suggest words in the style of famous writers like Charles Dickens, James Joyce, and Mahatma Gandhi. Perhaps the most entertaining speaker of the evening was Jamie Brew, the founder and CEO of Botnik, a satirical artificial-intelligence-based media company. Brew, wielding a microphone and keyboard, performed an absurd and catchy pop song — “I Have to Get My Car” — with nonsensical lyrics “written” entirely using a predictive text engine similar to the one that assists people when they’re composing text messages. The chorus of the song: “I have to get my car / I have to get my car / I have to get the car / and it will be nice to have.”
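Predictive text engines of the kind Brew performed with generally work by suggesting whichever words most often followed the current word in their training text. A minimal sketch of the idea, using the song’s own chorus as a toy corpus (the function names here are illustrative, not Botnik’s actual code):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word, n=3):
    """Return the n words most often seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(n)]

corpus = "i have to get my car i have to get the car and it will be nice to have"
model = train_bigrams(corpus)
print(suggest(model, "to"))  # → ['get', 'have']
```

Press the top suggestion over and over and you get exactly the kind of looping, almost-sensible lyric Brew sang.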

Though the idea of employing such complex computer-science concepts in service of something as intuitive as writing might seem eccentric to a typical literary audience, it’s a given at WordHack, a monthly event series that explores the intersection of language and technology. WordHack, which is hosted at the non-profit video game/art gallery Babycastles, was started in 2014 by Todd Anderson, a writer and educator who moved to New York after graduating from college in Minnesota. Anderson, who is 30 years old, had been participating in local poetry communities for years and had become increasingly interested in the possibilities that could arise from incorporating software into his poetic practice. He began by writing a simple script that would display his poems on the screen, line by line. At poetry readings, he would connect his computer to a projector and perform in this manner, allowing the audience to reflect back on the poem as it unfolded — something that’s easy enough to do while reading a poem, but nearly impossible to do while listening to one in a live setting.
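Anderson hasn’t published that script, but the core idea — reveal the poem a line at a time while keeping earlier lines on screen — might amount to only a few lines of code. A guess at its shape (the function name and pacing are my assumptions):

```python
import time

def perform(poem, delay=2.0):
    """Show a poem one line at a time, keeping earlier lines on
    screen so the audience can look back as it unfolds."""
    shown = []
    for line in poem.strip().splitlines():
        shown.append(line)
        print("\n".join(shown), end="\n\n")
        time.sleep(delay)
    return shown
```

Hooked up to a projector, even something this simple changes what a live reading can do.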

“I found a lot of the things I was trying to do or that I thought were interesting in a poem wouldn’t come through on the first reading,” says Anderson, “and so I was trying to think of ways I could do a more complex thing with the text and still have audiences get it.”

Eventually, Anderson started to seek out like-minded writers — those who were composing not just with pens and word processors, but with computational scripts and algorithms.

“A lot of the people that are working with this stuff don’t identify as being someone who works with language and technology, even though they are,” says Anderson. “And they aren’t naming themselves and looking for each other.”

At least, that’s how it felt in 2014. Nearly five years later, WordHack, still hosted every month by Anderson, continues to provide a social and educational space for those who are interested in the intersection of technology and not only language, but all art forms. Each month’s event begins with an “open projector” format, the computational art world’s equivalent of an open mic — anyone can sign up to present their work for five minutes in front of the audience. Then, there are three or four scheduled speakers or performers, each of whom delivers a lengthier talk on almost anything, so long as it touches on both technology and art in some way.

“It’s just dedicated to being a big wide tent,” says Anderson. “People making games, people doing computational linguistic research, poets, artists…”

In terms of commercial interest and mainstream exposure, it’s not the literary world, but the visual art world, that has been the most (or at least the most recently) welcoming of computational practices. In October of 2018, a painting called Portrait of Edmond Belamy, which was apparently authored by an artificial intelligence, made a splash in the art world when it sold for $432,500 at the British auction house Christie’s. The signature on the painting, a complex mathematical function involving logarithms and min/max functions, indicates to the viewer that the artist isn’t human.

Of course, on some level, the author is in fact human — or humans, to be precise. The French trio — Pierre Fautrel, Hugo Caselles-Dupré, and Gauthier Vernier, operating under the collective moniker Obvious — who technically created the painting did so using a cutting-edge artificial intelligence technique known as machine learning, in which a software system is “trained” on a large data set (in this case, a data set of historical portraits) and is able to output entirely new works in a similar style. Obvious, now quite famous in the art community, has continued to sell similar pieces of computer-generated art.
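The specific technique behind the portrait is a generative adversarial network, or GAN: two models trained against each other, a generator that produces images and a discriminator that tries to tell fakes from the training data. The formula signed on the canvas is the GAN’s minimax objective. A toy numeric sketch of the two competing losses (illustrative only, not Obvious’s actual training code):

```python
import math

# The painting's "signature" is the GAN minimax objective:
#   min_G max_D  E_x[log D(x)] + E_z[log(1 - D(G(z)))]
# where D scores how "real" an image looks and G generates fakes.

def discriminator_loss(d_real, d_fake):
    """D wants real images scored near 1 and fakes near 0."""
    return -(math.log(d_real) + math.log(1 - d_fake))

def generator_loss(d_fake):
    """G wants its fakes mistaken for real (scored near 1)."""
    return -math.log(d_fake)

# At the equilibrium where D can no longer tell real from fake,
# it scores everything 0.5, giving a loss of 2·ln 2 ≈ 1.386:
print(discriminator_loss(0.5, 0.5))
```

Training repeats these two steps over thousands of portraits until the generator’s output passes for the real thing.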

The sale of Portrait of Edmond Belamy was undoubtedly a success from a commercial perspective; the final sale price was nearly 45 times the initial estimates. And though it was the first AI-generated painting in history to go to auction, it certainly wasn’t the last. Earlier this year, the HG Contemporary gallery in Chelsea, perhaps responding to the financial interest in computationally generated paintings, presented Faceless Portraits Transcending Time, a full exhibit consisting of AI-generated paintings by Dr. Ahmed Elgammal, a professor of computer science at Rutgers University, and AICAN, an artificial intelligence that is being billed as both his creation and his artistic collaborator. According to the gallery, it is the first-ever solo gallery exhibit dedicated to an AI artist.

Despite the enormous financial success of the Christie’s sale, Portrait of Edmond Belamy and its creators have been the subject of a significant backlash among those in the wider art community who have been working with similar computational tools for years. One criticism of the piece is simply that it doesn’t seem to be very aesthetically interesting.

“It’s not a bad image, but it was kind of just an average output,” says Tyler Hobbs, an Austin-based artist who uses programming languages (but not artificial intelligence systems) to generate paintings inspired by the great abstract expressionists — Mark Rothko, Franz Kline, Agnes Martin. “I’d already seen a lot [of artwork] that was really similar to that that had come before it, and it didn’t really say anything new on that front, and it didn’t have any particular emotional resonance.”

Zachary Kaplan, executive director of the independent digital-art organization Rhizome, has seen this sort of criticism applied to many such AI-based works.

“The dig on machine-learning-generated art has been that it basically just recapitulates existing aesthetics, existing ideas,” says Kaplan. As a counter-example, he points to the work of the Berlin-based conceptual artist Harm van den Dorpel, whose series Death Imitates Language (2016) begins with two images, and then uses machine learning to systematically generate new images by mixing elements of the existing images, in a sort of imitation of DNA being passed down over generations. The system has “bred” over 100,000 unique images to date.
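Van den Dorpel’s actual system is more elaborate, but the DNA-like mixing can be sketched as a simple crossover over pixel arrays — each pixel of the child inherited from one parent or the other. A toy sketch (the function name and the 50/50 mixing mask are my assumptions):

```python
import numpy as np

def breed(parent_a, parent_b, rng):
    """Make a child image by taking each pixel from one parent or
    the other at random, like genes crossing over."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

rng = np.random.default_rng(0)
a = np.zeros((8, 8))      # stand-in for the first parent image
b = np.ones((8, 8))       # stand-in for the second parent image
child = breed(a, b, rng)  # a patchwork of both parents
```

Run the loop for enough generations — with an artist selecting which children get to reproduce — and you arrive at the 100,000-plus images in the series.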

“It’s not just a system creating something that spits out a recombobulation of something that’s put in,” says Kaplan, “but rather an artist continually pressing and working alongside and prodding the algorithm and the system to go beyond themselves, and that’s really an interesting frontier.”

The images in Death Imitates Language are certainly harder to categorize than those in Obvious’s oeuvre. They’re less representational, more abstract, more geometric; there’s no obvious predecessor. By contrast, Portrait of Edmond Belamy could be seen as precisely a “recombobulation” of the material making up the data set used to create it, i.e., canonical portraits made between the 14th and 20th centuries, also known as “old masters” paintings.

Hobbs cites Mario Klingemann, a prominent German AI artist, as another example of someone whose work defies this tendency toward the derivative. Incidentally, Klingemann’s digital installation Memories of Passersby I (2018) became the second AI artwork to go to auction at a major auction house (Sotheby’s this time).

“Some of the work that Mario makes, for example, tends to combine two or three different influences,” says Hobbs. “It’s not just purely trained on these ‘old masters’ paintings, but maybe he adds in additional steps that mix it with some other wildly different influence, and it’s kind of the combination of these things that produces really interesting output.”

The ire of the tech-art community towards the Christie’s sale isn’t just due to aesthetic mediocrity, however. It’s also rooted in the narrative that the sale has helped to perpetuate — a narrative that insists on the artistic agency of algorithms and machines. Nowhere is this idea more efficiently communicated than in that confusing signature in the corner of Portrait of Edmond Belamy.

“[The mathematical signature] makes absolutely no sense to me,” says Allison Parrish, the poet, programmer, and recurring WordHack presenter who also teaches at New York University’s Interactive Telecommunications Program. “It would be like a painter painting a painting of a paintbrush for the signature, right?” Parrish, who, unlike Hobbs, specifically uses artificial intelligence techniques to generate her work, doesn’t believe it makes any sense to give authorial credit to the machine in such cases.

The idea that an algorithm authored the portrait is particularly problematic in this case, because there’s reason to believe that one of the primary creative forces behind Portrait of Edmond Belamy — a 19-year-old AI artist named Robbie Barrat — has gone largely uncredited.

On March 24, 2018, Barrat posted a tweet juxtaposing the portrait with images generated by a machine learning system he created well before the Christie’s auction. The similarities are striking. “Does anyone else care about this?” wrote Barrat. “Am I crazy for thinking that they really just used my network and are selling the results?”

As it turns out, he wasn’t crazy; Obvious essentially used code that Barrat had uploaded to GitHub, a code repository service, to generate their work. There’s nothing illegal about this use of the code; Barrat had made it available under a free license — anyone could make use of it. Still, neither Obvious nor Christie’s was initially transparent about the use of borrowed code when the portrait went to auction, a decision that mired both parties in considerable controversy after the fact. “Maybe what [Obvious] did isn’t questionable, but maybe what Christie’s did is questionable,” says Hobbs. “I think they probably failed to do their research or fully inform the bidders about the nature of what was being auctioned.”

Regardless of the ethics of the matter, though, it remains clear that the long process that took the portrait from inception to auction was full of careful human consideration.

“[Obvious] made very, very particular decisions along every step of the process,” says Parrish, “from whose algorithm they used to the data set that they used, to the medium that they decided to put it on, to the way that they conducted the PR around it and stuff like that — it’s just drenched in intention. Maybe more human intention than most works of art, in fact.”

For Parrish and many others, algorithms are nothing more than another tool in a lineage of tools that have allowed artists to experiment with form and content.

“I don’t see the techniques that I use as being categorically different from any of the techniques that poets use to make words unusual,” she says. “Poets use free-writing as a technique, right? Just sit down for five minutes and write. That’s an exercise that poets use that’s not computational, but it’s dissociative — it’s supposed to take you away from your conscious thinking. And I view computation as a similar kind of tool.” Kaplan agrees, noting that the main difference between computing and other sorts of artistic tools is simply that the former isn’t all that intuitive or easy to learn, at least for some. “It is another tool — that tool has very specific barriers to entry, which is maybe a limiting factor right now.”

So why is it that Christie’s, HG Contemporary, and other institutions want us to believe that machines are suddenly the artistic equals of humans, destined one day to replace them?

“It’s a story about labor, right?” says Parrish. “People who want us to take computational decision-making for granted are the same people that want us to think that computers can make paintings, and I think that that’s really dangerous. So for me, I don’t see any agency in the system that I make, and I don’t see any particular voice that belongs to it.”

There has been widespread institutional investment in artificial intelligence, and machine learning specifically, over the last decade. It is the technology that drives your Facebook feed, your Netflix queue, your Google search suggestions, and, though you might not own one just yet, your self-driving car. With the sale of Portrait of Edmond Belamy, it seems that the titans of the art world have followed suit.

In a blog post by Christie’s about the historic sale, titled “Is Artificial Intelligence Set to Become Art’s Next Medium?”, the auction house is unambiguous in stating that the painting “is not the product of a human mind. It was created by an artificial intelligence, an algorithm defined by that algebraic formula with its many parentheses.” But how accurate is this claim? After all, as we’ve already established, at least three human beings — aided by hundreds of years’ worth of paintings — guided that artificial intelligence toward its goal, just as large teams of software engineers at big tech companies create the systems that power self-driving cars.

“When Uber and Google have what are called self-driving cars — they’re not really self-driving,” says Nick Montfort, a computational poet and professor of digital media at the Massachusetts Institute of Technology. “They’re driven by massive stores of data.”

“When somebody puts on their painting or their novel or whatever and says this was written by such-and-so and Novel-Bot 7000 or whatever, I look sideways at that, because I think that it’s dishonest and it’s moving responsibility away from your own decision-making processes,” says Parrish. “There’s no other entity out there that’s making those decisions for you. All of that is the culmination of humans making decisions.”

So, if anything, the Christie’s sale is indicative of a larger tension within the field of artificial intelligence — a tension between the relatively small groups of humans who create machine learning algorithms and machine-learning-based tools, and the massive data sets that are needed to drive those algorithms.

“[Big data sets] exist now in ways that they didn’t, but they are also under a particular corporate control,” says Montfort. “Being able to train Word2vec [a group of machine learning systems used to reconstruct the linguistic contexts of words] on the same amount of data that Google has access to cannot be done by anyone in the world outside of researchers at Google. Only they have that quantity of data.”
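Word2vec itself learns from corpora of billions of words — which is exactly Montfort’s point about scale — but the underlying distributional idea, that words appearing in similar contexts end up with similar vectors, can be shown at toy scale with a plain co-occurrence matrix (a sketch of the principle, not Word2vec’s actual skip-gram training):

```python
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
words = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(words)}

# Count co-occurrences within a one-word window on either side
counts = np.zeros((len(words), len(words)))
for line in corpus:
    toks = line.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - 1), min(len(toks), i + 2)):
            if j != i:
                counts[index[w], index[toks[j]]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "cat" and "dog" share contexts, so their rows point the same way
print(cosine(counts[index["cat"]], counts[index["dog"]]))  # ≈ 0.91
```

On three sentences the trick barely works; what Google has, and almost no one else does, is enough text for it to work astonishingly well.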

So, while anyone can pick up a laptop and learn to code and create works of generative art, not everyone has access to the kinds of data and the machine-intensive resources needed to push the envelope of artificial intelligence. In fact, almost no one has those things, except for the biggest technology companies in the world, many of which are built on the foundation of vast stores of their users’ data.

“That’s the scary thing about AI,” says Kaplan. “The risk is that we’re just a data set.”

Perhaps it is this tension between the endless creative possibilities of AI and the problematic power dynamics inherent in its data-fueled engine that has led so many computational artists towards very self-reflexive projects — projects that interrogate the very technology used to create them. For example, the artist Stephanie Dinkins’s project Not the Only One is, according to the artist’s website, “a voice-interactive AI designed, trained, and aligned with the needs and ideals of black and brown people who are drastically underrepresented in the tech sector.”

In a slightly different way, the media put out by Botnik — such as Harry Potter and the Portrait of What Looked Like a Large Pile of Ash, an absurd computer-generated text trained on the Harry Potter novels — seem mainly to point to the current limitations of machine learning technology. When Botnik’s founder and CEO, Jamie Brew, sang that ridiculous song at WordHack in March, the ensuing laughter was almost like a sigh of relief — a confirmation that machine learning does, at least for the time being, still require some old-fashioned human judgment, or what Tyler Hobbs calls “curation,” in order to produce lyrical content that is meaningful for humans. And it’s only in a communal setting like WordHack that such a moment could be had in the first place.

“I think a lot about computers and more specifically the internet as a printing-press-level shift for culture. And when you think about the printing press, there’s lots of business opportunities,” says Anderson, the WordHack founder. “But then there’s all kinds of new creative things that, if you can mass print your work, [there are] all these kinds of communities that are allowed by this.” Most of the work presented at WordHack will never make any money, let alone $400,000 like Portrait of Edmond Belamy. But for now, the spirit of artistic exploration is enough to sustain this particular community — and perhaps others that are further underground.

Chamberlain’s introduction to The Policeman’s Beard is Half Constructed goes on to answer its own question, to provide a justification for why we might want to use computer systems to generate writing that a human would never conceive of:

“Why? Simply because the output generated by such programming can be fascinating, humorous, even aesthetically pleasing. Prose is the formal communication of the writer’s experience, real and fancied. But, crazy as this may sound, suppose we remove that criterion: suppose we somehow arrange for the production of prose that is in no way contingent upon human experience. What would that be like? Indeed, can we even conceive of such a thing?”


Sam Moghadam

Technical Program Manager. Graduate of Columbia University’s dual degree program in Journalism and Computer Science.