Bots: A definition and some historical threads

Allison Parrish
Data & Society: Points
Feb 24, 2016

I am a poet and artist, and I make Twitter bots. The term “bot” encompasses many different kinds of software agents, from conversation simulators like Eliza, to programs that write stories about sports events without human intervention, to automatically created social media accounts that spam hashtags. The bots I make have an artistic and literary bent: for example, I made @everyword — which tweeted every word in the English language in alphabetical order over the course of seven years — and more recently @the_ephemerides, which tweets computer-generated poetry juxtaposed with NASA imagery. I’m part of a community of bot-making artists (loosely known as #botALLY) who are taking the canvas of social media and covering it with computer-generated writing and other kinds of generative art.

As part of Sam Woolley’s provocateur-in-residence workshop at Data & Society, I was asked to write a provocation regarding automated agents and bots from my perspective as a poet and artist. In response, I offer the following explanation of what, in my view, a bot is. After all, what in this world is more provocative than a definition?

I call my model for defining bots the “PUDG” model. According to this model, bots are:

  • Procedural
  • Uncreative
  • Data-driven
  • Graffiti

Bots are procedural

A bot’s content is automatically generated without human intervention, using a set of predetermined rules and procedures. In this sense, bots are among the latest developments in a literary practice that stretches back at least to John Clark’s “Eureka Machine,” created in 1845. The Eureka Machine was not a general-purpose computer but a cabinet-sized, purpose-built machine that produced randomly generated poetry (in particular, Latin hexameter verse). Examples of the use of procedure in 20th- and 21st-century literary arts include Jackson Mac Low’s Asymmetries, Charles Hartman’s “Sentences,” and Nick Montfort’s ppg256, a sequence of short Perl programs that generate poetry.

THE EUREKA. Illustration from The Illustrated London News, July 1845.

Spammers, of course, use procedural writing because it makes it possible to create millions and millions of tweets (or blog comments, or e-mails, or text messages, etc.) without expending any human labor, beyond programming the procedure. Poets often use procedural writing methods because they produce unexpected turns of phrase: juxtapositions of words and concepts that would never occur to a writer using only their inspiration and intuition.

A Twitter bot that exemplifies the procedural nature of bots is Liam Cooke’s @poem_exe, which produces ingenious collage haiku (and haiku-like poems) by randomly juxtaposing lines from an existing corpus of haiku. These poems are sometimes humorous or nonsensical, but more often than not they’re simply beautiful: peaceful, serene, evocative.
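The collage method described above can be sketched in a few lines of Python. This is not poem_exe’s actual code, and the corpus here is a hypothetical placeholder of a few familiar haiku lines; the point is only the procedure itself, drawing one line from each positional pool:

```python
import random

# A minimal sketch of collage-haiku generation: draw one line from each
# positional pool of a small corpus. The corpus below is a placeholder,
# not poem_exe's real source texts.
CORPUS = {
    "first": ["an old silent pond", "autumn moonlight", "over the wintry wood"],
    "middle": ["a frog jumps into the pond", "a worm digs silently", "winds howl in a rage"],
    "last": ["splash! silence again", "into the chestnut", "with no leaves to blow"],
}

def collage_haiku(rng=random):
    """Assemble a three-line poem by drawing one line from each pool."""
    return "\n".join(rng.choice(CORPUS[slot]) for slot in ("first", "middle", "last"))

print(collage_haiku())
```

Even a toy like this shows where the “unexpected turns of phrase” come from: the juxtapositions are chosen by the procedure, not by any one author’s intuition.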

Bots are uncreative

I’m using “uncreative” here in the sense coined by Kenneth Goldsmith, and otherwise referred to as “unoriginal genius” by critic Marjorie Perloff, to mean writing that concerns itself with categorizing, remixing, and re-enacting pre-existing textual artifacts. Uncreative writing is produced not through “inspiration,” but by approaching an existing text and re-reading it, drawing out something new and unexpected in the process. One example of non-bot uncreative writing is Caroline Bergvall’s Via, which collects and juxtaposes 48 different English translations of the opening tercet of Dante’s Divine Comedy. The piece highlights not the text, but the decisions made in the act of translation itself.

Darius Kazemi’s @VeryOldTweets is a fine example of an “uncreative” Twitter bot. The bot retweets tweets from the ancient (now nearly prehistoric) era of Twitter — 2006 — and in the process brings to light how much the genre of the “tweet” has changed from the earliest days of the service. Joe Fox’s @AndromedaBot is likewise uncreative, taking as its “corpus” an extraordinarily high-resolution photograph of the Andromeda galaxy recently released by NASA, and “working through” the corpus by simply tweeting smaller, randomly cropped portions of the image.

Bots are data-driven

By “data” here I mean “lots and lots of data.” Bots today operate on much more data and use much more sophisticated statistical techniques than were available to 20th-century writers who used procedural techniques. Jonathan Harris and Sep Kamvar’s We Feel Fine, produced in 2005, is an early and canonical example of data-driven art: the project harvested blog posts and status updates from across the web in real time, categorized them according to demographics, and made them available through a web-based visualization interface.

Contemporary botmakers regularly make use of large corpora, and in particular the corpus of Twitter itself. Ranjit Bhatnagar’s celebrated @pentametron finds tweets written (inadvertently) in iambic pentameter, and juxtaposes them with others to create an endless stream of rhyming couplets. Ivy Baumgarten’s @AndNowImagine finds tweets that contain the word “imagine” in an imperative context and randomly arranges them in humorous combinations (“Imagine getting drunk on the moon. Now imagine the delinquency we could perpetrate if we really put our minds to it”).
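A filter like @pentametron’s can be gestured at with a toy stress check. This is not Bhatnagar’s actual method (real systems look up word stresses in a pronouncing dictionary such as the CMU Pronouncing Dictionary); the tiny hand-made dictionary below is a placeholder, with “0” marking an unstressed syllable and “1” a stressed one:

```python
# A toy sketch of the kind of stress-pattern filter a bot like @pentametron
# might apply. STRESSES is a hand-made placeholder dictionary mapping words
# to their syllable stress patterns ("0" unstressed, "1" stressed).
STRESSES = {
    "i": "0", "want": "1", "to": "0", "eat": "1", "a": "0",
    "slice": "1", "of": "0", "pie": "1", "tonight": "01",
}

def is_iambic_pentameter(text):
    """True when the words' stresses form ten alternating syllables (da-DUM x5)."""
    pattern = "".join(STRESSES.get(word, "?") for word in text.lower().split())
    return pattern == "0101010101"

print(is_iambic_pentameter("i want to eat a slice of pie tonight"))  # True
```

The data-driven part is everything this sketch omits: a bot like this only works because it can sift an enormous stream of tweets for the rare few that happen to scan.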

Bots are graffiti

The characteristic of bots that distinguishes them most readily from other kinds of poetry and art is that they are interventions in a public space (to the extent that, e.g., Twitter can be defined as a public space). In this sense, bots are a kind of graffiti, and botmakers have a lot in common with other artists who work with language in public spaces, like Jenny Holzer (whose projected aphorisms on the sides of buildings are often perfectly tweet-length) and Shelley Jackson (whose projects “Snow” and “Skin” operate by putting words on unusual surfaces and in unusual locations).

Casey Kolderup’s @wowwwrude Twitter bot is especially graffiti-like: if you follow the bot it will occasionally and randomly reply to your tweets with the phrase “wow rude.” Katie Rose Pipkin’s @tiny_star_field is another kind of graffiti: it tweets visual arrangements of star-like Unicode characters, temporarily turning your Twitter feed into a view of a clear night sky.

As procedural, uncreative, data-driven graffiti, bots have the ability to discover and then bring to light unusual and unconventional combinations of words and ideas. Because of this, it’s easy to conclude that bots, like other art of their ilk, are inherently politically progressive: bots present alternative ideas about how the world could be, as opposed to the way it currently is.

But each one of the four characteristics above can act in service of conservatism as well. Having a platform for publishing endless procedural (and often nonsensical) text is itself a kind of privilege, and can be dangerous when there are so many who have something important to say but are drowned out by lack of access, or by noise that is difficult to filter. Textual appropriation can be a powerful tool of political and cultural critique, but can also be used to silence and render invisible voices that are already marginalized. Data-driven art can expose inequality (as is the case with Simon Lawrence and Michele Lent Hirsch’s excellent @stopandfrisk bot), but data harvesting in the form of government and corporate surveillance can stifle dissent. Unexpected interventions in public space, even when attempted with good intentions, can inadvertently make those spaces unsafe or inhospitable to others.

In his essay “Bots should punch up,” Leonard Richardson says that “you can’t say absolutely anything and expect ‘That wasn’t me, it was the dummy!’ to get you out of trouble.” The appearance of procedurality tends to dilute the sense of authorial responsibility, which can be dangerous. It’s important to remember that automated agents always reflect and carry out the will of their creators, and this holds true regardless of how you define what a bot is.

Points/talking bots: “Bots: A definition and some historical threads” is an output of a weeklong workshop at Data & Society that was led by “Provocateur-in-Residence” Sam Woolley and brought together a group of experts to get a better grip on the questions that bots raise.

Frenzied, innumerable, included in the present classification. Co-creator of Rewordable and full-time faculty at NYU’s Interactive Telecommunications Program.