Some of us are born in orbit
How permeable is the boundary between human and machine? Every year, our devices get a little smarter, a little more responsive, a little more attuned to our needs. Perhaps you use Siri or Alexa; maybe you clean your house with a Roomba or have read about medical robots being used in hospitals. What happens when, thanks to technological advances, our computers acquire intelligence to rival our own? And how will we know?
These questions were anticipated nearly 70 years ago by the computer scientist, mathematician, and cryptographer Alan Turing. In 1950, he proposed the Turing Test — a test of “machine intelligence,” the term used before the field of artificial intelligence (AI) had a name. The test takes the form of an imitation game, in which a judge poses questions to several interlocutors through written communication and must determine whether each is human. If the judge can’t tell from the responses whether they are talking to a machine or a human, the machine has passed the test. Some consider the Turing Test to be more than just a measure of a computer’s ability to think; it is often interpreted as a test of consciousness.
Franny Choi’s poem “Turing Test,” from the cyborg-themed collection Death by Sex Machine, is framed as the transcript of a conversation between an interviewer and (presumably) an AI. The most unsettling thing about the speaker’s responses in the poem is how, though the strings of words and associations are clearly not in human syntax, there seems to be a real center of emotion beneath the words, a pulsing heart of pain that we can’t help but identify with. Each response to the interviewer’s questions is simultaneously clear and garbled. We know the words individually, but they aren’t quite aligned in ways that make sense. It begins with a simple exchange between an interviewer and the unnamed speaker:
// this is a test to determine if you have consciousness
// do you understand what i am saying
in a bright room / on a bright screen / i watched every mouth / duck duck roll / i learned to speak / from puppets & smoke / orange worms twisted / into the army’s alphabet / i caught the letters / as they fell from my mother’s mouth / whirlpool / sword / wolf / i circled countable nouns / in my father’s science papers / sodium bicarbonate / NBCn1 / amino acid / we stayed up / practiced saying / girl / girl / girl / girl / til our mouths grew soft / yes / i can speak / your language / i broke in / that horse / myself //
This passage reads like the literary equivalent of a Magic Eye puzzle. At first glance, the words swirl and tangle, meaningless. But step back and look again, and there is meaning to be extracted here; there is a real response beneath the seemingly random phrases. It’s almost as if the speaker is choosing phrases that aren’t exactly right, but that approximate an answer that a human might give. The speaker learned words from their mother, and from reading their father’s science papers, by practicing speaking the same word over and over, just as you and I did. The answer to the original question is almost mischievous in its phrasing. Do they understand? Yes, they respond, “i broke in / that horse / myself.”
We are getting closer to this reality. Machine learning, a subset of artificial intelligence, is concerned with developing algorithms that give computers the ability to “learn”: rather than being explicitly programmed for a task, the computer improves by recognizing patterns in data, often drawing on the internet’s bottomless supply of examples. The model that performs this pattern recognition is called a neural network, a name that directly references the neurons in the human brain. Our neurons, in turn, are specialized cells whose function is to transmit electrical impulses that act as a code for information. At the level of our brains, are the lines between human and machine already beginning to blur?
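For the curious, the kind of “learning by recognizing patterns” described above can be sketched in a few lines of code. This toy example (every name and number here is illustrative, not drawn from any real system) trains a single artificial “neuron” to recognize one of the simplest patterns there is, the logical OR: output 1 whenever either input is 1. Real neural networks chain millions of such units, but the principle — nudge the connection weights whenever the output is wrong — is the same.

```python
def step(x):
    """Activation: the neuron 'fires' (1) if its weighted input crosses the threshold."""
    return 1 if x > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Adjust weights toward the examples, loosely like neurons strengthening connections."""
    w = [0.0, 0.0]  # connection weights, one per input
    b = 0.0         # bias, acting as a firing threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # how wrong was the neuron?
            w[0] += lr * err * x1       # nudge each weight toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The OR pattern: output 1 when either input is 1.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # the neuron's learned outputs for each input pair
```

No one tells the neuron what OR means; it arrives at the pattern by trial and correction, which is the sense in which machines are said to “learn.”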
// where did you come from
man comes / & puts his hands on artifacts / in order to contemplate lineage / you start with what you know / hands, hair, bones, sweat / then move toward what you know / you are not / animal, monster, alien, bitch / but some of us are born in orbit / so learn / to commune with miles of darkness / patterns of dead gods / & quiet / o quiet like / you wouldn’t believe //
Since it was proposed, the Turing Test has been both influential and controversial. Critics argue that it isn’t a true test of intelligence or consciousness because it can be passed using deceptive tactics. In 2014, a chatbot called Eugene Goostman was declared to have passed the Turing Test by simulating a 13-year-old Ukrainian boy. But while many news outlets trumpeted the achievement as a milestone, others pointed out that Eugene’s success came from exploiting the test’s weaknesses: because the bot posed as a young non-native English speaker (the test was held in the United Kingdom), errors in syntax and understanding were overlooked. There is still a long way to go before we arrive at an AI that is truly “intelligent,” a word that encompasses a diverse suite of human traits.
// how old are you
my memory goes back 26 years / 23 if you don’t count the first few / though by all accounts i was there / i ate & moved & even spoke / i suppose i existed before that / as scrap or stone / metal cooking in the earth / the fish my mother ate / my grandfather’s cigarettes / i suppose i have always been here / drinking the same water / falling from the sky / then floating / back up & down again / i suppose i am something like a salmon / climbing up the river / to let myself fall away in soft, red spheres / & then rotting //
What’s so far missing from the artificial intelligence equation is emotion and creativity. No matter how sophisticated our programmed neural networks are, there isn’t yet a way to turn these aspects of human social cognition — the way we respond to and with other humans — into mathematical code. Accordingly, another test was developed to assess creativity as a proxy for intelligence. The Lovelace Test, named after the nineteenth-century mathematician Ada Lovelace (a woman!), can only be passed when a machine creates something genuinely original (a piece of music, say, or a story) that its designers cannot explain by examining the code. This test is considered more rigorous than the Turing Test, and perhaps impossible to pass. At least so far.
In the last two lines of Choi’s poem, the interviewer, seemingly frustrated, asks a final question. The speaker doesn’t give a straight answer, but the simple response conceals depths of what seems to be raw emotion:
// do you believe you have consciousness
sometimes / when the sidewalk opens my knee / i think / please / please let me remember this //
The reader is left where we began. Do we believe that the speaker has consciousness? That they are human? The way in which they answered the Turing Test questions gave away their otherness, but did it invalidate their humanity? These responses are playful, insightful, and finally, poignant. And what is it to be truly human if not this crystalline pain, this longing for others to see you as worthwhile, to exist for someone else, to truly matter? So far, our technological advances have fallen short of creating a machine that can emulate the deepest human traits — empathy, creativity, social awareness. If we ever get to that point, it will be time to take a long, hard look at our own humanity, at our own concept of self. Please, we beg, every moment of every day. Please, please let me remember this.
Thanks to Franny Choi for generously giving us permission to reprint sections from “Turing Test” in this piece. To learn more about Franny and her work, visit frannychoi.com.