Listening to Machines

A healthy relationship with AI begins with respecting its otherness.

Carl Alviani
Protagonist Studio
8 min read · Jun 28, 2018

Image: Abbey St. John

How do you teach an AI to walk?

Generally speaking, you don’t. Artificial Intelligence, as we typically use the term right now, means a computational system that learns through pattern-spotting and self-correction, so you don’t so much teach it as create a setting in which it can teach itself. If you want an AI to walk, you provide a set of constraints — gravity exists, bodies are made of connected parts, the ground pushes back when you push on it — and give it a challenge, like moving a certain distance. Then you step back and let it learn, and often marvel at the results.

A recent paper entitled “The Surprising Creativity of Digital Evolution,” published by a group of European and North American researchers, is packed with technically correct AI-devised solutions to the locomotion problem that are also, by any traditional measure, wrong. There’s the AI that, asked to evolve a virtual robot able to move a certain distance, created a tower-like structure of the requisite height and simply had it fall over. Eventually, it learned to sort of somersault upon landing, thereby “walking” a bit further. Other AIs solved problems by spotting and exploiting bugs in a system, like a video game that doesn’t count a death if the player kills an enemy in the process, prompting the AI to “win” by repeatedly committing suicide.

Image: “The Surprising Creativity of Digital Evolution,” fig. 1
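
To get a feel for why evolution lands on answers like the falling tower, here is a minimal, hypothetical sketch of that kind of optimization loop. It is not code from the paper; the one-line “physics,” the parameter names, and the two-meter size limit are all simplifying assumptions. The only thing the objective measures is horizontal distance, so the search happily converges on the tallest body allowed and tips it over.

```python
import random

# Toy "digital evolution" sketch (not from the paper): each candidate
# body is just a height, and fitness is how far its topmost point ends
# up from the start line. A rigid body that simply tips over "travels"
# a distance roughly equal to its height, so evolution favors tall
# towers that fall, never anything resembling walking.

MAX_HEIGHT = 2.0   # assumed constraint on body size, in meters
POPULATION = 30
GENERATIONS = 50

def distance_traveled(height):
    """Simplified physics: a tower that falls flat lands with its top
    'height' meters from where it stood."""
    return height

def mutate(height):
    # Nudge the height a little, staying within the allowed range.
    return min(MAX_HEIGHT, max(0.1, height + random.gauss(0, 0.1)))

population = [random.uniform(0.1, MAX_HEIGHT) for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    # Rank bodies by the only thing the objective rewards: distance.
    population.sort(key=distance_traveled, reverse=True)
    survivors = population[:POPULATION // 2]
    # Refill the population with mutated copies of the best performers.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

best = max(population, key=distance_traveled)
print(f"Best body: {best:.2f} m tall, travels {distance_traveled(best):.2f} m by falling over")
```

Nothing in that loop knows what walking is; it only knows what it is being scored on.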

This is just a tiny subset of the delightful accidents that machine learning can produce, but it illustrates something bigger about AI that most of us non-researchers are still only vaguely aware of: AI is “intelligent” in a way that looks nothing like human intelligence. We’re talking about machines, algorithms, and networks of data and code, after all, not squishy gray matter bathed in fluid and plagued by emotions. Human brains are the product of millions of years of evolution in a complicated social-physical world, so obviously we’re going to think differently than any digital entity, even one we designed.

As humans, we tend to expect human-like behavior from AI anyway. Crystal Rutland, the UX designer who organized the “Empathy and AI” talks at Design Week Portland this year, notes that human beings have an instinct for empathy, and automatically attach human-like qualities to just about everything we encounter. The more intelligent something appears, the more we tend to see ourselves in it.

This helps explain why so much science fiction over the years has depicted AI as something that’s almost human, except for a single quality, like an absence of morals (The Terminator) or an overly literal worldview (Commander Data from Star Trek: The Next Generation). This lets artificially intelligent characters serve as a foil to the humans around them, providing a cautionary example or casting our own flaws into sharp relief. The android Bishop does the latter in Aliens, with his calm pragmatism and self-sacrifice. However, all of these examples are grounded in the unspoken assumption that a sufficiently intelligent machine will communicate more or less like a human.

Conversation is Complicated

The reality is turning out to be different. For several years now we’ve had virtual assistants that combine voice interaction with some degree of artificial intelligence to perform helpful services through a conversational interface. But if you’ve ever used Amazon Alexa or Google Assistant, you already know how limited those “conversations” can be: confined to a small vocabulary, and bound by a semantic straitjacket that forbids the kind of multi-layered, wandering requests that we often take for granted in human discussion.

Simulating human conversation, in fact, turns out to be one of the hardest tasks facing AI experts. Amazon has put enormous effort into this problem over the last few years, most famously through the Alexa Prize, which offers large cash incentives to the world’s foremost AI researchers in pursuit of one of the Holy Grails of human-computer interaction: 20 minutes of small talk. So far, even the most successful efforts can only maintain the easy give-and-take of human discussion for a few minutes at a time before being derailed by a non sequitur or a casual reference to a topic from a few seconds earlier.

Even the most advanced AIs are still idiotic at certain things that humans do as a matter of habit. They don’t generally understand analogies, make leaps of insight, or synthesize unconnected information into more abstract concepts. They’re rarely able to consider context or environment, or to break a cognitive thread and pick it back up again. They’re not empathetic to humans, because they’re not humans. As a recent WIRED story on the Alexa Prize points out, “Twenty minutes of small talk with a computer isn’t just a moonshot, it’s a trip to Mars.”

But the qualities that make AI different from humans also make it uniquely insightful, in ways that have nothing to do with humanity. An intelligence that learns through rapid trial and error unencumbered by a lifetime of context is going to see what’s in front of it with exceptional clarity. Growing two meters tall and falling over is an obvious way to travel two meters…if you’ve never heard of walking. This kind of useful ignorance can shine a light on our own assumptions, and lead to extraordinary creativity. Properly directed, AIs have the potential to be world champion out-of-the-box thinkers. But are we ready to hear what they have to say?

Imagine What’s Yet to be Defined

The advent of smart speakers means there’s now a critical mass of people who talk to rudimentary AIs every day. According to most reviewers’ accounts (and personal experience), early interactions with them tend to alternate between useful responses, entertaining pleasantries, and a steady progression of dumb mistakes — unrequested music at random moments, and lots of “I’m sorry, I don’t know.” But eventually, regular users of smart speakers learn the routine: certain phrases that almost always work, the pared-down syntax of machine communication, and an acknowledgement that this isn’t a pleasant human in a sleek cylinder. They learn, in other words, to speak machine.

Those assistants are going to get a lot smarter, but they’re never going to be human. The rapid trial-and-error training process of machine learning allows AI to absorb immense amounts of information, and eventually handle tasks ranging from driving to medical diagnoses to financial trend spotting, with a facility unmatched by humans. This is certain. But it’s not going to make them experience the world like humans, or behave like them. And really, there’s little reason they should, except to make us more comfortable.

At a certain point, the question of AI communication starts to feel like just another round of the long-running Skeuomorphism Debate: to what degree should we clothe new technology in the trappings of the old, in order to ease its acceptance? Early email interfaces used envelope icons; early graphical operating systems explicitly referenced a desktop; early automobiles resembled horse-drawn carriages (minus the horses) — not because it was efficient, but because it was recognizable. Eventually these references faded away, and today we have a Gmail interface that looks entirely digital, and is far more powerful and flexible for not relying on the analogy of paper letters.

Our desire to talk to AIs like we talk to people could be thought of as a version of that tendency, and we’ll eventually shed that too. When you imagine AI interfaces of the future, don’t think of talking to a pseudo-human. Imagine something else: a human-machine language that we haven’t yet defined.

Respecting Otherness

This idea isn’t new. Cyborg anthropologist Amber Case points out that the concept of “non-human allies” has been used to describe technology for years, often comparing it with another intelligence that we habitually anthropomorphize: dogs.

Dog owners love to talk about their pets as if they were people; successful dog trainers are quick to point out how misguided this is. Read practically any guide to canine discipline or behavior, and the central message is invariably one of respect for canine otherness. Dogs are intelligent, emotional, and responsive, but they aren’t people, and the first step to getting along with them is learning to treat them like dogs. Common techniques, like using firm, repeated commands and establishing yourself as a “pack leader,” all arise from this concept.

Computers aren’t dogs, but the underlying idea of respecting an alternate form of intelligence is similarly helpful. Dogs can be trained to do useful things like sniffing for explosives or guiding blind owners through busy streets, but they remain dogs. AI is already navigating us through busy streets too, as well as helping us process complex data and improve web search results, but it remains AI.

When we view AI as something like a person, we expect it to have human morals and insights, and to see the world in similar ways as we do. This can blind us to its potential uses as well as its misuses.

AI algorithms have helped spread fake news on Facebook, and pushed inflammatory videos on unsuspecting YouTube viewers. An AI-enabled car hit and killed a woman in Arizona recently, and AI-powered autonomous weapons are already being tested by multiple governments. None of these are inherently evil, just as a hunting dog that bristles and chases after a squirrel isn’t evil; both are doing what they’ve evolved to do, with some influence from the humans who trained them. In the case of AI, though, humans are also the ones creating the environment in which it evolves.

So we have several tasks before us. We need to take extreme care in how we train these artificially intelligent beings, by providing them with information and environments that lead them toward empathy and instill in them values that align with humanity’s. But we also need to accept their limitations, and ours. We can’t build other humans; only nature can do that. But we can build something wonderful and constructive, as long as we’re willing to talk—and listen—to it as something separate from ourselves.

Originally published at www.designweekportland.com.

Writer and UX strategist. Founder of Protagonist Studio. Obsessed with design’s hidden consequences. Living in Glasgow, with my heart in the PacNW.