Killer Robots and the Moral Dilemma of Automation (2 of 2)

How will we teach robots to understand our values? Maybe by reading them stories.

Jacob Ward
Aug 31, 2017 · 8 min read
A robot waiter in Chengdu, China (Photo: Reuters/Stringer)

The series “Guidance Systems” discusses technologies that seem to improve our lives by offering us new choices, while in fact shaping or removing our ability to decide things for ourselves.

The automation of fundamental human tasks — cooking, driving, killing—is well underway. The US military is aggressively pursuing automation in almost every part of its operations, from the Air Force’s Loyal Wingman program, which seeks to integrate human pilots with autonomous aircraft, to the Office of Naval Research’s Science of Autonomy program, which pursues, among other goals, reduced “manning and other communications requirements.”

The difficulty is that automating these tasks isn’t just a question of programming a set of instructions. It’s also a question of programming a set of values. It’s clear that with enough time and money we can do the former. It’s not clear that we know how to do the latter. And at the moment the United States is not a signatory to any sort of treaty governing the use of automated weapons.

That doesn’t mean the ethics of these systems aren’t being discussed. As early as 2008, an ONR-funded paper from Cal Poly about the ethics of military automation worried that robot combatants would lower “the threshold for entering conflicts and wars, since fewer US military lives would then be at stake.”

But the deeper problem is that we’re hoping to create robotic surrogates for ourselves when we don’t fundamentally understand who we are. Economists, psychologists, and neuroscientists are all coming to grips with the revelation that reality as you and I experience it is not fundamentally real. It’s a mishmash of unconscious assumptions, inaccurate memories, and external stimuli, all arriving in staggered fashion like planes into an airport. And although we consistently fall prey to the same cognitive illusions (I’m the host of a television show that will profile those illusions in 2018), we’re prone to imagining that we’re reasonable, logical, fair-minded human beings.

But let’s set that aside for a moment. Let’s imagine that we do understand our decision-making process and the values that inform it. How on earth would we automate those things? If we’re going to send robots out onto our highways and battlefields to make our decisions for us, how will we explain to a robot how to do things the way we’d want it to?

Think about it this way. Imagine trying to explain to someone who has never set foot inside a restaurant how the whole thing works.

“Well, first you’ll walk in, and there will be tables everywhere. You may or may not be able to see the kitchen. Regardless, don’t sit in the kitchen. Sit at a table. But don’t sit at a table unless it’s obviously clean, and has a knife and fork set out for you. Although some restaurants don’t do that — the table might be blank. Also, wait at the entrance for a few minutes to see if someone at the restaurant wants you to sit somewhere in particular. Okay, now sit down.”

Mark Riedl is trying to explain this sort of thing. And not just restaurant protocol. He’s trying to teach all forms of human behavior to robots. But every rule of human interaction has a subtle exception, and the branches of the decision tree bloom endlessly. “At some restaurants, we wait in line,” Riedl says. “But in other cultures the rules about waiting in line are different, and people cut the line.”

There’s no manual of human interaction, Riedl sighs. “If you want to learn the rules of society, then you have to go learn it somewhere.”

Riedl began his academic career in 2008 trying to create computer games that could rewrite their own plots as the story progressed. “I had an advisor at North Carolina State who was trying to use AI to manipulate computer games. He wanted to get out of the conventional plot, like if you wanted to suddenly join the bad guys. But to do that you had to build a story generator.” That’s where a lot of research had dead-ended. Computers could string together pre-written plot points in random order, but they didn’t know how to assemble a plot that would resonate with our human expectations.

And this is where Riedl discovered a singularly human source of knowledge: our stories. “Our ability to pick up a story and absorb its lessons is one of our great talents,” he says. So he began working on the idea of using stories to teach robots what to expect from human society. In, say, a restaurant.

Red’s Eats in Wiscasset, Maine—a roboticist’s nightmare (Photo: Creative Commons)

“For one of my story-understanding systems,” Riedl says, “we did the math, and the branching choices number in the thousands. And yet a restaurant is a small, rigid, agreed-upon human interaction.”

In 2016, Riedl and his colleague Brent Harrison, working together at the Georgia Institute of Technology, published a paper entitled “Learning from Stories: Using Crowdsourced Narratives to Train Virtual Agents.” In it, they introduced the world to Quixote, a piece of software that could hear stories from humans and distill teachable rules from them. In this case, they tried to teach Quixote the rough rules that govern a bank robbery.

Their heist scenario involved three characters: a bank teller, a robber, and a police officer. Each of them had only a handful of available actions, from pushing the alarm button to brandishing a gun, and yet the branching choices numbered in the millions. At first Quixote had no idea how to write the plot of the holdup. It was just stringing plot points together for the easiest possible transaction. The cop would simply stand aside so the robber could leave unhindered, for instance, or the robber might take the money and then hang around the lobby endlessly, waiting for the cops. So Harrison and Riedl recruited people online to write short descriptions of a typical bank robbery in simple English that Quixote could read.

Soon Quixote was scripting bank robberies that a Hollywood screenwriter would recognize: the thief pulls a gun, the teller hits the alarm, here come the cops, the chase is on. It worked. And in the process it proved Riedl and Harrison’s concept: that stories could teach robots what humans typically do.
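To make the mechanics concrete, here is a minimal, hypothetical sketch of the idea, not Riedl and Harrison’s actual system: crowdsourced stories are boiled down to a typical order of events, and an agent is rewarded for hitting those events in that order and penalized for doing things the crowd never mentioned. The event names, the averaging trick, and the scoring scheme are all illustrative assumptions; the real Quixote couples a learned plot graph to reinforcement learning.

```python
# Illustrative sketch only, not Riedl and Harrison's code. Crowdsourced example
# stories are distilled into a typical sequence of events, and an agent earns
# reward for hitting those events in the expected order.

from collections import Counter

# Crowdsourced plots of a bank robbery, written as simple event sequences.
crowd_stories = [
    ["enter_bank", "pull_gun", "demand_money", "teller_hits_alarm",
     "take_money", "flee", "police_chase"],
    ["enter_bank", "pull_gun", "demand_money", "take_money",
     "teller_hits_alarm", "flee", "police_chase"],
    ["enter_bank", "demand_money", "pull_gun", "teller_hits_alarm",
     "take_money", "flee", "police_chase"],
]

def distill_expected_order(stories):
    """Rank events by their average position across the example stories."""
    position_totals, counts = Counter(), Counter()
    for story in stories:
        for i, event in enumerate(story):
            position_totals[event] += i
            counts[event] += 1
    return sorted(counts, key=lambda e: position_totals[e] / counts[e])

EXPECTED = distill_expected_order(crowd_stories)

def story_reward(trajectory):
    """Score an agent's trajectory against the crowd's expected order.

    +1 for each expected event performed in order; -1 for events the crowd
    never mentioned (such as loitering in the lobby after taking the money).
    """
    reward, next_idx = 0, 0
    for event in trajectory:
        if next_idx < len(EXPECTED) and event == EXPECTED[next_idx]:
            reward += 1
            next_idx += 1
        elif event not in EXPECTED:
            reward -= 1
    return reward

# A "Hollywood" robbery scores well; an aimless one does not.
print(story_reward(["enter_bank", "pull_gun", "demand_money",
                    "teller_hits_alarm", "take_money", "flee", "police_chase"]))
print(story_reward(["enter_bank", "take_money", "loiter", "loiter"]))
```

Run as written, the well-behaved robbery scores 7 and the aimless one scores -1. The point is only that the crowd’s stories, not a programmer’s hand-written rules, define what “normal” looks like.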

Their work is largely funded by the military: the Office of Naval Research and DARPA, both of which are deeply invested in the idea of robotic helpers, swarming drones under the command of a human, and all manner of other artificially intelligent units. The military as a whole has committed billions of dollars to these projects.

The grant officers from ONR explained their interest in Riedl’s work to him in two ways. First, they wanted to make it easier for people who aren’t programmers to program autonomous systems. “Let’s say they want to build a social simulation of a town in a foreign culture,” Riedl says, “but all the subject-matter experts are not programmers. How do you teach the program the details of the place, the characters, the farmers in the field, what they’ll all do when you show up?”

The other military interest is in helping soldiers get along with robotic systems. “We tend to assume the robot will act like a human, and when it doesn’t, we’re surprised,” Riedl says. In a future where a robot is carrying soldiers’ belongings, or driving them around, the soldiers need to be able to anticipate what the robot will do next. And if they can’t, our human tendency to anthropomorphize anything that looks vaguely human gets us into trouble. We shouldn’t expect the robots to behave in human ways, and yet we do. In one announcement of his work with Harrison, Riedl told an interviewer, “We believe story comprehension in robots can eliminate psychotic-appearing behavior.” Telling human stories to robots, in other words, can help them act more human.

But back to the deeper problem faced by Riedl and Harrison and everyone else trying to teach human decision-making to robots: we don’t know quite why we make the decisions we do.

We know who we want to be. The moral, upstanding, reasonable, creative, fair-minded version of ourselves. But the last few decades of research into the processes of our minds have shown us that we tend to be a whole other kind of person.

And while most human societies have some sort of mechanism for judging the rightness of our actions, that mechanism only kicks in after the action.

If a child suddenly leaps into the path of an oncoming truck, the truck driver is faced with an instantaneous and impossible set of branching decisions. Plow ahead, presumably killing the child? Pull the wheel one direction, steering the truck into oncoming traffic, where countless others could be maimed or killed? Or pull the wheel the other direction, taking the truck off a cliff?

Once that driver has made the decision and the horror has ended, the ambulances have carried away the injured and dead, and the police report has been filed, someone has to evaluate the choice he made.

“It was instinct,” the driver keeps saying. “I don’t know why I decided to do what I did.” So the investigation looks at questions of sobriety, at the mechanical integrity of the truck, at the parents of the child. And at the end of it all, the mechanism arrives at some sort of conclusion that determines fault and restitution and the policies that might prevent this sort of tragedy from taking place again.

With autonomous systems, that kind of evaluation has to take place ahead of time. An autonomous vehicle will only do what it has been programmed to do. It must be taught to go ahead and plow through a chicken in the road, but veer away from a child. It must be pre-programmed to choose the cliff rather than oncoming traffic.
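In code, that pre-commitment might look something like the toy sketch below. Everything in it is hypothetical: no real autonomous-driving stack reduces ethics to a lookup table, but some ranking of outcomes has to exist in the software before the emergency ever happens.

```python
# A deliberately crude sketch of what "evaluating ahead of time" means.
# The outcome labels and costs are hypothetical design-time choices,
# fixed by programmers long before the vehicle faces any emergency.
OUTCOME_COST = {
    "hit_child": 1_000_000,
    "enter_oncoming_traffic": 100_000,
    "drive_off_cliff": 10_000,
    "hit_chicken": 10,
    "continue_normally": 0,
}

def choose_maneuver(available_outcomes):
    """Pick the pre-ranked least-bad outcome among those physically possible."""
    return min(available_outcomes, key=OUTCOME_COST.__getitem__)

# The truck driver's dilemma, as the vehicle would see it:
print(choose_maneuver(["hit_child", "enter_oncoming_traffic", "drive_off_cliff"]))
# -> "drive_off_cliff": a choice settled in advance, not in the moment.
```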

So how will enough stories teach a robotic system to make a perfect decision every time? “We don’t write our values in a logical, coherent way,” Riedl says. This is the great challenge of his and Harrison’s work. It’s trying to teach values to robots that we humans don’t yet understand. “We’re asking autonomous systems to be perfect, and yet we tolerate errors in humans.” He thinks for a moment. “I don’t have an answer for that.”

Thanks to John Battelle

Written by Jacob Ward

Technology correspondent for NBC News. Berggruen Fellow at Stanford’s CASBS program. Former editor-in-chief of Popular Science. http://www.jacobward.com

About this Collection

Guidance Systems

We just barely understand the human mind and human behavior, and yet we’re building technologies and businesses that shape our lives in dramatic and fundamental ways. Military robots that have already taken the ethics of war out of human hands. Addiction specialists who are building the neuroscience of habit into apps. Children’s television producers who are trying to use their shows to build better values into their young audience. These are guidance systems, and this series reveals the powerful, occasionally beneficial, and often shortsighted ways in which they’re making our choices for us. Produced in partnership with shift.newco.co