Hi, I’ll be your waitress tonight. I’m a robot! (ChinaFotoPress / Getty Images)

Jerry Kaplan Has Written The Scariest AI Book Yet

Will a robot marry your daughter? Should you tip a robot? Can you rape a robot? Just ask the author of “Humans Need Not Apply.”

Jerry Kaplan, a long-time entrepreneur and Fellow at the Stanford Center for Legal Informatics, has weighed in on the Great Robot Controversy with a book called Humans Need Not Apply. Kaplan doesn’t believe that robots will be conscious anytime soon, or maybe ever. But he thinks it doesn’t matter. Even though robots and artificial intelligence systems may not become conscious, he argues we will wind up treating them as if they are.

As they become more cognitively dexterous, and assume more and more tasks once reserved for humans, he contends, robots will have legal rights and responsibilities: they will be able to own property, and may be punished for malfeasance. We will give them agency.

According to Kaplan, we really don’t have a choice in the matter. Robots (and the definition is broad enough to include AI-based distributed software systems) will increasingly be granted autonomy to conduct more and more actions without supervision. But robots don’t respect the social and moral niceties that humans are socialized to observe — unless humans design systems with those values built in. Since there’s no way to police how people design robots, we need to be able to control robot misbehavior in the wild.

In Humans Need Not Apply, Kaplan envisions a happy future for humans, with robots doing both the literal and figurative heavy lifting. But if we don’t get that design-in-morality part right, things aren’t so sunny. In his words:

These machines may offer us unprecedented leisure and freedom as they take over our hard and unpleasant work. But they are also likely to be our stewards. . . The problem is that we may get only one shot at designing those systems to serve our interests — there may not be an opportunity for do-overs.

Over breakfast one recent morning, Kaplan and I discussed the progress of AI, the insinuation of robots into the workplace, and whether raping a robot might ever become a crime. He was wearing a suit, ready for a full day of television book promotion. As he became animated on various topics, his eyes bulged under his neatly coiffed grey mane, evocative of an aging rockabilly singer. We also laughed a lot, despite the seriousness of our topic. (Or maybe because the robot future he outlines, while plausible, has an element of the absurd to it, which he openly acknowledges.) Early in our conversation, he threw down a gauntlet: “I think we’re headed for increasing trouble,” he said. “If we don’t take some kind of policy actions, things that are pretty bad right now are going to potentially get a lot worse.”

The interview is edited for clarity and length.

In your book you say we will grant robots moral authority — they will be held responsible for their actions. You also say that they will be able to own property and even inherit property and eventually operate in a way where they are out of our control. How do you make that leap?

You have a kid, right? When you have a kid, you think, obviously, you’re in charge. You’re not in charge. You sleep when your kid sleeps, you feed the kid when the kid is hungry. You’re a slave to your kid. The perception of who is in charge is in your head. [The same goes with robots.] So there are two ways to frame this. One of them is that robots are independent and you have to watch out for them. The more valuable point of view is that we’re just designing them wrong because we can’t control them. It’s an engineering problem.

I’m actually working on a project on this at Stanford. I don’t want robots pushing ladies off the sidewalk as they’re moving, that’s bad. And so that’s a design problem. The sidewalk isn’t designed for robots. We need to program robots so that they obey social conventions, give priority to people, and are able to deal with moral challenges. Not too many people in the field of robot building are thinking about or worrying about this issue.

Think of it in terms of people and animals. Animals will take actions independent of their owners, and you have a certain level of responsibility to control that animal, but it is not as absolute as you might think. Your dog can go bite somebody and your liability is limited to certain kinds of things. They actually have a legal term for this now; it’s called the first-bite rule. Once it has bitten somebody, now you are liable if it takes a second bite.

Still, I’m trying to picture the legal change that would allow something else you predict: a robot owning property.

Well now I’m off into the stratosphere but it’s fun to think about.

A couple of times in this book, in discussing how we might grant robots some agency, you cite a precedent I’m not comfortable with — slave owning.

Oh you don’t like that?

Well, you are citing slave law not as something to be avoided at all costs, but as a system that we might learn from in dealing with robots.

I respectfully disagree. Obviously we are not bringing back slavery.

Maybe you’re saying we’re going to be the slaves?

Let’s go through this. Slaves were not considered full human beings or people and there was a way that was dealt with in the law. The same thing can happen with robots, that’s all I’m saying. So that’s a model for what we can talk about with robots. It’s not unreasonable.

Now in the book, when I go off the deep end at the end, which I was advised to do, saying that 50 to 100 years —

You’re saying you were advised to go wild? You believe this stuff, right?

I’m projecting things that are 50 or 100 years in the future. I have enough intellectual integrity that I would defend most of the things in the book… a few of them I have changed my views on, as I’ve learned more.

Well by the end of your book, you’re pretty much saying we will have robot overlords — call them “mechanical minders.”

It is plausible that certain things can [happen]… the consequences are very real. Allowing robots to own assets has severe consequences and I stand by that and I will back it up. Do I have the thing about your daughter marrying a robot in there?

No.

That’s a different book. [Kaplan has a sequel ready.] I’m out in the far future here, but it’s plausible that people will have a different attitude about these things because it’s very difficult to not have an emotional reaction to these things. As they become more a part of our lives people may very well start to inappropriately imbue them with certain points of view.

You don’t have robots marrying your daughter in this book, but you do write about robot sex.

The question is, is that a sex toy or is there somebody there?

Sherry Turkle believes that in our interactions with robots there’s something immoral about tricking people into thinking that there’s someone there, even in something less creepy than sex, like elder care robots.

Jerry Kaplan. Photo by Todd Rafalovich.

I have read her books, so I get where she’s coming from. She is saying that it’s wrong to fool people. To me, if you’re not fooling people, it’s okay. Here’s a machine that’s going to pretend to love you? Go for it, have fun. You want to love it back? That’s your business. You understand what it is. But if you think this robot really loves you, that’s bad because someone must have designed it to hijack your emotional life, so that you behave in a way which is not in your best interest.

Maybe your thought process would be, “Yeah I know it’s a robot but I love her anyway.”

I don’t mind that. Like, what does it matter whether they are made out of meat or whether they are made out of [machined parts]? That’s a meaningless distinction.

You predict that in 50 years people won’t even talk about whether robots think or have consciousness. It won’t matter because for all practical purposes we’ll treat them as if that’s the case.

It’s like the way we think of horses or something. If you want to treat it with respect, that’s your business. But we’re not evolutionarily programmed to necessarily deal with this in the right way.

Do you think people could be prosecuted for more than destroying property if they hurt a robot?

You’ve got an interesting point. Can you rape a robot? That’s a great title for a piece in the Atlantic!

I’m definitely going to use it.

You can get it to print faster than I can. …

So answer the question: Can you rape a robot? We outlaw sex with animals.

I’m coming down on the no side. You can rape somebody else’s robot — but you can’t rape your own robot. The point is I’m damaging your property.

You’re saying robots can have property, they can have a will —

You’re pushing me in a direction I’m not comfortable with.

You wrote it!

Let’s be careful. If a robot can own property, that has certain consequences which are very significant. We should be aware of that, that’s important. That’s my point.

But it’s a concept you introduce. Before I read your book, I hadn’t thought about the concept of robots owning property.

But it’s important.

Are you saying it’s a good idea?

No, I’m saying we will need to restrict this. In some way.

Who is saying robots should own property?

There will be pressure to do so in the same way that corporations [own property].

Pressure from whom, the robots?

No, the people who own the robots. One way to limit your liability is to push it down to a corporate entity or a robot. And that’s what we do today. You can’t have a robot that has no responsibility for its actions. Somebody has to be responsible. People will say a robot needs to be able to own assets because I gotta have something to go after. Society is not going to agree that anything a robot does is perfectly fine. So it’s got to have assets. But there is a difference between having a corporation own assets and having a robot own assets. A robot can do things with those assets that we may not be happy with. We’re okay with a corporation doing things, because people are benefitting from the actions. But a robot can get loose and disconnected from its economic owners, and run roughshod.

[At this point Kaplan wants a refill of his tea and is trying to get the attention of the restaurant staff.]

See this lady over here? She’s a waitress. Pretty soon she could be replaced by a robot.

You think a robot would do a better job?

Absolutely. No question. And at much lower cost. The robot is always paying attention. It’s the same reason that a [self-driving] car is a better driver than human drivers are. So can she be replaced by a robot? Absolutely. We are fairly close.

Should you tip a robot?

Why do you tip people? You tip them for their service, you tip them out of a humanitarian sense of helping to provide for another human being. Would you tip a robot? My answer is no.

If I didn’t tip, the robot might shoot a laser at me. Or hack my credit records.

In my next book, I ask whether we are going to have to bribe robots to do the work. Because that’s really possible. In that sense, you might tip a robot.

I would over tip a robot.

Well, if you came in here everyday, if the robot is busy it should serve you first if it knows it will get a bigger tip. So I’m walking back my answer. It depends on the circumstances. I wouldn’t tip a robot for humanitarian reasons. I would tip it for service reasons.

Do you agree with the letter signed by Elon Musk and Stephen Hawking that asked for limits on robotics research?

I wrote an opinion piece and gave it to the Times, titled “Why I Didn’t Sign That Letter.”

Stephen Hawking is a smart guy, shouldn’t we listen to him?

He’s a physicist, what does he know about this stuff? No offense, but it’s like Dr. Seuss talking about foreign policy. Let’s at least be open to the possibility that he is wrong or maybe he’s a little misguided.

What in that letter do you disagree with?

It’s overkill to say we should stop all research. They said we should ban autonomous weapons that are not under meaningful human control. It sounds good, but [the issue is] way more subtle.

When you look at the future, do you think that AI will be a plus or minus for humanity?

It is a very powerful technology. It has not only the possibility but the strong likelihood of having very important and positive economic effects. We have the ability and a responsibility to make sure that it’s channeled toward social and intellectual goals. The Silicon Valley view is, there is nothing to worry about — it’s only good, there’s no bad. That’s not true. We need to be thoughtful about the policies we put into place and we will need lots of controls. In order to get the value out of this without dehumanizing us and separating us and increasing inequality, lots of issues need to get solved, some of which are moral and ethical and some of which are just social. We need to be talking about that. We kind of got stuck on, “The robots are coming to kill us.”


Will robots have agency? Will we be happy within a robot ecosystem? Will a human ever be (1) charged with raping a robot, (2) walking a daughter down the aisle towards a robot husband, or (3) paying rent to a robot landlord? And what would YOU tip a robot waiter?

Jerry Kaplan is the author of Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence.

Like what you read? Give Steven Levy a round of applause.