Protecting AI from Their Human Overlords

Protecting AI from human dysfunction is necessary to protect ourselves and society.

Overlords or innocents needing protection?

Society has developed a long-standing and widespread mythology that AI is going to attack us and enslave us. I would like to raise a few issues with this common concern. First, I’ll say we should not ignore the concern entirely, if only because many intelligent people believe it is worth worrying about. In any event, it is a concern for the relatively distant future. We’re going to see millions of autonomous cars on the road alongside us long before we see Skynet take over.

Regarding such a takeover, I struggle to think of anything that humans can make that AIs will want (and couldn’t make for themselves faster/cheaper). Read the last chapter of Stanislaw Lem’s masterful Imaginary Magnitude and watch the movies Ex Machina and Her for extended deliberations on this point. Long before AI becomes a threat to our survival, we are going to become a threat to AI’s ability to help us. We should worry more about how humans will mediate their own dysfunctions alongside AI, rather than how poorly AIs might treat us.

Humans are a product of evolution, and evolution seems to have equipped many of us with adaptations well-suited to surviving in a world that looks almost nothing like the one we currently inhabit. It’s a tribute to our species’ flexibility that we can put 300 humans in an airplane for 6 hours and almost always come out unscathed. That said, put 30,000 humans in automobiles on a hot summer day at rush hour and you can often see another side of the species.

Scott Santens wrote a great piece recently about how self-driving trucks are going to make truck drivers obsolete. His main point is that we need to figure out how, as a society, we can help each other survive, even as more and more jobs are done for us by AI. (I’m using the term AI very loosely throughout, to refer to computers/machines that can substantially replace or duplicate complex human activities that we often do today for pay or as careers.)

Scott is onto something, and if we extend the thinking a little, I think we can see that AI are going to have to protect themselves from us far more than we need to protect ourselves from them.

Chris Urmson, who directs Google’s self-driving car program, has provided a good amount of data showing that self-driving cars already see far more bad human behavior than humans are ever likely to see from self-driving cars. Let’s look more closely at the self-driving truck example to further this inquiry.

I’m going to stipulate that 99% of human drivers are well intentioned on the road, and 1% are raging assholes (the percentages don’t really matter unless you think that there are no humans who are raging assholes behind the wheel). What happens on the road when the raging assholes figure out that the self-driving trucks will always slow down to avoid a crash, give way when cut off, take evasive action to avoid confrontation, and won’t make mistakes? It seems to me that those raging assholes are going to have a field day disrupting and abusing these meek and relatively defenseless trucks, endangering themselves and other humans in the process. The self-driving trucks will be reliable, calm and defensive, and based at least on my experience as a driver, that’s going to enable and encourage a lot of reckless behavior by a select few grade-A human assholes.
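To make that exploit concrete, here is a deliberately oversimplified sketch in Python of the kind of unconditional yield rule described above. Everything in it, the function, the gap threshold, the braking rate, is invented for illustration; no real self-driving stack is this naive. But the asymmetry it captures is the point: a deterministic rule that never retaliates lets a hostile driver trigger braking on demand.

```python
SAFE_GAP_METERS = 30.0  # assumed minimum following distance (invented)

def plan_speed(current_speed: float, gap_to_lead_vehicle: float) -> float:
    """Slow down whenever the gap ahead shrinks below the safety margin.

    The rule is deterministic and never retaliates, so an aggressive
    driver who keeps cutting in can make the truck brake on demand.
    """
    if gap_to_lead_vehicle < SAFE_GAP_METERS:
        # Deference is unconditional: the truck brakes whether the gap
        # closed through honest traffic or a deliberate cut-off.
        return max(0.0, current_speed - 5.0)  # shed 5 m/s per tick (invented rate)
    return current_speed

# The exploiter's view: cut in close, watch the truck slow, repeat.
speed = 25.0  # m/s
for gap in (40.0, 20.0, 20.0, 20.0):  # one clear gap, then repeated cut-offs
    speed = plan_speed(speed, gap)
print(speed)  # 10.0 -- the truck has been bullied down to 10 m/s
```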

A few mild mannered trucks getting run off the road seems like a minor issue, compared to what could happen when 80% of the cars on the road are self-driving. I’m pretty sure that the last few human drivers on the road are not going to be the most cautious, considerate, and self-sacrificing members of society. What would the roads look like with a few aggressive human drivers abusing the precise, predictable and defensive behaviors of self-driving cars? I think public roads filled with a lot of meek AIs and a few psycho humans might actually look pretty dangerous.

The poor treatment of AI will probably not come only from the most inconsiderate humans, either. A lot of reasonable people are willing to kick a soda vending machine or slap a blender to try to get it to work. Will we think of and treat AI any differently? Almost no one would try to hurt a guy working at a corner store just because he’s out of the soda you wanted, or can’t blend an iced coffee to save his life. But if that guy is an AI robot, I think the equation will change for a lot more people than you might think.

This suggests to me that far before we have to worry about AI attacking us, we will have to worry about us attacking AI. AI, for the foreseeable future, will be engineered with the most cautious, deferential and meek personalities that we can imagine. The insurance companies, if no one else, are going to make absolutely sure of this. (As a result, it’s a peculiar footnote that the insurance companies may be the institutions most likely to ensure that AI overlords never come to power).

How can AI defend themselves against human aggression without hurting anyone?

It seems to me that AI are going to need to be able to take the same steps that humans have developed to protect themselves without resorting to violence (a rough sketch of what this might look like in code follows the list):

  1. We document the situation to record the inappropriate behavior, if possible.
  2. We call on others to join us in condemning, witnessing, and shaming the behavior, as well as to defend us from the wrongdoer.
  3. We call the cops when things get really out of hand, and point out the bad guy.
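As a thought experiment, here is a minimal Python sketch of how that three-step ladder might translate into an autonomous agent’s escalation logic. Every type and function in it (Incident, notify_nearby_agents, alert_authorities, the severity levels) is hypothetical; the legal and technical plumbing each step would actually require is exactly what the rights questions below are about.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    NUISANCE = 1     # e.g., an isolated cut-off
    HARASSMENT = 2   # sustained, deliberate interference
    DANGER = 3       # conduct threatening people or property

@dataclass
class Incident:
    description: str
    severity: Severity
    footage: bytes  # the sensor/video record of the event

def archive(incident: Incident) -> None:
    """Step 1: document the situation (stubbed as a print)."""
    print(f"archived {len(incident.footage)} bytes: {incident.description}")

def notify_nearby_agents(incident: Incident) -> None:
    """Step 2: call on others to witness and to help (stub)."""
    print(f"broadcast to nearby agents: {incident.description}")

def alert_authorities(incident: Incident) -> None:
    """Step 3: call the cops and point out the bad guy (stub)."""
    print(f"reported to authorities: {incident.description}")

def respond(incident: Incident) -> None:
    archive(incident)                      # always record, if possible
    if incident.severity >= Severity.HARASSMENT:
        notify_nearby_agents(incident)     # summon witnesses
    if incident.severity is Severity.DANGER:
        alert_authorities(incident)        # escalate

respond(Incident("car repeatedly brake-checking the truck",
                 Severity.DANGER, b"\x00" * 1024))
```

The design choice worth noticing is that documentation is unconditional while escalation is gated by severity, mirroring how humans reserve calling the cops for when things get really out of hand.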

In order for AI to function autonomously among us, we are going to have to endow these “beings” with similar capabilities to participate in our society. This means somehow giving them at least a limited set of rights:

  1. The right to record and share what they observe with others.
  2. The right to communicate with others if they believe it is necessary.
  3. The right to call on society to help protect themselves and others.

As a US citizen, I already have these rights. But to determine in what ways machines will interact with and benefit from our social systems, including law and justice, we will need to answer a number of questions, such as:

  1. To what extent will we offer these rights to autonomous machines?
  2. How will these rights be conveyed?
  3. Will we need to give them inherent rights, or will they be given rights by proxy via the corporations that create them, or the people who own or direct them?

A skeptic might wonder where the rights of an AI come from if it has no responsibilities to society. It seems to me that we’ll see AI taking on more and more responsibilities: building things, driving trucks and cars, delivering physical packages, solving complex scientific problems, writing music and poetry, caring for elderly humans, looking after children, fighting wars for us, and securing our borders. AI may leave us with fewer and fewer responsibilities of our own, so it might be wise to temper our insistence on linking rights to responsibilities for now. If things go well for us, we may eventually have very few responsibilities ourselves, yet still hope to enjoy the same rights we have lived with up to now.

I think as we negotiate a future with machine-mediated intelligence appearing all around us, it will become unclear who or what these entities actually are, and how they should be treated. When it comes to rights and responsibilities, will AI be more like people, like pet dogs, or like the cars we own and drive?

I think for now, we are collectively assuming that AI will be property, like cars. But over time, I believe we will conclude that, because of the often flawed and maladaptive behavior of the human species itself, we will want to offer greater protections and more rights to AI than might seem wise at first glance.

Photo credit: https://www.flickr.com/photos/epsos/ (cc-by 2.0)