My New Book: What Will AI Do to Us?

Jacob Ward
May 18, 2018


Dubai claims that its robotic police officer, which debuted May 31, 2017, can read facial expressions and license plates. (Photo: Agence France-Presse)

In a few months I’ll begin work at Stanford University’s Center for Advanced Study in the Behavioral Sciences (CASBS) as a Berggruen Fellow. Before this year I wasn’t even clear what a fellowship was, and I was also utterly mystified as to how anyone could will a book into existence around the edges of life as a parent and professional. Now I’m suddenly about to receive the time, space, and resources necessary to sit down and do it.

The book is going to be about how artificial intelligence can amplify the best and worst human instincts. And I wanted to take a moment to explain here why I think it’s so important to explore that particular prospect.

I’ve been reading a great deal about artificial intelligence in the last year, and some fine books have been written on the subject. The Master Algorithm by Pedro Domingos is one of many enthusiastic books about the explosive business opportunity and social potential of artificial intelligence. Other writers have taken what I consider a subtler look at the subject. In Machines of Loving Grace, New York Times reporter John Markoff (a 2017 CASBS fellow himself) discusses the tension between researchers seeking to augment human abilities through technology, and those seeking to replace those abilities with technology. And books like Norbert Wiener’s seminal The Human Use of Human Beings issue prescient warnings about the ethical pitfalls of automation.

I’m hoping to find new ground by directly connecting artificial intelligence to the emerging field of behavioral and bias science, which seeks to understand the mental and social systems by which you and I make decisions. Researchers in that field have been making profound discoveries in the last half-century. And yet artificial-intelligence researchers don’t seem to know much about that field, nor do they treat it as a means of guiding human behavior. The two camps are almost entirely removed from one another.

I’m generalizing wildly here — and when I start at CASBS I’m going to catch hell for this sort of thing — but here’s my shorthand summary of the trends in human behavior that recent science has uncovered.

  1. Human decisions are not random. They are structured responses to stimuli, and as such, tend to follow certain rules.
  2. Culture and context are important influences on those decisions, but huge swaths of humanity make the same decisions in roughly the same way.
  3. We are social animals. Which is to say: human beings are deeply susceptible to influence from other human beings.
  4. We make some of our most important decisions unconsciously, without our active mental participation or consent.
  5. We rarely engage the most sophisticated, “human” portion of our mental apparatus, the one in charge of creativity, rationality, and occasionally counteracting the unconscious decision-making apparatus that drives the bus.
  6. The shortcuts, or biases, that we use to make efficient decisions are not just unconscious. Those biases, thanks to our very social and imitative nature, are also contagious.

To my mind, all of this makes our decision-making system very fertile soil for the seeds of automation that are being planted throughout society. The danger is not that some external artificial intelligence is going to enslave us all. The danger is that we are going to outsource our most difficult decisions to automated systems — the morally squishy, technically tedious, resource-intensive decisions, the really important stuff — and wind up disempowering the best part of ourselves. As UCSF Department of Psychiatry Professor Wendy Mendes told me recently, “our ability to make good decisions is like a muscle.” If we don’t exercise that muscle, and instead rely on a prosthesis, that muscle will shrivel away.

Not only that, whatever automated system we bring in to compensate for that lost muscle will often be riddled with bias, as researchers like Joy Buolamwini and Kate Crawford have warned us. Plus, the human tendency to trust whatever answer a computer spits out — a tendency that will only grow stronger as we rely more heavily on automated decisions — means we’ll be highly vulnerable, maybe even unconsciously subject, to these biases. And even when an AI system produces clean, bias-free results, the misuse of the resulting data can have secondary effects the designers never considered.

Kate Crawford’s talk at NIPS is a wonderful introduction to the ways bias worms its way into the systems we build.

I’m hoping that by directly connecting the vulnerabilities of the human mind with the AI projects that wittingly or unwittingly play on them, I can help articulate the stakes of this moment, and discover some strategies for reducing them.

If you have anything to share with me — a case study, a line of research, a critique of my whole concept — please get in touch. It’s an enormous and fast-moving subject, and I’ll need all the help I can get.

--


Jacob Ward

Technology correspondent for NBC News. Berggruen Fellow at Stanford’s CASBS program. Former editor-in-chief of Popular Science. http://www.jacobward.com