For Artificial Intelligence to Thrive, It Must Explain Itself

If it cannot, who will trust it?

The Economist
Feb 21, 2018


Photo: Besjunior/Getty Images

Science fiction is littered with examples of intelligent computers, from HAL 9000 in “2001: A Space Odyssey” to Eddie in “The Hitchhiker’s Guide to the Galaxy”. One thing such fictional machines have in common is a tendency to go wrong, to the detriment of the characters in the story. HAL murders most of the crew of a mission to Jupiter. Eddie obsesses about trivia, and thus puts the spacecraft he is in charge of in danger of destruction. In both cases, an attempt to build something useful and helpful has created a monster.

Real AI is nowhere near as advanced as its usual portrayal in fiction. It certainly lacks the apparently conscious motivation of the sci-fi stuff. But it does turn both hope and fear into matters for the present day, rather than an indeterminate future. And many worry that even today’s “AI-lite” has the capacity to morph into a monster. The fear is not so much of devices that stop obeying instructions and follow their own agenda as of something that does what it is told (or, at least, attempts to do so), but does it in a way that is incomprehensible.

The reason for this fear is that deep-learning programs do their learning by rearranging their digital innards in response to patterns they spot in the…
