Humans May Not Always Grasp Why AIs Act. Don’t Panic

Humans are inscrutable too. Existing rules and regulations can apply to artificial intelligence.

The Economist
Feb 16, 2018
Photo: PhonlamaiPhoto/Getty Images

There is an old joke among pilots that the ideal flight crew is a computer, a pilot and a dog. The computer’s job is to fly the plane. The pilot is there to feed the dog. And the dog’s job is to bite the pilot if he tries to touch the computer.

There is a snag, though. Machine learning works by giving computers the ability to train themselves; in effect, they adapt their own programming to the task at hand. People struggle to understand exactly how these self-written programs do what they do. When algorithms handle trivial tasks, such as playing chess or recommending a film, this “black box” problem can safely be ignored. When they decide who gets a loan, whether to grant parole or how to steer a car through a crowded city, it is potentially harmful. And when things go wrong, as they inevitably will even with the best systems, customers, regulators and the courts will want to know why.

For some people this is a reason to hold back AI. France’s digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction…
