One problem to explain why AI works

A framework for understanding AI and keeping up with it all.

Peter Sweeney
inventing.ai
20 min read · May 9, 2018


Ask your resident experts, Why does AI work? Readily, they’ll explain How it works, their methods emptying into a mesmerizing jargonfall of gradient descent. But why? Why will an expensive and inscrutable machine create the knowledge I need to solve my problem? A glossary of technical terms, an architectural drawing, or a binder full of credentials will do little to insulate you from the fallout if you can’t stand up and explain Why.

The purpose of AI is to create machines that create good knowledge. Just as a theory of flight is essential to the success of flying machines, a theory of knowledge is essential to AI. And a theoretical basis for understanding AI has greater reach and explanatory power than the applied or technical discussions that dominate this subject.

As we’ll discover, there’s a deep problem at the center of the AI landscape. Two opposing perspectives on the problem give a simple yet far-reaching account of why AI works, the magnitude of the achievement, and where it might be headed.

Part 1: Induction as the prevailing theory

Many overlook the question because it’s obvious how knowledge is created: We learn from observation. This is called…


Entrepreneur and inventor | 4 startups, 80+ patents | Writes on the science and philosophy of problem solving. Peter@ExplainableStartup.com | @petersweeney