Is strong AI inevitable?

Don’t speculate. Don’t yield to experts. Look for yourself.

Peter Sweeney
inventing.ai

--

Any question about the future is susceptible to unknown unknowns, inviting futile speculation about things not yet discovered. We need examples we can observe and interrogate now. While we don’t yet have strong AI, we do have rigorous examples of weak and strong intelligence. Comparing the nature of knowledge in its weak and strong forms offers a penetrating, non-technical look at the prospects for strong AI.

Our last stop on this tour of the AI landscape introduced induction as the prevailing theory of knowledge creation and the central role that explanations play in workable inductive systems. Here, I’ll apply that framework to shed light on one of the most contentious and important debates in AI: Are we on the path to artificial general intelligence? Is tomorrow’s strong AI the inevitable extension of today’s weaker examples?

Here’s the plan: We’ll examine two points along the knowledge hierarchy, one associated with weak AI, the other with a much stronger form of intelligence. I’ve labelled these points predictions and explanations, respectively, and I’ll make these terms more precise as we go. Through concrete examples, you can evaluate the quality of each intelligence…

--


Entrepreneur and inventor | 4 startups, 80+ patents | Writes on the science and philosophy of problem solving. Peter@ExplainableStartup.com | @petersweeney