Can Computers Have Common Sense? Polanyi’s Paradox, Judgment and Artificial Intelligence
The path to building intelligent systems is full of fascinating challenges. Many of those challenges relate to our ability to create programs that resemble human decision-making. Common sense, intuition and judgment are some of the seemingly mystical human cognitive abilities that appear impossible to replicate in artificial intelligence (AI) systems.
What are judgment and common sense anyway? There are plenty of verbose and often contradictory definitions of those human cognitive skills, but most of them agree that they relate to our ability to reason beyond data and calculations. As disciplines such as behavioral economics and cognitive psychology explain, we regularly make decisions based on factors other than statistics. Every year, Amazon’s business book section is packed with titles from business leaders explaining how to make decisions based on “gut feeling”. Malcolm Gladwell’s Blink is a fascinating book about this subject. If we make decisions based on common sense, intuition and judgment, can we then “simulate” those skills in AI systems? The short answer is yes, but don’t stop reading just yet ;)
One of my favorite ways to think about the mismatch between human reasoning and AI learning algorithms is what is known as Polanyi’s Paradox. The Hungarian-British polymath Michael Polanyi studied our ability to acquire knowledge that we can’t quite explain. Just try to explain to a kid, step by step, how to ride a bike and you will see what I mean. You clearly know how to do it, but there is no easy way to explain it. Polanyi’s Paradox summarizes this cognitive phenomenon: many times “we know more than we can tell”.
As you can imagine, Polanyi’s Paradox has deep implications for the AI field. After all, if we can’t explain our knowledge, how can we possibly train AI agents?
Fortunately, AI has evolved past Polanyi’s Paradox, and we can thank Google for that. For decades, Go was considered the poster child for Polanyi’s theory. Many of the well-accepted strategies in the ancient game are very hard to model as a series of rules and are typically more a matter of human intuition. As everybody knows, between 2016 and 2017, DeepMind’s AlphaGo program regularly defeated the world’s top Go players before graciously retiring a few months ago (whatever that means for an AI program ;) ).
AlphaGo broke past Polanyi’s Paradox using very clever AI techniques. Instead of relying on traditional symbolist algorithms, such as inverse deduction, to teach AlphaGo the rules of clever Go strategies, the DeepMind team used a combination of deep learning and reinforcement learning to train AlphaGo on super-complex strategies. Initially, AlphaGo studied millions of Go games, trying to infer hidden patterns between specific strategies and the outcomes of those games. After that, the researchers had AlphaGo play numerous games against itself, building new knowledge along the way.
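Nobody is going to reproduce AlphaGo in a blog post, but the self-play idea itself is simple enough to sketch. Here is a minimal, hypothetical illustration in Python: instead of Go and deep neural networks, it uses the tiny game of Nim (players alternate taking 1–3 stones; whoever takes the last stone wins) and a plain lookup table of value estimates. The agent gets no strategy rules at all; it improves purely by playing against itself and averaging the outcomes it observes. All names here (`train`, `best_move`) are mine, not DeepMind’s.

```python
import random

# A toy illustration of the self-play idea behind AlphaGo's training:
# tabular value estimates on the game of Nim instead of deep networks on Go.
ACTIONS = (1, 2, 3)  # a move removes 1, 2 or 3 stones from the pile

def train(start=10, episodes=30000, eps=0.2, seed=0):
    """Learn action values for Nim purely by self-play.

    Q[(pile, action)] is the average observed outcome (+1 win, -1 loss)
    for the player about to move. The terminal reward is propagated
    backwards with alternating sign because the two players take turns.
    """
    rng = random.Random(seed)
    Q, N = {}, {}  # value estimates and visit counts per (pile, action)
    for _ in range(episodes):
        pile, history = start, []
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < eps:
                action = rng.choice(legal)
            else:
                action = max(legal, key=lambda a: Q.get((pile, a), 0.0))
            history.append((pile, action))
            pile -= action
        reward = 1.0  # whoever takes the last stone wins
        for state, action in reversed(history):
            n = N[(state, action)] = N.get((state, action), 0) + 1
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + (reward - old) / n  # running mean of outcomes
            reward = -reward  # zero-sum game: the opponent's result is the negation
    return Q

def best_move(Q, pile):
    """Greedy move under the learned values."""
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda a: Q.get((pile, a), 0.0))
```

With enough self-play games, the greedy policy tends to rediscover Nim’s known optimal strategy (leave the opponent a multiple of four stones) without anyone ever writing that rule down, which is the whole point: the knowledge lives in the learned values, not in hand-coded instructions.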
Where Does That Take Us?
The lessons from AlphaGo suggest that the way to build human-like judgment into AI systems is to architect systems that learn on their own and to include judgment-based decisions in the training data. More about this in a future post…