The Road to Smarter AI
Just in the past few years, AI has made great leaps and bounds. From detecting lung cancer to playing Go (and winning), it seems like AI can do it all. But in reality, most of these problems are only being solved by narrow AI: AI built to solve one specific problem.
And when I say specific, I mean specific.
For example, DQN (the deep reinforcement learning algorithm that matched and beat human experts at Atari games) is extremely susceptible to small perturbations in its inputs: differences so little they're undetectable by humans can cause it to make huge mistakes. The AI's "understanding" is very shallow, and it can't seem to differentiate between the important and the insignificant.
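To make the perturbation idea concrete, here's a minimal sketch of the attack style used against such models (a "fast gradient sign" step), run against a toy linear classifier instead of a real DQN. Everything here (the weights, the input, the two classes) is made up for illustration; the point is only that a tiny, uniform nudge per input dimension is enough to flip the model's answer.

```python
import numpy as np

# Toy stand-in for a trained model: two classes, each scored by a dot product.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 100))   # weight vectors for class 0 and class 1
x = rng.normal(size=100)        # an input the model currently classifies

scores = W @ x
label = int(np.argmax(scores))  # the model's prediction on the clean input
other = 1 - label

# Gradient of (score_other - score_label) with respect to x is just the
# weight difference for a linear model; nudge every dimension a tiny,
# equal amount in the sign of that gradient (FGSM-style).
grad = W[other] - W[label]
margin = scores[label] - scores[other]      # how confident the model was
epsilon = margin / np.sum(np.abs(grad)) * 1.01  # just past the flip point

x_adv = x + epsilon * np.sign(grad)
adv_label = int(np.argmax(W @ x_adv))

print("per-dimension nudge:", epsilon)
print("prediction flipped:", adv_label != label)
```

The per-dimension nudge that suffices here is a small fraction of the input's natural scale; for image models the same trick works with per-pixel changes too small for a human to see.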
Problem: The bottom line is that AI can't understand its data well: it can't differentiate between causation and correlation, can't deal with uncertainty very well (which is abundant in real life), and can't adapt well to new, unseen data. Understanding is especially key in Natural Language Processing (NLP). Without extracting the meaning of what is being said, it is impossible to carry out an intellectual conversation: a big goal of AI in the future.
If AI is able to understand the meaning and context of its data, abstract definitions, generalizations, and even emotions, AI wouldn’t even be on the road anymore: it’d be in the sky.
Possible Solutions: The problem of understanding has always been a gargantuan task. How can a program made of 1s and 0s understand the meaning of language, its data, and abstract and complex concepts like freedom and thought? There have been many possible solutions proposed throughout the years, but there are three I’ll focus on today.
One solution, promoted by Ben Goertzel, a leading figure in Artificial General Intelligence (AGI aims to fulfill the original goal of AI: to create an intelligent AI that essentially has all the understanding, creativity, and problem-solving capabilities a human has), is to combine what are known as symbolic and subsymbolic AI. Subsymbolic AI is low-level AI that does not have much built-in structure or definition; instead, it builds its own patterns from raw data inputs. An example is a deep learning network that takes in raw inputs (such as pixels) and builds patterns from that data. Symbolic AI, on the other hand, has built-in structures, definitions, and rules that closely follow the lines of human logic and abstraction. An example is IBM's Deep Blue, the chess program that beat world champion Garry Kasparov. If these two kinds of AI were integrated, they could combine the meaning, logic, and structure of symbolic AI with the intense pattern-finding and data-crunching skills of subsymbolic AI.
“My own intuition is that the shortest path to AGI will be to use deep neural nets [subsymbolic AI] for what they’re best at and to hybridize them with more abstract AI methods like logic systems [symbolic AI], in order to handle more advanced aspects of human-like cognition.” — Ben Goertzel
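The hybrid idea above can be sketched in a few lines. This is a deliberately tiny, hypothetical example (the function names and rules are mine, not Goertzel's): a "perception" function stands in for a subsymbolic network that turns raw inputs into symbols, and a rule table stands in for the symbolic layer that reasons over those symbols.

```python
def perceive(pixels):
    """Stand-in for a subsymbolic module: maps raw intensities to a symbol.

    A real system would use a trained neural network here; this sketch
    just thresholds average brightness.
    """
    brightness = sum(pixels) / len(pixels)
    return "light" if brightness > 0.5 else "dark"

# Stand-in for the symbolic layer: explicit, human-readable rules
# that operate on symbols rather than raw data.
RULES = {
    "light": "it is daytime",
    "dark": "it is nighttime",
}

def reason(symbol):
    return RULES[symbol]

symbol = perceive([0.9, 0.8, 0.7, 0.95])  # raw "pixel" inputs
conclusion = reason(symbol)
print(conclusion)  # -> it is daytime
```

The division of labor is the whole point: the subsymbolic half handles messy raw data, and the symbolic half carries the structure, logic, and interpretability that pure pattern-matching lacks.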

While general intelligence does not strictly need a body, if we want to build something that can experience the emotions, thoughts, and perceptions of a human, shouldn't it have one? The argument for embodiment, or the "whole organism" view, is that to possess human-like cognition, AI needs a physical body. After all, as much as humans are minds, they are bodies as well. If an AGI had a robot body with which to interact with the world, it could learn experientially, as we do.
The third and last solution to the understanding problem I'll be talking about is as much about neuroscience and connectomics as it is about AI. If humans are the only available example of intelligence, it seems our best bet is to model the human brain itself. From mapping out every neuron and synapse in the brain (the Human Connectome Project!) to simply implementing brain-like features in neural networks, the brain is a huge source of information and inspiration in our search for artificial intelligence.
Takeaways:
If AI is to really be considered intelligent, it needs to have the capacity to understand. This understanding problem is one of the biggest problems facing AI right now.
Three possible solutions are:
- A Merging of Subsymbolic and Symbolic AI
- Embodying AI in a Physical Body
- Modeling and Getting Inspiration from the Human Brain
Feel free to email me at kevn.wanf@gmail.com to further discuss this article (or about anything you’d like :)
