The True Challenge of Generalized AI
Spoiler Alert: It’s not engineering
Some weeks ago, we marveled at how a big-ass computer could beat a European champion at Go. Motoki Wu, an NLP researcher at Foxtype, however, looked at it from a different angle.
He jokes, but there’s a lot of truth here. This supercomputer might have been able to beat a formidable human opponent through intense number crunching (and smart search-pruning strategies, à la Monte Carlo tree search), but so what? I can put on pants. Can a computer put on pants? Nope. Does the ability to put on pants (or be bilingual) make me more formidable than a pro Go player?
The answer is an emphatic no, no it doesn’t. Because that Go player can put on pants too.
People were amazed that the Go barrier was broken. With 10^(10^117) possible permutations, Go was long considered intractable by normal computing methods: a “grand challenge,” because the planet would die off before all the possible moves could be calculated. The challenge lay not in the game’s sophistication but in the sheer lack of processing power to perform the necessary computations.
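To get a feel for why brute force was hopeless, here’s a rough back-of-the-envelope sketch. The branching factor (~250 legal moves per position) and game length (~150 moves) are commonly cited approximations, not exact figures:

```python
import math

# Commonly cited approximations for Go (assumptions, not exact figures):
branching_factor = 250   # legal moves available in a typical position
game_length = 150        # moves in a typical professional game

# Exponent of the naive game-tree size: 250^150 = 10^(150 * log10(250))
tree_exponent = game_length * math.log10(branching_factor)

print(f"Naive game tree: roughly 10^{tree_exponent:.0f} lines of play")
print("For scale: the observable universe holds roughly 10^80 atoms")
```

Even this conservative estimate lands around 10^360 lines of play, which is why nobody seriously proposed enumerating them.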
With AlphaGo, Google devised a clever way to mimic the moves of many players until a computer could satisfactorily and dependably play at the level of a pro Go player. It did so by smartly limiting the field of possible actions based on movement patterns inferred from real players.
In other words: Go wasn’t brute-forced, as a computer might be expected to do it. Humans at Google created a system in which a computer could minimize the necessary calculations by learning patterns from the best human players of the game.
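AlphaGo’s real machinery (policy and value networks feeding a Monte Carlo tree search) is far more involved, but the core trick of pruning the search with a learned policy can be sketched like this. The `learned_policy` function and move representation below are hypothetical stand-ins, not AlphaGo’s actual design:

```python
def learned_policy(state, legal_moves):
    """Hypothetical stand-in for a policy trained on expert games:
    returns a probability for each legal move. Here: uniform, for the sketch."""
    return {m: 1.0 / len(legal_moves) for m in legal_moves}

def pruned_search(state, legal_moves, top_k=5):
    """Instead of expanding every legal move (brute force), keep only the
    top_k moves the policy finds most promising. Pruning at every level is
    what collapses Go's astronomical branching factor into something tractable."""
    probs = learned_policy(state, legal_moves)
    ranked = sorted(legal_moves, key=lambda m: probs[m], reverse=True)
    return ranked[:top_k]

moves = list(range(250))  # ~250 legal moves in a typical Go position
candidates = pruned_search("some-position", moves)
print(f"Expanded {len(candidates)} of {len(moves)} moves")
```

The point of the sketch: the “intelligence” lives in the policy that was fitted to human play; the search itself is ordinary bookkeeping.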
But is this intelligence? It depends who you ask…
It can be argued that this is how humans learn to play the game: by watching and learning from people better than you are, seeing how they behave, and through continual trial and error. People were excited that computers could watch and “learn” from far more games than a human ever could, at a much faster pace.
The other side would argue that this is clever engineering—that the computer didn’t really “learn” anything. As any serious gambler will tell you, as long as a game involves odds and variables, the system can be exploited and manipulated to produce a desired outcome. AlphaGo is a (clever) Go-calculator that just takes input and delivers output.
Which brings me back to…
Personally, I will argue that no matter how good a system gets at “learning,” it doesn’t really add up to intelligence. To illustrate my point, I will borrow from Matt Might’s brilliant short, The Illustrated Guide to a Ph.D.:
Then you get a master’s and specialize in your field. Then you research…
And focus and push and keep driving your specialized field of study…
For the sake of argument, let’s assume we have the ability to teach computers anything we wanted, up to the level of a Ph.D. Using Might’s model, this is how it might go:
No matter how many things you teach a computer system, or how good the system becomes at any one of them, there is currently no way to fill in the giant knowledge gaps between those verticals. The most accurate mathematical mind, trapped inside an inescapable bubble, is for all intents and purposes useless.
But more importantly, what good is intelligence if it cannot synthesize different and disparate concepts and ideas into a coherent system, one that can be applied to existing areas or used to create new ones? If I can throw, swing a bat, and run, I can play baseball. (I might even have been able to invent baseball.) It might be years, even decades, until an AI can perform these tasks individually; how long until it can do them simultaneously?
“We kind of know the ingredients; we just don’t know the recipe. It might take a while.” — Yann LeCun
What makes human knowledge and intelligence unique is that we are able to understand how to do completely new things through knowledge of separate, loosely related activities. We can learn to write better by reading. We can learn to think by speaking different languages. We can learn dozens, if not hundreds, of life lessons by playing sports. And as good as we are at doing this naturally and without conscious effort, we’re not even close to understanding how the process works. We know this because we still have a critical education problem in the US: while there are many theories of how learning works, none of them is concrete, nor is any even remotely close to being codified in any programming paradigm (not even LISP).
If we can only create machines that are great at specific tasks, what we are currently creating in our AI labs are savants, not polymaths.
Funnily enough, savants are thought to be the product of a damaged left anterior temporal lobe, an area of the brain key to processing sensory input, recognizing objects, and forming visual memories. These tasks are exactly what current AIs are great at. Does this imply that the artificial intelligence we create is just a small portion of a damaged brain?
Developing a general AI isn’t a matter of getting a computer to do increasingly complex tasks. Any closed system with set rules, given enough evidence and data, can be reduced to a set of algorithms. With lots of practice and funding, we might even be able to extend and generalize specific algorithms to closely related tasks. But the notion that learning how to do lots of tasks amounts to “intelligence” is fundamentally flawed. Creating systems that do more and more things, however elegantly and extensibly, is not the creation of intelligence.
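Tic-tac-toe is the textbook example of such a closed system: fixed rules and a tiny state space mean plain minimax reduces the entire game to an algorithm, with no “learning” involved. A minimal sketch:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None.
    The board is a 9-character string, row by row."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score a position: +1 if X can force a win,
    -1 if O can, 0 for a forced draw. X maximizes, O minimizes."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    scores = [minimax(board[:i] + player + board[i+1:],
                      'O' if player == 'X' else 'X')
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if player == 'X' else min(scores)

# Perfect play from the empty board is a forced draw.
result = minimax(' ' * 9, 'X')
print(result)  # 0
```

Exhaustive search works here precisely because the system is closed and small; Go’s tree is too large for the same treatment, which is the gap that learned pruning fills.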
The true challenge of generalized AI is going to be solved somewhere in the humanities. Before we can teach robots how to think intelligently (as a human might), we must understand crucial parts of the human condition: what it means to learn, how learning works, and what it means to be human. This is just the ante required to sit at the table. Otherwise, we are only building robots that can do more and more complex parlor tricks. We simply can’t solve for x if we cannot turn x into a closed system we understand (see Moravec’s paradox).
Rest assured, we are nowhere near the dawn of SkyNet.
Tell me how wrong I am on Twitter: @adailyventure