Isn’t AGI just “moar” AI?
Many decades ago, the quest for artificial intelligence (AI) started with a grand vision of building a system with human level intelligence. That turned out to be hard, very hard. Why?
- computational resources were too limited, which restricted the types of algorithms that could be developed and tested,
- the available algorithms ran into mathematical difficulties when scaled up, and
- the large amounts of training data that the algorithms needed were expensive to curate.
For the most part, these difficulties beat the dream of human-level intelligence out of the community’s consciousness. The audaciousness of dreaming big was replaced by tinkering and perfecting toy algorithms on toy data sets, solving toy problems, and “narrow AI” (what we call “AI” today) was born.
Even the recent success of Deep Neural Networks is a continuation of the same trend. There are some deep neural network based attempts at building a single model that can apply to multiple problem domains, but in some ways that feels like trying to cross an ocean using a car by attempting to make the car faster and lighter.
If I may elaborate on the analogy, imagine a civilization that has two solutions to two specific transportation problems: cars on land and boats on water. Now say they need to travel across continents, which involves covering large distances over both land and water.
They could bolt a car and a boat together into a Frankenstein contraption, but they would quickly realize that this is not the way to go.
A “serial integration” system would be better: travel by car to the nearest port, hop into a boat to cross the ocean, and then ride a car on the other side to the destination. This uses the existing solutions, i.e. cars and boats, serially to solve a larger problem. It is a good, practical approach; in fact, this type of transport system is widely used for cargo.
This solution would be akin to having specific narrow AI solutions for, say, object recognition and speech synthesis, and then using those to build a system that describes a scene by looking at it, which logically involves recognizing objects and producing speech. Most of today’s approaches to solving complex AI problems are this type of integration of existing solutions.
Continuing with the analogy, airplanes are objectively a better way to travel across continents and no combination of cars and boats produces airplanes. Building airplanes needs fresh ground-up thinking.
AGI is not just more AI. I contend that it cannot be solved with “more layers”, a moniker under which I (respectfully) include all deep learning hacks, as well as the act of combining deep learning approaches into larger systems. Just as building the first airplane started with a toy airplane model, not with high-performance parts like engines and ALS systems, the path toward AGI goes through dumb/simpleminded AGI, not through highly capable pieces of narrow AI.
As AGI researchers, let’s figure out where to start:
- What capabilities must an AGI system have? For example: memory, working at various spatial and temporal scales, noise tolerance, planning, selecting its own hyperparameters, attention, processing pleasure/pain signals from the environment, etc. I expect a healthy debate to produce this list.
- What are good test problems for AGI to attempt? Essentially, what is the AGI counterpart of “image classification on MNIST”? Think “dumber” intelligence instead of parts of intelligence. Look earlier on the evolutionary tree. Look at agents in simple game environments. If the simplest test problem is still too complex, look at testing each capability from the list above.
- How would we evaluate performance on AGI tasks? For the MNIST problem, classification accuracy is a great metric that gives researchers a clear direction for improving their algorithms. Can we come up with similar metrics for AGI? Specifically, what are sets of tasks, all at approximately the same ‘level’ of AGI capability, that a system must perform well on to qualify as AGI at that ‘level’?
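To make the last question concrete, here is a minimal sketch of what a ‘level’-based evaluation harness might look like. Everything in it is a hypothetical assumption: the task names, the placeholder scores, the threshold, and the choice of the minimum task score as the aggregate. The min rule encodes one possible design stance: a narrow specialist cannot qualify by compensating for a weak capability with a strong one.

```python
def evaluate(agent, tasks):
    """Run the agent on each task; each task returns a score in [0, 1]."""
    return {name: task(agent) for name, task in tasks.items()}

def level_score(scores, threshold=0.9):
    """Aggregate per-task scores for one 'level'.

    The agent qualifies at this level only if it clears the threshold on
    every task; the level score is the weakest task score (min), so one
    strong task cannot mask a missing capability.
    """
    qualifies = all(s >= threshold for s in scores.values())
    return min(scores.values()), qualifies

# Toy stand-in agent and tasks, purely for illustration. Real tasks would
# exercise capabilities from the list above (memory, planning, noise
# tolerance, ...) in a simple environment.
def toy_agent(observation):
    return observation  # identity "policy"

tasks = {
    "noise_tolerance": lambda agent: 0.95,   # placeholder scores,
    "short_term_memory": lambda agent: 0.92, # not real measurements
    "simple_planning": lambda agent: 0.60,
}

scores = evaluate(toy_agent, tasks)
score, qualifies = level_score(scores)
# The weak planning score drags the level score down, so the toy agent
# does not qualify at this level.
```

The interesting debates hide inside `level_score`: whether to use min, mean, or something else, and where the threshold sits, are exactly the kind of metric questions the MNIST analogy suggests we need to settle.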