What is AGI?

Peter Voss
Published in Intuition Machine · Feb 21, 2017

Let’s start at the beginning. Why do we even need this term?

Sixty years ago, when the term ‘AI’ was coined, the ambition was to build machines that could learn and reason like humans. After several decades of trying and failing (badly), that original vision was largely abandoned. Nowadays almost all AI work relates to narrow, domain-specific, human-designed capabilities. Powerful as these current applications may be, they are limited to their specific target domain and have very narrow (if any) adaptation or interactive learning ability. Most computer scientists who graduated after the mid-1980s know AI only from that much watered-down perspective.

However, just after 2000 several of us felt that hardware, software, and cognitive theory had advanced sufficiently to rekindle the original dream. At that time we found about a dozen people actively doing research in this area and willing to contribute to a book sharing ideas and approaches. After some deliberation, three of us (Shane Legg, Ben Goertzel, and myself) decided that ‘Artificial General Intelligence’, or AGI, best described our shared approach. We wanted to give our community a distinctive identity, and to differentiate our work from mainstream AI, which is unlikely to lead to general intelligence.

The term ‘AGI’ gave a name to this emerging group of researchers, scientists, and engineers who were actually getting back to trying to develop ‘real AI’. This ‘movement’ was officially launched with the publication of the book Artificial General Intelligence, and has since gathered momentum with additional publications and annual AGI conferences. By now the term has become quite widely used to refer to machines with human- or super-human-level capabilities.

Some people have suggested using ‘AGI’ for any work that is generally in the area of autonomous learning, ‘model-free’, adaptive, unsupervised, or some such approach or methodology. I don’t think this is justified, as many clearly narrow AI projects use such methods. One can certainly assert that some approach or technology will likely help achieve AGI, but I think it is reasonable to judge projects by whether they are explicitly on a path (however far away it may be) to achieving the grand vision: a single system that can learn incrementally, reason abstractly, and act effectively over a wide range of domains — just like humans can.

Elsewhere I’ve elaborated on what human intelligence entails; here I want to take a slightly different angle and ask: “What would it take for us to say we’ve achieved AGI?” This is my proposed descriptive definition, followed by some elaboration:

A computer system that matches or exceeds the real time cognitive (not physical) abilities of a smart, well-educated human.

Cognitive abilities include, but are not limited to: holding productive conversations; learning new commercial and scientific domains in real time through reading, coaching, experimentation, etc.; applying existing knowledge and skills to new domains. For example, learning new professional skills, a new language (including computer languages), or even novel games.

Acceptable limitations include: very limited sense acuity and dexterity.

Alternative suggestions and their merits

“Machines that can learn to do any job that humans currently do” — I think this fits quite well, except that it seems unnecessarily ambitious. Machines that can do most jobs, especially mentally challenging ones, would get us to our overall goal of having machines that can help us solve difficult problems like ageing, energy, and pollution, and help us think through political and moral issues. Naturally, they would also help build the machines that will handle the remaining jobs we want to automate.

“Machines that pass the Turing Test” — The current Turing Test asks too much of the AI (it may have to dumb itself down to fool judges into believing it is human) and too little (conversation time is limited). A much better test would be to see whether the AI can learn a broad range of new, complex, human-level cognitive skills via autonomous learning and coaching.

“Machines that are self-aware/ can learn autonomously/ do autonomous reduction/ etc.” — These definitions grossly underspecify AGI. One could build narrow systems with these characteristics (and probably some already exist) that are nowhere near AGI (and may not be on the path at all).

“A machine with the ability to learn from its experience and to work with insufficient knowledge and resources.” — These are important requirements, but they lack a specification of the level of skill one expects. Again, systems already exist that have these qualities but are nowhere near AGI.

Some objections

Why specify AGI in terms of human abilities? — While we’d expect AGI cognition to be quite different (instant access to the Internet, photographic memory, logical thinking, etc.), the goal is still to free us from most work. In order to do that, it must be able to operate in our environment, and to learn interactively via natural language and human interaction.

Why not require full sense acuity, dexterity, and embodiment? — I think a reasonable relaxation of the requirements is to initially exclude tasks that demand high dexterity and sense acuity. The reason is that the initial focus should be on cognitive ability — i.e. a “Helen Hawking” (Helen Keller/ Stephen Hawking). The core problem is building the brain, the intelligence engine. It can’t be totally disconnected from the world, but its senses and actuators do not need to be very elaborate, as long as it can operate other machines (tool use).

Peter Voss is the founder of SmartAction and CEO of AGI Innovations Inc.

Please like and share — if you like :)
