Human-level AI is possible

Peter Voss
3 min read · Jan 10, 2017

Many people fret about AI growing too smart, and becoming an existential threat — a position loaded with its own controversies.

Then there are those highly skeptical that AI can ever achieve human-like intelligence, or Artificial General Intelligence (AGI).

I’ll present some evidence for the plausibility of AGI from two directions: first, by arguing that it seems inherently possible, and second, by providing examples of what has already been achieved. Finally, I’ll briefly address some of the open questions and unsolved problems in AGI.

Most scientifically inclined people can well imagine that it’s possible to build a robot with the cognitive ability of a lower-level animal — say an ant, a bee, or a mouse. For my argument it doesn’t really matter at what level one feels comfortable with this thought experiment.

Now from an evolutionary perspective, all of these biological brains have common ancestors, and importantly, brains improved continuously, without any ‘here-a-miracle-happened’ discontinuities.

More obviously, one can trace continuous brain improvements during development — from early fetus to adult.

Now if one grants that the cognitive functionality of the simpler brains can be replicated in a robot or computer, then it seems obvious that there is nothing in principle that would prevent ongoing improvements in capacity and function to eventually match those of the most sophisticated brains. Ours.

Of course it is possible that at some point the chosen technology (say programmable silicon chips) will run out of steam, or that we won’t be smart enough to figure out how to program some essential enhancement.

How far along are we towards human-level AGI? A complex question.

On the one hand, we have AI systems that already best human intelligence in impressive domains such as Jeopardy, Go, and theorem proving, as well as speech and image recognition in specific situations.

On the other, we come across some incredibly dumb ‘artificial intelligence’ systems. Examples include frustrating automated self-service, ‘personal assistants’ that don’t remember or learn, and bumbling experimental robots. Very few commercial AI systems today meet the basic requirements of intelligence.

While current commercial systems are greatly lacking in the general intelligence department, many, if not most, of the features required to replicate the full range of human cognition have already been demonstrated in various proof-of-concept prototypes. Unfortunately, at this stage very few people are focused on developing AGI.

What key problems have not yet been solved, or convincingly demonstrated?

There is certainly no clear agreement in the AI community as to what should be included on this list. Here are a few candidates frequently mentioned by researchers:

While I believe that all of these items already have workable solutions, several have not yet been convincingly demonstrated in real-world contexts. The key to solving them is having the right theory and approach.

Peter Voss is the founder of SmartAction and CEO of AGI Innovations Inc.

Please like and share — if you like :)
