AI will outpace us (but that’s ok)

Peter Voss
Jan 14, 2017


Human intelligence has not increased in 2000 years. Aristotle, Michelangelo, and Newton would be on par with today's most brilliant minds, given our current knowledge base and tools.

Knowledge is not the same as intelligence. Wikipedia has tons of knowledge, but no intelligence. A young child can have little knowledge yet be highly intelligent. Intelligence allows you to acquire and effectively use knowledge.

Of course, our practical effectiveness in solving problems has increased immeasurably, but the intelligence of individuals themselves has not. Either evolution hasn't had enough time, or raw intelligence is not a primary driver of evolutionary success. More than likely, something like EQ (emotional intelligence) has been more important.

To assess the relative pace at which AI and individual human core intelligence are likely to increase, we'll need to look at the positive and negative forces affecting future improvements, both in AI and in humans.

AI is now growing exponentially. Driven by Moore's Law on the hardware side, a massive expansion of available training data, and wide-ranging improvements in software, an increasing number of applications are closing in on, or even exceeding, human-level capabilities. There is every reason to believe that this trend will continue, and probably even accelerate.

However, will this growth lead to general human-level intelligence, i.e. AGI? Here things are not quite so clear. I have argued that the current mainstream approach will not (directly) lead to AGI. On the other hand, I see strong evidence that human-level (and beyond) AGI is quite possible. The main impediment today seems to be that too few people are focusing on that goal.

Provided that one starts with a workable, comprehensive theory of AGI, a good case can be made that all of the functionality required for AGI can be added largely incrementally, both in scope and in capacity.

Individual human core intelligence (i.e. not knowledge!) is incredibly hard to increase to any significant degree. While pedagogical methods can greatly sharpen one's ability in a particular field, general intelligence is not very amenable to improvement. Nootropics and other brain stimulation can move the needle a bit, but significant boosts, of the order we'd expect from AGI, will only be possible via something like brain implants or genetic engineering.

Wetware improvements face numerous hurdles that AGI does not:

  • No design blueprints are available
  • Inscrutable, evolution-driven design (versus engineering approach)
  • No debugging aids built into the brain; very difficult to analyze
  • The brain is laden with additional complexity for biologically required epigenesis, metabolism, error correction, redundancy, and self-repair
  • Already highly optimized by evolution
  • Extreme evolution-driven multi-purpose functional integration
  • Brains can’t be backed up, restored, re-booted, or hacked experimentally
  • Experimentation, training and testing can’t be sped up or repeated millions of times
  • Biological brains have no simple speed or capacity upgrade option
  • Knowledge and skills can’t be imported at gigabit speeds, or copied
  • Brains cannot easily interface with databases, other brains, or computers
  • Surgery and other biological interventions require long healing times
  • No easy path to integrate high-level maths, statistics, or logic functionality
  • No easy way to effectively control mood, emotion, and meta-cognition
  • Very expensive and time consuming to replicate
  • Requires bio- and nano-tech not likely to be available for a long time

Actually, I only needed to mention one deal-breaking hurdle: the FDA.

In summary, improving computer-based intelligence will be significantly easier and faster than trying to hack human brains. Naturally, this conclusion is predicated on the assumption that a workable AGI design is available.

What are the implications for human employment and life? Some thoughts on that are here (employment) and here (ethics).
