The Possibility of a Deep Learning Intelligence Explosion

Carlos E. Perez
Intuition Machine
Nov 29, 2017

François Chollet argues for the impossibility of an intelligence explosion in his essay "The Impossibility of an Intelligence Explosion." It is a strong article, with the exception of its conclusion. Chollet accurately describes many of the obstacles we should expect to encounter in creating an advanced artificial general intelligence (AGI). These obstacles are as follows (I use my own categorization, but its mapping to Chollet's should be straightforward):

  • Embodied Intelligence — All biological intelligence on this planet exists as a consequence of the environment in which it evolved. Biological species mold themselves to survive best in their context. In the human case, our intelligence is a consequence of the civilization we are born into. Can an intelligence learn capabilities that go beyond the environment it is in? Is our environment our bottleneck?
  • Collective Intelligence — Our present intelligence is a result of our civilization. It took billions of people and thousands of generations of Homo sapiens to create the knowledge and tools we possess, and no single human is capable of understanding the collective knowledge of our civilization. Can intelligence grow faster than the pace of our civilization? Is human civilization our bottleneck?
  • Bootstrapping Intelligence — An intelligence explosion occurs when there is a compounding effect on the growth of intelligence. Civilizations have allowed intelligence to accelerate. One can simply inspect the technologies that permitted greater connectivity among people: innovations in transport (boats, trains, cars, planes), in knowledge capture (writing, printing), in communication (language, wired and wireless networks), and finally in computation (mathematics, computers). The intelligence of a civilization accelerates because of the connectivity between its people (a back-of-envelope sketch of this compounding effect follows this list). So civilization is one mechanism that has a compounding effect on intelligence; transport, language, writing, and computation are all means of compounding it. However, can we find another mechanism, one of computational origin, that allows intelligence to grow exponentially?
  • Intrinsic Constraints — There are intrinsic bottlenecks that prevent progress, and there is no way to know whether they can be overcome in finite time. In general, these constraints are related to the previous problems: we are constrained by physics, by our civilization, and by our tools. Are there intrinsic constraints with regard to AGI? Are there constraints other than the three mentioned above?
  • Scalability Constraints — Can intelligence scale without limits? Alternatively, can civilization scale without limits? For the latter, we are already hitting limits: the finite resources of our planet put us all at risk. Intelligence, however, is something that can be virtualized (i.e., made digital), and the constraints of physics have less of an effect in the virtual world. Can we create scalable intelligence, such that a collection of intelligences is able to amplify its own intelligence? Do we not see this already, with efficient companies working collectively to build products that no individual could build alone? Knowledge industries like software development have created learning platforms that amplify the intelligence of their participants.
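
The compounding claim in the bootstrapping bullet above can be made concrete with a back-of-envelope sketch: if each pairwise connection between people is a channel for ideas to recombine, the number of channels grows roughly quadratically with the population, much faster than the population itself. A minimal illustration in Python (the numbers are purely illustrative):

    # Pairwise connections among n people: n * (n - 1) / 2.
    for n in (10, 100, 1_000, 10_000):
        links = n * (n - 1) // 2
        print(f"{n:>6} people -> {links:>12,} pairwise connections")
    # Population grows 1,000x (10 -> 10,000) while the number of
    # possible exchanges grows roughly 1,000,000x (45 -> 49,995,000).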

The flaw in Chollet's article is that he assumes the pace of progress is linear. There is little evidence that this is true; if anything, the pace of progress has been exponential. Like most technologies, however, there will be plenty of bumps in the road. Take, for example, controlled nuclear fusion. Scientists have been working on it for decades without success, and that lack of success could not have been predicted decades ago. If you had asked scientists in the 1970s whether we would achieve controlled nuclear fusion by 2017, the majority would have said yes. Yet today we are still in the research stage: the physics is well understood, yet we are unable to engineer a solution.

Could AGI be in a similar situation to nuclear fusion? Are AGI researchers overestimating their own ability to achieve an intelligence 'nuclear fusion'? There are more unknowns in AGI than there are in nuclear fusion. However, AGI may not face the same hard physical constraints: intelligence resides in the realm of information processing, and that world is a virtual one.

The primary reason the effect of physics is negligible is Moore's law, which has delivered exponential growth for several decades and isn't stopping (Quanta reports on a system using atomic switches that simulates neuromorphic computing). Furthermore, Deep Learning workloads are of the embarrassingly parallel variety, so despite silicon clock rates hovering below 4 GHz for over a decade, we can still build more capable silicon. In 2012, when GPUs finally had enough computational power, Deep Learning emerged from an almost forgotten approach known as artificial neural networks. To illustrate the acceleration that comes from new designs: the systolic array architecture used by Google's TPU and Nvidia's Volta has led to a 1,000% speedup in computation in a single architecture upgrade.
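
To make the systolic-array idea concrete, here is a toy, cycle-by-cycle simulation in Python of an output-stationary systolic array computing a matrix product. This is a sketch for intuition only, not how the TPU or Volta is actually implemented; in real hardware the multiply-accumulates in each cycle happen simultaneously in silicon rather than in nested loops:

    import numpy as np

    def systolic_matmul(A, B):
        """Simulate an output-stationary systolic array: A streams in from
        the left, B from the top, each skewed one cycle per row/column, and
        processing element (i, j) accumulates C[i, j] in place."""
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m))
        for t in range(n + m + k):            # clock cycles
            for i in range(n):
                for j in range(m):
                    kk = t - i - j            # operand pair reaching PE (i, j) now
                    if 0 <= kk < k:
                        C[i, j] += A[i, kk] * B[kk, j]
        return C

    A, B = np.random.randn(4, 3), np.random.randn(3, 5)
    assert np.allclose(systolic_matmul(A, B), A @ B)

The point of the design is that every multiply-accumulate within a cycle uses purely local data movement between neighboring cells, which is exactly the structure an embarrassingly parallel workload like Deep Learning can exploit.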

To build an AGI, one will therefore need: (1) a way to build learning environments that train AIs to gradually improve their capabilities, (2) mechanisms to enhance collective intelligence, (3) a new intelligence-compounding mechanism (i.e., self-play), and (4) a way to scale collective intelligence. I think we are beginning to see glimpses of this "bootstrap" or "strange-loop" mechanism in play with Deep Learning. How does a system like AlphaGo Zero learn to improve its game simply by playing itself? The claim that it is impossible to gain intelligence beyond one's own experience is clearly disproven by AlphaGo Zero.
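
To see the self-play "strange loop" in miniature, consider the sketch below. It is emphatically not AlphaGo Zero's algorithm (which combines a deep network with Monte Carlo tree search); it is a toy, assumed setup in which tabular Q-learning masters single-pile Nim (take 1 to 3 stones; whoever takes the last stone wins) with no external teacher, purely by playing against its own current policy:

    import random
    from collections import defaultdict

    N, ACTIONS, ALPHA, EPS = 21, (1, 2, 3), 0.1, 0.2
    Q = defaultdict(float)   # Q[(stones_left, action)], current player's view

    def legal(s):
        return [a for a in ACTIONS if a <= s]

    def policy(s, greedy=False):
        moves = legal(s)
        if not greedy and random.random() < EPS:
            return random.choice(moves)       # explore
        return max(moves, key=lambda a: Q[(s, a)])

    for episode in range(50_000):             # both "players" share one brain
        s = random.randint(1, N)
        while s > 0:
            a = policy(s)
            s_next = s - a
            # Zero-sum negamax target: a win now is +1; otherwise our value
            # is the negation of the opponent's best value at s_next.
            target = 1.0 if s_next == 0 else -max(Q[(s_next, b)] for b in legal(s_next))
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s_next

    # From winning positions (s not a multiple of 4) the greedy policy
    # rediscovers the known optimal move, taking s % 4 stones, without
    # ever observing a human game.
    for s in (1, 2, 3, 5, 6, 7, 9, 10, 11):
        print(s, policy(s, greedy=True))

The training data is generated by the system itself, and as the policy improves, so does the quality of the games it learns from next; that feedback loop is the compounding mechanism.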

Individual technologies do have a habit of plateauing: as they mature, their growth slows and the returns diminish. We can say the same of individual humans; we learn a great deal when we are young, but as we age our ability to capture knowledge slows. Both, however, are instances of closed systems, and closed systems face limits to growth. Eventually an individual's growth plateaus, just as an individual technology's does.

Open systems, however (collective technologies and collective intelligence), have greater-than-linear scalability. Geoffrey West, in his book Scale, describes the superlinear scaling of cities: socioeconomic outputs grow with population raised to an exponent of roughly 1.15. Collections of individuals can scale superlinearly. AlphaGo Zero is not an individual; it has itself to play with! AGI has the potential to scale superlinearly, if not exponentially.
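
A quick back-of-envelope calculation shows what that exponent buys. Under West's empirical law, output Y is proportional to population N raised to the 1.15, so doubling the population more than doubles the output:

    # Superlinear scaling, Y ~ N**1.15 (West's estimate for cities).
    beta = 1.15
    for scale in (2, 4, 10):
        print(f"{scale:>2}x population -> {scale**beta:.2f}x output")
    # 2x -> 2.22x, 4x -> 4.92x, 10x -> 14.13x: per-capita output rises
    # about 11% with every doubling, rather than staying flat.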

Ultimately, the question reduces to one of information thermodynamics. Closed systems tend toward entropy; open systems need not, so long as new information continues to be introduced into the ecosystem. Does an open system need to scavenge for new information in other realms, or can it synthesize imagined new information (and thereby bootstrap itself)? Can a mathematician imagine, in his own mind, a new kind of algebra? Can a self-taught mathematician like Ramanujan conjure new kinds of mathematical identities seemingly from nowhere? Can AlphaGo Zero invent new strategies without human supervision? The answer to all of these is a definite yes, because imagination is not confined to physics. One can explore in imagination (or in virtual worlds) beyond the constraints that physics (i.e., time and space) imposes.

Finally, one can never discount progress made out of plain blind luck. That is how evolution makes progress, and it is likely how we will achieve the breakthroughs in AGI.

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution
Exploit Deep Learning: The Deep Learning AI Playbook
