Deep Learning is Splitting into Two Divergent Paths

Carlos E. Perez
Published in Intuition Machine
Aug 15, 2017


Credit: https://unsplash.com/@pabloheimplatz

A common but incorrect assumption about the evolution of Artificial General Intelligence (AGI), that is, self-aware sentient automation, is that it will follow a path of ever more intelligent machines and then accelerate toward super-intelligence once human-level sentient automation is created. I’m writing this article to argue that this will likely not be the case, and that there will instead be an initial divergence into two kinds of artificial intelligence.

First, let us establish that the starting point will come from present-day Deep Learning technology. More specifically, I refer to these as intuition machines (see: Intuition Machines a Cognitive Breakthrough). There will be a fork in the evolution of more intelligent machines. One branch will build super-human narrow intelligence. The second branch will focus on more adaptable, biologically inspired automation. We will explore these two branches further.

In the first branch, we will see continued specialization of machines to solve specific narrow problems. DeepMind’s AlphaGo is a representative example of this kind of machine. It is highly engineered to solve a specific problem well, and to do so in a manner that is super-human. AlphaGo combines Deep Learning, Monte Carlo Tree Search, and Reinforcement Learning to master the ancient game of Go, a game in which progress toward more advanced play has been likened to reaching a higher level of consciousness.
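To make the pattern concrete, here is a minimal sketch of the general idea AlphaGo popularized: a learned value estimate guiding a classical tree search. This is not DeepMind’s implementation; the toy game, the stand-in value network, and the simulation budget below are all illustrative assumptions.

```python
# Sketch: a classical UCT tree search whose leaf evaluation is a learned
# value estimate instead of a full random rollout (toy assumptions throughout).
import math
import random

def value_net(state):
    """Stand-in for a trained value network: score a state in [-1, 1]."""
    random.seed(hash(state) % (2**32))  # deterministic toy "evaluation"
    return random.uniform(-1.0, 1.0)

def legal_moves(state):
    """Toy game: a state is the tuple of moves played; 3 moves per turn, depth 4."""
    return [] if len(state) >= 4 else [0, 1, 2]

class Node:
    def __init__(self, state):
        self.state, self.visits, self.value_sum = state, 0, 0.0
        self.children = {}  # move -> Node

def select_move(node, c=1.4):
    """UCT: balance exploitation (mean value) and exploration (visit counts)."""
    def uct(child):
        if child.visits == 0:
            return float("inf")
        return (child.value_sum / child.visits
                + c * math.sqrt(math.log(node.visits + 1) / child.visits))
    return max(node.children, key=lambda m: uct(node.children[m]))

def search(root_state, simulations=200):
    root = Node(root_state)
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: walk down the tree, expanding children lazily.
        while legal_moves(node.state):
            for m in legal_moves(node.state):
                node.children.setdefault(m, Node(node.state + (m,)))
            node = node.children[select_move(node)]
            path.append(node)
        # Evaluation: the learned value net replaces a random rollout.
        value = value_net(node.state)
        # Backpropagation: update statistics along the visited path.
        for n in path:
            n.visits += 1
            n.value_sum += value
    return max(root.children, key=lambda m: root.children[m].visits)

print("preferred first move:", search(root_state=()))
```

The point of the sketch is the division of labor: a conventional search algorithm supplies the structure, while the learned network supplies the intuition about which positions are promising.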

One thing the Western world is overlooking is that the dominating play of AlphaGo, an AI developed in Britain, was the equivalent of a Sputnik moment. Asian nations, in reaction to this achievement, are doubling down on A.I. investment so as not only to catch up but perhaps to overtake the West in AI capabilities. The West does not realize what it has invented, and only the keenest of the Internet giants are making the effort necessary to keep an edge.

This optimized-intelligence path will produce automation that works well in highly complex domains and thrives at exploring extremely high-dimensional problem spaces.

We can expect to see many new applications that combine conventional computer science algorithms with Deep Learning to achieve sophisticated narrow intelligence. Self-driving cars and medical diagnosis will be two areas where this has a major impact. However, this approach will not require AGI, or rather, self-aware intelligence.

The second branch of development, one that takes a more biologically inspired approach (see: Biologically Inspired Software Architecture), will be driven by mechanisms that we find in biological systems. These are systems that will be much more adaptable than present-day technologies. Development in this space will likely be driven by robotic applications that require this kind of adaptability to an environment. However, as with many animals in the natural world, a human level of intelligence is not necessary for survival. These kinds of systems are likely to be the preferred way of interfacing with humans.

There is a common sentiment among Artificial General Intelligence (AGI) researchers that the research themes of Deep Learning have completely missed the point. Their sentiment is well founded in that Deep Learning systems clearly lack the kind of adaptability we find in biological systems. Unfortunately, many AGI researchers see this limitation as evidence that Deep Learning is on the wrong path. Nothing could be further from the truth. Deep Learning is likely the correct starting point for AGI.

High-level intelligence is not necessary for survival. In fact, simply by observing the natural world, we can see that sentient forms of life don’t require super-intelligence. The current incorrect bias is that as you progress toward increasing intelligence, sentient intelligence will emerge by default. That is, if the first branch above is taken, then we only need to strive for more intelligent algorithms and we will accidentally stumble upon sentient intelligence. This is unlikely, because the mechanisms for survival don’t necessarily align with the mechanisms of intelligent machines. These adaptable systems don’t require the kind of high-dimensional or complex inference demanded by the first branch.

The interesting commonality of both branches, though, is that intuition machines (i.e., Deep Learning automation) are employed in each. However, the objective functions will likely be entirely different. The first branch will have more finely tuned objective functions; these systems will be highly optimized to perform tasks extremely efficiently. The second branch will likely be required to derive its own objective function; these systems favor adaptability over optimization and are more likely to serve as interface agents to humans.
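A minimal sketch of this contrast, under toy assumptions (the names `task_loss` and `intrinsic_reward` and the linear "world model" are purely illustrative): the first branch optimizes an objective handed to it by an engineer, while the second branch generates its own learning signal, here a simple curiosity-style reward based on the prediction error of its own world model.

```python
# Sketch: externally specified objective (branch 1) vs. self-derived
# objective (branch 2). All names and the toy model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Branch 1: a fixed, hand-specified objective (e.g., supervised squared error).
def task_loss(prediction, target):
    """Externally defined goal: minimize error on a task chosen by the engineer."""
    return float(np.mean((prediction - target) ** 2))

# Branch 2: an objective the agent derives for itself, here a curiosity-style
# signal: seek experiences its own forward model predicts poorly.
def intrinsic_reward(world_model, state, next_state):
    """Self-derived goal: prediction error of the agent's own world model."""
    predicted_next = world_model(state)
    return float(np.mean((predicted_next - next_state) ** 2))

# Toy usage: the same learned component (a linear map) serves both regimes.
weights = rng.normal(size=(4, 4))
world_model = lambda s: s @ weights

state, next_state = rng.normal(size=4), rng.normal(size=4)
target = rng.normal(size=4)

print("branch 1 (fixed objective):   ", task_loss(world_model(state), target))
print("branch 2 (derived objective): ", intrinsic_reward(world_model, state, next_state))
```

The design difference is where the goal comes from: in the first case the goal is frozen into the loss by the engineer, while in the second the agent’s own model of the world determines what is worth pursuing next.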

As I’ve written earlier, the first branch, the one that favors optimization, will likely displace a vast number of workers (see: Special Narrow Intelligence). This is simply because current jobs are designed to be occupied by specialists rather than generalists. This kind of narrow intelligence is already here today and will only get better. Therefore, the onslaught of job-replacing automation will be unrelenting.

The second branch, the adaptive intelligence, does not exist today. There isn’t as much research devoted to this area, either because it is thought to be too fanciful or because it doesn’t address narrow, specialized applications. Funding in this area will continue to lag, and thus its progress may be slowed. However, one has to realize that achieving a sentient intelligence does not require a super-intelligence or even human-level intelligence. One only needs to observe the capabilities of other biological life forms to realize that they are indeed self-aware.

What this means, in the grand scheme of things, is that self-aware automation may arrive much sooner than anyone is expecting.

Nick Bostrom has an “Orthogonality Thesis” that states:

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.

This portends that we should be wary of super-intelligence because we cannot predict its goals. This article argues that the automation of the future will be of two kinds: the narrow specialist kind, where goals are well defined and therefore controllable, and the adaptive generalist kind, where goals are more malleable and thus less controllable. There is no disagreement here with Bostrom’s orthogonality thesis. Furthermore, one can indeed create dangerous AI of either kind.

Andrej Karpathy, in a recent interview with Andrew Ng, also discusses this eventual split (starting at 10:14 of the original video):

Update: I have since realized that there is a third branch. Kahneman writes about System 1 and System 2; apparently there is also something coined “System Zero.” I have rewritten this and added it to the book:

More coverage here: https://gumroad.com/products/WRbUs
