How to Grow the Innate Machinery for AGI

Carlos E. Perez · Published in Intuition Machine · Nov 4, 2017


Source: https://unsplash.com/@cant89

Gary Marcus and Yann LeCun held a fascinating debate that explored their divergent approaches toward more intelligent machines. I recommend that everyone watch it, because it reveals the vast chasm that we still need to traverse.

The debate began with Gary Marcus believing that his views were consistent with Yann LeCun’s. Marcus wrote “The Algebraic Mind” in the early 2000s, criticizing the neural network approach and arguing that new kinds of functionality are required to achieve intelligent machines that match human cognitive capability. He is correct in highlighting the deficiencies of current systems. What Marcus seems to lack is insight into how to build the missing cognitive pieces. Here are Marcus’s essential cognitive capabilities (a toy sketch of a few of them follows the list):

Representation of objects

Structured, algebraic representations

Operations over variables

A type-token distinction

A capacity to represent sets, locations, paths, trajectories and enduring individuals

A way of representing the affordances of objects

Spatiotemporal contiguity / conservation of mass

Causality

Translational invariance

Capacity for cost-benefit analysis
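
To make a couple of these items concrete, here is a minimal Python sketch of the type-token distinction and of operations over variables. The classes and the rule are hypothetical illustrations of the kind of structure Marcus argues must be representable, not anything proposed in the debate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Type:
    """A kind of thing, e.g. the type 'cup'."""
    name: str
    affordances: tuple = ()          # e.g. ('graspable', 'fillable')

@dataclass
class Token:
    """A particular enduring individual of that kind, e.g. *this* cup."""
    of_type: Type
    location: tuple = (0.0, 0.0)     # crude stand-in for a spatial representation

def identity_rule(x):
    """An operation over a variable: it applies to *any* binding of x,
    the kind of algebraic generalization Marcus highlights."""
    return x

cup = Type("cup", affordances=("graspable", "fillable"))
my_cup = Token(cup, location=(1.0, 2.0))     # the token is not the type
assert identity_rule(my_cup) is my_cup       # holds for any x, seen or unseen
```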

Marcus believes that these capabilities are innate machinery that exists in humans. In the debate he cites work by the Allen Institute estimating that 90% of DNA encoding is used to reconstruct (I guess via indirect encoding) the machinery of the brain. Marcus laments the lack of focus in AI research on inventing this innate machinery. He also believes that we currently have sufficient computational resources to run this innate machinery (once it is discovered).

Yann LeCun, by contrast, believes differently. That is, we need ever more computational resources, and our estimates of the computational requirements of the human brain are grossly low. LeCun observes that progress in Deep Learning has shown that we can learn sophisticated cognitive capabilities with ever simpler machines. Historically, computer vision was constructed from human-engineered components. Since 2012, however, ConvNets have bested all the best engineered algorithms, and they did so by learning vision capabilities from scratch. LeCun argues that we should not fall back into that approach of hand-engineering innate machinery.

The Deep Learning community is not devoid of engineering new kinds of networks. On the contrary, a majority of research is about proposing new kinds of network architectures for many different domains. The approach, however, is vastly different from previous ones. Rather than fully specifying a new kind of component for a specific domain, the components tend to be more generic and are trained to conform to a domain. One could make the analogy of growing a plant: researchers only define the scaffolding and simply perform tweaks to guide a network to “grow” into a desired shape.
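
As a hedged illustration of that plant-growing analogy, here is a tiny PyTorch-style sketch in which the researcher fixes only a generic scaffold. The layer sizes, the assumed 32x32 input, and the 10-class output are invented for the example; all of the actual feature detectors are left to be learned rather than hand-engineered.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # A generic, domain-agnostic component: nothing in it encodes edges,
    # corners, or any other hand-designed visual primitive.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

# The researcher specifies only this scaffold; training "grows" the features.
scaffold = nn.Sequential(
    conv_block(3, 16),
    conv_block(16, 32),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),   # assumes 32x32 RGB inputs and 10 classes
)
```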

The concern LeCun has about defining innate machinery is that it can be sub-optimal and, even worse, it could be wrong. The guideline LeCun uses is that the components should start out as simple as possible; he invokes Occam’s Razor as the justification for his approach. He also identifies the semantic gap that needs to be bridged between sub-symbolic and symbolic systems. Furthermore, he reiterates the need for unsupervised learning, or in his own term, ‘predictive learning’.
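
To make the ‘predictive learning’ remark concrete, here is a minimal sketch of the general flavor of the idea: hide part of the input and train a model to predict it from the rest. The linear predictor and the random data are placeholders of my own choosing, not LeCun’s method.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))          # unlabeled observations
visible, hidden = X[:, :12], X[:, 12:]   # pretend the last 4 dimensions are "masked"

# Fit a least-squares predictor of the hidden part from the visible part.
W, *_ = np.linalg.lstsq(visible, hidden, rcond=None)
reconstruction_error = np.mean((visible @ W - hidden) ** 2)
print(reconstruction_error)
# No labels were needed: the supervisory signal comes from the data itself.
```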

Marcus, by contrast, believes that he understands the specific kinds of innate machinery that need to be built. However, he is unable to express how they are going to get built or even how they are going to be composed to work as a coherent whole. It is not enough to speculate about what’s missing; one has to at least articulate how the missing parts are all going to fit together. One gets the impression that Marcus’s proposal identifies the sub-routines that need to be in place but can’t express how these sub-routines are to be composed into a purposeful program.

What we are thus left with is something quite unsatisfying. Marcus has some ideas of what needs to be built, but no idea of how it is going to be composed together. LeCun knows what learning algorithms are missing, but he has very few guidelines for designing new ones. I think LeCun is more correct than Marcus in that he has at least a semblance of a prescription to get us there. Marcus only understands where we need to go, but doesn’t appear to have any insight on how to put it together.

Allow me to proclaim greater insight than these two esteemed debaters.

This, of course, is a problem that is solved at the meta meta-model. I discussed this earlier in “The Meta Model and Meta Meta-Model of Deep Learning”. We can only create this innate machinery if we have the vocabulary to create it. That is, we need to discover the building blocks of the innate machinery. If we do have these building blocks, then evolutionary methods can make this happen. The reason we need evolutionary methods here is that there are likely no first principles beyond a loose-coupling principle that can drive this design. Furthermore, the language of evolution that already exists is evidence that it is indeed possible to create the innate machinery we need for cognition. This language is already encoded in our DNA, and instances of it are present in our brains.
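
Here is a deliberately toy sketch of what a vocabulary of building blocks plus an evolutionary search over their compositions could look like. The three blocks, the target behavior, and the greedy mutation loop are all invented for illustration; the point is only that, once the vocabulary exists, search over compositions becomes possible.

```python
import random

BUILDING_BLOCKS = {            # a hypothesized vocabulary of primitives
    "add1":   lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def compose(names):
    """A candidate piece of 'innate machinery' is a composition of blocks."""
    def machine(x):
        for name in names:
            x = BUILDING_BLOCKS[name](x)
        return x
    return machine

def fitness(names, target=lambda x: 2 * x + 2):
    machine = compose(names)
    return -sum(abs(machine(x) - target(x)) for x in range(10))

# Crude evolutionary search: mutate compositions of blocks, keep the better ones.
population = [[name] for name in BUILDING_BLOCKS]
for _ in range(200):
    parent = max(population, key=fitness)
    child = parent + [random.choice(list(BUILDING_BLOCKS))]
    if fitness(child) >= fitness(parent):
        population.append(child)

print(max(population, key=fitness))   # a composition that approximates the target
```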

The vocabulary at the meta meta-model layer should conform to the Loosely Coupled Principle. Specifically, we only allow building blocks (note: innate machinery is composed of building blocks) that follow one of the loosely coupled design patterns. These are described in the table below:

These patterns constrain the design space so as to avoid any approach with tight-coupling characteristics. It is a more nuanced approach than simply invoking Occam’s Razor as LeCun has done, yet it is in the same spirit, in that any loosely coupled approach demands the minimal amount of assumptions. If you are wondering where these patterns come from, they come from the study of interoperable protocols of a previous decade (in a book that I was never able to finish).

In short, to minimize the friction of composing coordinating agents, one has to favor protocols that encourage interoperability. The protocols are of the loosely coupled kind, the kind that makes the fewest assumptions needed to make things happen. So when we speak about how this innate machinery is to be built and how it is going to coordinate to do anything, we ultimately need to start from the same set of principles.

When you have mechanisms that encourage interoperability, you have a fighting chance at some emergent self-organization. When you bake in too many assumptions, a component becomes less modular and thus less able to participate in composition.
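
A small, hypothetical sketch of that point: the only thing the components below agree on is a minimal protocol (take a dictionary of signals, return a dictionary of signals), and that minimal agreement is what lets them compose in any order. The interface and the two components are invented for illustration.

```python
from typing import Dict, Protocol

class Component(Protocol):
    def step(self, signals: Dict[str, float]) -> Dict[str, float]: ...

class Doubler:
    def step(self, signals):
        return {k: 2 * v for k, v in signals.items()}

class Offset:
    def step(self, signals):
        return {k: v + 1 for k, v in signals.items()}

def compose(components, signals):
    # Because each component assumes only the minimal protocol,
    # any subset or ordering of components still interoperates.
    for c in components:
        signals = c.step(signals)
    return signals

print(compose([Doubler(), Offset()], {"x": 3.0}))   # {'x': 7.0}
```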

The essential ingredients of natural evolution are a medium (i.e. liquid water) that gives molecules the opportunity to seek out combinations, and molecules that have a mechanism to compose with other molecules. There is a search mechanism and a mechanism for composition. At present, we have generalizations of the search mechanism, but we don’t know what those molecules are.

How is the innate machinery actually going to be created? We use evolutionary methods to create it. That therefore requires the development of proper environments in which to grow this machinery. That is, much like the board in the game of Go, we need constrained environments in which to incrementally grow innate machinery.
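
In the spirit of the Go-board analogy, here is a toy sketch of growing machinery across incrementally larger constrained environments, carrying the candidate evolved in the smaller environment into the next one as the seed. The scoring function and mutation are stand-ins I invented; they only illustrate the search-plus-composition loop, not a real proposal.

```python
import random

def score(candidate, size):
    """Toy constrained environment: reward machinery whose parts sum to `size`."""
    return -abs(sum(candidate) - size)

def evolve(seed, size, steps=500):
    best = seed
    for _ in range(steps):
        child = best + [random.randint(0, 3)]          # search mechanism
        if score(child, size) >= score(best, size):    # selection
            best = child
    return best

machinery = []
for size in (5, 20, 100):     # incrementally less constrained environments
    machinery = evolve(machinery, size)
    print(size, score(machinery, size))
```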


Explore Deep Learning: Artificial Intuition: The Unexpected Deep Learning Revolution
Exploit Deep Learning: The Deep Learning AI Playbook
