Scaling to AGI via Selves and Conversations

Carlos E. Perez
Nov 3, 2019 · 10 min read

Arthur C. Clarke was Wrong. Any sufficiently advanced technology is indistinguishable from biology (and not magic).

Self and non-self are more fundamental than logic.

The oddest thing about Artificial Neural Networks (ANNs) is that they actually work despite being based on a completely false model of a biological neuron. Why ANNs work remains a mystery. Understanding that "why" can in turn inform us about why real biological neurons work. We can make progress by identifying characteristics that are universal across the biological and the synthetic.

One universality that we can be certain of is that both biological and artificial neurons are pattern-matching machines. The kind of pattern-matching machine depends on the purpose of the agent. In general, we can think of human brains as having five self-models, each with the goal of preserving and maintaining itself. Humberto Maturana and Francisco Varela popularized this idea as "Autopoiesis and Cognition". Different kinds of selves seek out different kinds of knowledge to achieve autopoiesis. What Maturana and Varela perhaps didn't know in 1972 is that the human brain has five distinct selves.
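To make the pattern-matching claim concrete, here is a minimal Python sketch of an artificial neuron: its weight vector is the stored pattern, and it fires in proportion to how well the input matches that pattern. The numbers are arbitrary illustrations.

import numpy as np

def neuron(x, w, b=0.0):
    """A point neuron: fires when input x matches the stored pattern w."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # sigmoid activation

# The weight vector *is* the pattern being matched.
pattern = np.array([1.0, -1.0, 1.0])
close_match = np.array([0.9, -0.8, 1.1])
mismatch = np.array([-1.0, 1.0, -1.0])

print(neuron(close_match, pattern))  # near 1: pattern detected
print(neuron(mismatch, pattern))     # near 0: pattern absent

Everything an ANN does is stacked and trained compositions of this one operation, which is what makes its success so puzzling given the richness of the biological original.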

Another universality shared by ANNs and biological neurons is what Daniel Dennett calls the "Inversion of Reasoning", or "Competence without Comprehension". This is a universal characteristic of many kinds of evolution. I define evolution here as a knowledge-discovery process. Natural evolution has many species discovering fitness. Biological brains have neurons optimizing their inference to best fit their niche. Technological evolution makes progress by finding and combining different technologies to form new, more useful technologies. Dennett's Inversion of Reasoning explains how competence-only processes can create cognition that looks like comprehension. His book "From Bacteria to Bach and Back" further explains how this same framework operates in human society.

What we have recently discovered is that biological neurons are individually unimaginably complex.

But how do individually complex neurons collaborate toward a coherent and purposeful whole? A "Theory of Intelligence" requires a richer understanding of how competence is scaled into collective intelligence. Almost all models of intelligence are uninspiring because they fail to address the notion of nano-intentionality and its collective behavior. What I am saying is that biological neurons have individually complex behavior; each neuron has a sophisticated level of intentionality. The importance and pervasiveness of intentionality in biological systems was first proposed by Rosenblueth, Wiener, and Bigelow in their 1943 paper "Behavior, Purpose and Teleology". This is one of the earliest realizations of the massive complexity of biological systems. Biology consists of cells that are not simple stimulus-response systems.

From a Michael Levin lecture: https://www.youtube.com/watch?time_continue=318&v=RjD1aLm4Thg
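As a toy illustration of what "not a simple stimulus-response system" means, the hypothetical cell below maintains an internal set point and acts to preserve it, so the same stimulus can elicit different responses depending on internal state. This is a sketch of the concept, not a biological model; the class name and numbers are invented.

class NanoIntentionalCell:
    """Toy cell with a goal: keep internal energy near a set point.

    Unlike a stimulus-response table, its action depends on internal
    state, so the same stimulus can produce different behavior.
    """
    def __init__(self, set_point=1.0):
        self.set_point = set_point
        self.energy = set_point

    def step(self, nutrient_signal):
        self.energy -= 0.1                    # metabolic cost each tick
        error = self.set_point - self.energy  # deviation from the goal
        if error > 0 and nutrient_signal > 0:
            self.energy += min(error, nutrient_signal)  # restore the set point
            return "absorb"
        return "rest"  # same stimulus, different response when sated

cell = NanoIntentionalCell()
for signal in [0.5, 0.5, 0.0, 0.5]:
    print(cell.step(signal), round(cell.energy, 2))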

One reason that Wiener's Cybernetics approach was lost to history is that a competing narrative, coined "Artificial Intelligence", focused instead on the emerging paradigm introduced by digital computers. Computers essentially made possible the capability of "Artificial Logic". This appealed to the prevalent Western bias for Descartes' dualism. Thus the original idea of achieving artificial intentional machines was hijacked by an alternative narrative, which led to decades of favoritism for the GOFAI approach to intelligence. Intelligence is fundamentally built bottom-up (as proposed by Wiener), not top-down (as proposed at the Dartmouth conference).

Biological systems are not like technological systems, which are designed using an additive construction process. Rather, they follow a very different process: biology works by differentiation of existing nano-intentional components. Billions of years of evolution have created sophisticated cells that are able to differentiate into a multitude of capabilities relevant at different scales of competence.

Natural evolution and technological evolution have commonalities. However, they differ in design. There are two dimensions of design: one involves the availability of building blocks, and the other the capability of composing those building blocks together. Biological systems are more limited in what is available as compared to technology. Think of it as a spectrum rather than one over the other.

Biological innovation is unlike technological innovation in that it primarily employs parallelized invention processes rather than sequential ones. The reason human invention tends to favor sequential processes is that our minds require chunking to understand complexity. Note that human civilization innovates using parallel threads, but there is always a convergent point where a single mind stitches together the final solution.

Scaling intelligence, however, requires coordinating parallel cognitive processes to drive faster innovation. This parallel engine of innovation is present in all kinds of evolutionary processes (i.e. natural, neural, and cultural evolution). At its core, nano-intentional agents coordinate via complex conversations.
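A hedged sketch of what such coordination might look like: many agents search in parallel and "converse" by occasionally moving toward a better-scoring peer's solution, with no central planner. The fitness function and the adoption rule are arbitrary choices for illustration.

import random

def parallel_search_with_conversation(n_agents=20, rounds=50, target=0.7):
    """Agents each hold a guess; they improve locally in parallel,
    then 'converse' by nudging toward a better-scoring peer."""
    random.seed(0)
    guesses = [random.random() for _ in range(n_agents)]
    score = lambda g: -abs(g - target)  # fitness: closeness to the target

    for _ in range(rounds):
        # Parallel local discovery: each agent mutates independently.
        guesses = [max(g + random.gauss(0, 0.05), g, key=score) for g in guesses]
        # Conversation: each agent listens to a random peer and moves
        # toward the peer if the peer's guess scores better.
        for i in range(n_agents):
            peer = guesses[random.randrange(n_agents)]
            if score(peer) > score(guesses[i]):
                guesses[i] += 0.5 * (peer - guesses[i])
    return sum(guesses) / n_agents

print(parallel_search_with_conversation())  # converges near 0.7

No single agent comprehends the whole problem; the conversation is what turns many local competences into a collective solution.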

This is where we discover the limitations of the methods of physics. To understand emergent innovation arising from comprehension-free evolution, one needs to understand the nature of generative modularity.

When you work your way up from quarks to living organisms, you eventually arrive at the invention of the "self". Nano-intentionality by definition requires the encapsulated, self-preserving notion of a self. A self manages its interior and cooperates with its exterior environment.

I use the notion of "beyond the laws of physics" in the same way that Stuart Kauffman uses it. That is, emergent phenomena are difficult to state within the current framework of physics. That is why I'm exploring Constructor Theory, which is a different way of explaining physics.

Brains consist of multitudes of selves in conversation with each other, bubbling all the way up into a manifestation of consciousness. I am motivated to delineate five selves (actually ten, since we have to consider the split brain) to tease out the intrinsic motivations of each self. It's unclear to what extent these motivations are in conflict, but if they are, then it is through conversation that coherence in thought is achieved. Biologically, these selves reside in separate areas of the brain, and thus a kind of rich conversation between the areas is necessary. I will avoid the notion of Integrated Information Theory (IIT), which demands whole-brain dense integration.

One cannot understand human cognition without including the notion of multiple models of self. Instead of “Turtles all the way down”, biological brains are “selves and conversations” all the way up.

Biology invented "selves and conversations" billions of years before Homo sapiens. Survival at the cellular level doesn't require less cognitive ability than survival at the scale of human cognition. It is simply cognition at a different scale, with different problems.

Multicellular creatures are not necessarily more robust than single-celled creatures. It is just that multicellular creatures employ a different strategy toward fitness. Wired brains with neurons are not necessarily more fit than liquid brains (e.g. bee colonies and the immune system). It's just that they are configured to solve different problems.

I speculate that both neurons and T-cells (part of the immune system) have the same cognitive machinery. They differ in that neurons have connectivity and are recruited for narrower tasks. Neurons may just be domesticated versions of more feral T-cells; domesticated also in the sense that they are all competing for relevance in a social structure that is explicitly defined by neural wiring.

This is perhaps how Artificial Neural Networks relate to biological neurons. The narrow complexity of an ANN might be just a slice (or projection) of the overall complexity of a biological neuron. Perhaps it's just the slice that performs the most rudimentary pattern matching. Yann LeCun might chastise this as a kind of glorified template matching. This seems like a sound way to reconcile the limited complexity of ANNs with the massive complexity of real biological neurons. ANNs may be just a slice of something orders of magnitude more complex.
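One way to picture the "slice" claim, under invented assumptions: model a biological neuron as a two-layer computation with nonlinear dendritic branches, and a standard ANN unit as its linearized projection. This toy is mine, not an established model; the shapes and weights are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

def dendritic_neuron(x, W_branch, w_soma):
    """Toy 'biological' neuron: each dendritic branch applies its own
    nonlinearity before the soma combines them (a two-layer computation)."""
    return np.tanh(w_soma @ np.tanh(W_branch @ x))

def point_neuron(x, w, b=0.0):
    """The ANN unit: a single weighted sum and squashing, one 'slice'."""
    return np.tanh(w @ x + b)

n_inputs, n_branches = 16, 8
W_branch = rng.normal(size=(n_branches, n_inputs))
w_soma = rng.normal(size=n_branches)
w_slice = w_soma @ W_branch  # first-order linearization of the full model

x = rng.normal(size=n_inputs)
print(dendritic_neuron(x, W_branch, w_soma))  # full toy neuron
print(point_neuron(x, w_slice))               # its point-neuron slice

For small inputs the two agree, but as inputs grow the branch nonlinearities diverge from the slice, which is the sense in which the ANN unit captures only a projection of the richer computation.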

But how does this "selves and conversations all the way up" model of the brain scale up its intelligence? One compelling approach is hinted at by Chuck Pezeshki, which he calls Structural Memetics. Collective intelligences are in many ways more intelligent than their individual constituents. What, then, is the connectivity (or social network) that binds them together?

Pezeshki ties together the social structure of human organizations with how knowledge is structured and how this drives the design process. He is inspired by Conway's Law, which states that:

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.

In the same way, my conjecture is that the organization of nano-intentional neurons is constrained to produce cognitive behavior that reflects the conversational structure of those organizations. Think of it as Conway's Law applied to biological brains instead of human organizations.
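A toy rendering of this conjecture: a design interface is only feasible where a communication channel already exists between the producing groups. The graph below (hypothetical "regions" and their conversations) is invented for illustration.

# Conway-style constraint, as a toy: the produced design can only
# contain interfaces where a conversation already happens.
communication = {  # who converses with whom (hypothetical regions)
    "perception": ["memory", "action"],
    "memory": ["perception", "action"],
    "action": ["perception", "memory"],
}

def feasible(design_edges, comm_graph):
    """A design interface (a, b) is feasible only if group a
    already talks to group b in the communication graph."""
    return all(b in comm_graph.get(a, []) for a, b in design_edges)

print(feasible([("perception", "memory"), ("memory", "action")], communication))  # True
print(feasible([("perception", "planning")], communication))                      # False: no such conversation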

Pezeshki adopts ideas from Spiral Dynamics to identify different social structures.

The brain, of course, is structured differently from social organizations; its structure is shaped by evolution. Paul Cisek argues that a taxonomy derived from the evolutionary process is more informative than the conventional taxonomy of behavior (consisting of perception, cognition, and action). He proposes an alternative taxonomy that follows the evolutionary development of skills:

https://link.springer.com/article/10.3758%2Fs13414-019-01760-1

The evolution of the biological brain is unique. If Conway's Law is relevant for social structures, then can this evolutionary structure of the brain inform the nature of the brain's own knowledge structures? Can it explain the many cognitive biases that humans exhibit? The basic research agenda here is that collective organizational behavior leads to emergent behavior that reflects the original structure.

There are also many organizational methods that we have invented for human organizations to scale intelligence. Are any of these methods or principles also employed in the biological brain? It is indeed odd to use human social structures and processes to inform how the brain works. It's a bit of a leap; however, it shouldn't be counted out entirely. We have so few models of collective intelligent behavior in biology that we might as well mine what we already have for human organizations.

Yes, human brains inform human organizations and not the other way around. It does appear strange, then, to use human social structure to inspire an understanding of biological cognition. Perhaps, though, there are universal features in social structures. The assumption of nano-intentionality makes it reasonable that the effectiveness of human social structure might also translate to the social structure of nano-intentional neurons.

Despite the strangeness of this approach, Pezeshki is able to link social organizational dynamics with different parts of the triune brain:

https://empathy.guru/2019/04/06/what-is-structural-memetics-and-why-does-it-matter

Spiral Dynamics uses the term Value Meme (aka "v-Meme") to represent the value sets associated with different kinds of social organizations. Like a meme, it's a pattern of thought that is pervasive and shared across a given social organization. It's analogous to the "intrinsic motivators" of each self-model. An intrinsic motivator drives a learning strategy. What we have, therefore, are five self-models, each with a strategy represented by a Value Meme. Each self-model is driven bottom-up by nano-intentional agents and evolves as a consequence of sharing a common Value Meme. With the right abstractions (and some secret sauce regarding Turing Patterns), such an approach seems reasonable.
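The sketch below is purely illustrative: the five self-model names and their v-Meme pairings are my placeholder assumptions, not an established taxonomy. It only shows the shape of the idea, that each self-model carries an intrinsic motivator that selects what counts as reward and thereby drives a learning strategy.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SelfModel:
    """One self-model: an intrinsic motivator (its 'Value Meme')
    selects which experiences it treats as rewarding."""
    name: str
    value_meme: str
    intrinsic_reward: Callable[[dict], float]

# Hypothetical names and pairings, for illustration only.
selves = [
    SelfModel("homeostatic", "survival", lambda obs: -obs.get("energy_error", 0.0)),
    SelfModel("sensorimotor", "mastery", lambda obs: obs.get("prediction_gain", 0.0)),
    SelfModel("social", "belonging", lambda obs: obs.get("imitation_match", 0.0)),
    SelfModel("narrative", "status", lambda obs: obs.get("story_coherence", 0.0)),
    SelfModel("reflective", "truth", lambda obs: obs.get("model_accuracy", 0.0)),
]

# The same observation is rewarding to some selves and invisible to others.
observation = {"energy_error": 0.2, "prediction_gain": 0.5}
for s in selves:
    print(s.name, s.value_meme, s.intrinsic_reward(observation))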

It's important to note that this layering of self-models is not a completely new idea. Paul Verschure has been developing a similar model, Distributed Adaptive Control (DAC), for twenty years. DAC is a theory that has been supported by multiple neuroscience experiments. It consists of a multilayer sensorimotor loop that is influenced by self-models at different layers. One can find in the triune brain (i.e. the reptilian, paleomammalian, and neomammalian cortex) areas that correspond to sense, self, and action. Evolution builds the same layers but with greater adaptability at each layer.

https://www.sciencedirect.com/science/article/abs/pii/S2212683X12000102
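As a caricature only: DAC's layered sensorimotor loop can be sketched as nested control layers, each able to override the one below. The layer names echo DAC's reactive/adaptive/contextual vocabulary, but the rules here are invented toys, not Verschure's model.

def reactive_layer(stimulus):
    """Prewired reflex: a fixed mapping from stimulus to action."""
    return "withdraw" if stimulus == "pain" else "approach"

def adaptive_layer(stimulus, learned):
    """Learned associations can override the reflex (toy rule)."""
    return learned.get(stimulus, reactive_layer(stimulus))

def contextual_layer(stimulus, learned, plan):
    """Goal and memory context can override both lower layers."""
    return plan.get(stimulus, adaptive_layer(stimulus, learned))

learned = {"bell": "approach"}  # e.g. a conditioned cue
plan = {"bell": "wait"}         # a contextual goal overrides the habit
for s in ["pain", "bell", "light"]:
    print(s, "->", contextual_layer(s, learned, plan))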

What's new is this idea of finding inspiration in human organization to drive ideas about collective intelligence. It's also interesting how this informs the nature of consciousness. Julian Jaynes's theory of the Bicameral Mind is a natural consequence of it. Social structure molds our consciousness, and it's indeed possible that in more ancient, authoritarian societies the consciousness described by Jaynes was more prevalent.

Further Reading

Artificial Intuition: The Deep Learning Revolution

Deep Learning Patterns, Methodology and Strategy
