Generative Modularity and General Intelligence — Part 2

Charles Darwin ended the first edition of “On the Origin of Species” with this sentence:

There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

Previously, I wrote about how complexity in the universe emerges as a consequence of twelve principles of modularity. Here I will examine how life and general intelligence arise from modularity principles. In other words, the ‘several powers’ that Darwin may have failed to reference explicitly. I summarize the principles from my previous post here for your convenience:

1. There is something that binds and something to be bound. Binding creates identity and interaction.

2. The universe evolves in the direction of higher entropy.

3. A medium with fluid characteristics drives experimentation and thus innovation.

4. Competitive forces lead to destruction that then leads to new composite creations.

5. Binding is selective and drives the fitness of a context.

6. Composite components lead to richer interactions and greater possibilities.

7. Intrinsic adaptivity leads to utility that leads to ubiquity.

8. Adaptive components reduce entropy by learning the regularities of the environment.

9. Evolution requires only what is adjacently possible.

10. Robust error resistant encoding preserves learning across component lifespans.

11. Novel learning is gained by the symbiosis of complex behaviors that were learned in different contexts.

12. Innovation is contextual; complex or simple solutions are driven by what is possible, not by what is simple or optimal.

We are going to embark on a more challenging endeavor. This is where we reveal how the inanimate universe gravitates towards the creation of life and how life creates the conditions for general intelligence to emerge. The overarching reason why complexity exists is that specific modularity principles (alternatively, constraints) must emerge that encourage cognitive innovation. I wrote previously about a theory (Dissipative Adaptation) of how life originates from simple molecules. Evolution leads to structures that create memories of their environments, and eventually to structures that become self-preserving and self-replicating (see: Constraint Closure).

Terrence Deacon proposes a theory of biological information:

https://en.wikipedia.org/wiki/Incomplete_Nature

that I find useful in framing the differences in behavior of inanimate objects, biological entities and cognitive entities. At the first level, Shannon information (capacity) describes the preservation of information signaling, measured in Shannon entropy. The second kind of information (referential) is described by Boltzmann entropy, which describes how dynamical systems tend towards equilibrium. Ilya Prigogine had previously proposed that order is created in far-from-equilibrium conditions. The final kind of information (usefulness) describes the semantics of information that conveys the constraints available to an agent. That is, what information is available that can tell an agent of the constraints or availability of possibilities that permit an action to be performed. Information that is useful in biology tells an organism the constraints available for it to make a prediction of the best action to take. I’ve previously discussed in greater detail how Shannon information is inadequate and why we need to consider constraints between an agent and its environment. Deacon’s hierarchy is unfortunately incomplete because it does not address the process of learning. A hint at a much more comprehensive model is reflected in information asymmetry.
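The first level of this hierarchy, Shannon entropy, is easy to make concrete: it measures how unpredictable a stream of symbols is, in bits per symbol. As a minimal sketch (the function name and example strings are my own illustration, not from Deacon):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p(x) * log2(p(x)))."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four equally likely symbols carry the maximum 2 bits per symbol;
# a perfectly predictable message carries 0 bits.
print(shannon_entropy("abcd"))  # 2.0
print(shannon_entropy("aaaa"))  # 0.0 (printed as -0.0)
```

Note what this measure deliberately ignores: it says nothing about what the symbols refer to or whether they are useful to an agent, which is exactly why Deacon's second and third levels are needed.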

Biological life is distinct from inanimate objects in that behavior is encoded, and its purpose is to employ the encoded behavior to survive and replicate itself. In other words, all biological life has purpose. This purpose is coded in DNA for complex life. An acorn grows into a tree because it has behavior coded into it that leads to its purpose (BTW, having purpose doesn’t imply the entity is aware of its purpose). The behavior that is encoded will be the behavior that most likely leads to survival within an environment. So if you take all the random encoded behavior and give it enough time (and the ability to replicate), which organism would be the most likely to become ubiquitous? The answer is that the organism most likely to survive and replicate within an environment is the most ubiquitous one. The thirteenth principle of modularity: Organisms that are ubiquitous are those with predictive behavior that is compatible with their environment.

An organism’s purpose, or its encoded behavior, leads to a different kind of causality. Inanimate objects (atoms and molecules) react to their environment based on the hard-coded laws of physics. The mechanism that leads to richer interactions is the composition of complex molecules. The configuration of a molecule defines the kinds of interactions it is capable of performing. More concretely, its behavior is encoded in its configuration and is therefore static. Behavior change (i.e. dynamic behavior) occurs only when components are reconfigured. For example, when two hydrogen atoms bind with an oxygen atom to create a water molecule, the new configuration has behavior different from that of its components.

In contrast, biological life doesn’t require reconfiguration to exhibit dynamic behavior. Analogous to a Von Neumann architecture that can run different programs without reconfiguring its circuitry, biological life runs on its own instruction set that drives its dynamic behavior. Unlike a Von Neumann computer, however, life’s instruction set is not stored and executed in a centralized processing unit (i.e. CPU); rather, it is distributed throughout its sub-components. The better computer architecture analogy would be what is known as a Dataflow architecture. In a Dataflow architecture, multiple parallel processes are active, and instructions are carried in tokens that are executed only when all the preconditions of an instruction are satisfied. Nature’s computation is intrinsically massively parallel. The fourteenth principle of modularity: Dynamic behavior is enabled by decoupling behavior from configuration.
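The dataflow firing rule described above can be sketched in a few lines. This is a toy illustration (the `Node` class and its names are my own, not a real dataflow machine): an instruction fires the moment all of its input tokens have arrived, with no central program counter deciding the order.

```python
class Node:
    """A dataflow instruction: fires when all input tokens have arrived."""
    def __init__(self, name, op, inputs):
        self.name, self.op, self.inputs = name, op, inputs
        self.tokens = {}  # input tokens received so far

    def receive(self, port, value, fired):
        self.tokens[port] = value
        if len(self.tokens) == len(self.inputs):  # all preconditions satisfied
            args = [self.tokens[p] for p in self.inputs]
            fired.append((self.name, self.op(*args)))

# Two independent adds fire as soon as their own tokens are complete,
# regardless of the order in which tokens were produced.
fired = []
add1 = Node("add1", lambda a, b: a + b, ["a", "b"])
add2 = Node("add2", lambda a, b: a + b, ["c", "d"])
add2.receive("c", 10, fired)
add1.receive("a", 1, fired)
add1.receive("b", 2, fired)   # add1 fires here
add2.receive("d", 20, fired)  # add2 fires here
print(fired)  # [('add1', 3), ('add2', 30)]
```

The point of the sketch is the absence of a scheduler: availability of data, not position in a program, determines execution order, which is what makes the model intrinsically parallel.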

Michael Levin’s discoveries in bioelectric computation reveal how nature employs electrical connectivity to coordinate the morphogenesis of life forms. That is, how organisms coordinate their own construction. How does complex life create limbs, organs and brains? Apparently, there is a very primitive organizational substrate that exhibits a computational mechanism that builds structure. This is the kind of cognition that is commonly referred to as swarm intelligence. Swarm intelligence is emergent global behavior that arises from the interaction of many decentralized, autonomous participants. Ant colonies, the immune system and the human economy are examples of swarm behavior (see: Stigmergy). The fifteenth principle of modularity: Adaptive behavior is driven by decentralized intelligence.
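Stigmergy can be illustrated with the classic toy ant-colony model (an assumption of mine as an illustration; this is not Levin's bioelectric mechanism): agents never communicate directly, yet they converge on a shared solution purely by reading and reinforcing traces left in the environment.

```python
import random

random.seed(0)
# The shared environment is the only "memory" the agents have.
pheromone = {"short": 1.0, "long": 1.0}

def choose_path():
    """Each agent picks a path in proportion to the pheromone on it."""
    total = sum(pheromone.values())
    return "short" if random.random() < pheromone["short"] / total else "long"

for _ in range(200):
    path = choose_path()
    # The short path is traversed faster, so it is reinforced more per trip.
    pheromone[path] += 1.0 if path == "short" else 0.5
    for p in pheromone:  # evaporation keeps the system adaptive
        pheromone[p] *= 0.99

# Positive feedback through the environment: the colony converges
# on the short path without any central coordinator.
print(pheromone["short"] > pheromone["long"])
```

No agent holds a map and no agent issues orders; the global decision emerges from many local read-modify cycles on shared state, which is the essence of the fifteenth principle.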

More advanced species eventually develop a central intelligence through the evolution of a central nervous system. A CNS is where behavior is coordinated from a center that receives sensory input and aggregates this information to arrive at an action. Despite this centralization, it is important to note that the cognitive behavior of biological systems is very different from that of human computers. Swarm intelligence continues to be prevalent in biological brains, and it is still unresolved how this central coordination converges. The coordination problem perhaps relates to how intrinsic motivation leads to goal achievement. With a CNS, the behavior of an organism becomes top-down; Terrence Deacon calls this teleological behavior.

The working of the entire universe is identical to computation. This isn’t actually a profound statement. All it means is that the universe evolves by respecting causality. It simply states that there is always a causal link from an event that happens to events that happened in the past. The only time events ‘just happen’ without a cause is in quantum mechanics. However, this happens with small enough probability that its causal effect is naturally indiscernible (unless it is engineered, as in semiconductors). It is thus sufficient to assume causality holds and therefore everything is computational.

Biological life appears to violate a modularity principle. This principle is captured in the 2nd Law of Thermodynamics (see: Schrödinger’s What is Life?): everything tends towards entropy. Life, by contrast, appears to create order from chaos. What life actually does is exploit the existing order found in its environment and employ it to create order within its own structure. Life still creates greater entropy in the universe, but it creates order internally. Said more simply, useful energy (i.e. food) is taken in and returned as less useful energy (i.e. waste). It takes energy to maintain order.

Simple life is driven by behavior that is based on stimulus and response. However, more complex life requires anticipative behavior. That is, complex life requires an ability to predict what will happen in the future and then behave differently based on its predictions. This kind of dynamic behavior, where behavior doesn’t appear causal, is an area that reductionists have historically steered clear of. I suspect it is avoided because of its complexity and thus its intractability. Reductionists apply their investigative methods to simple, tractable problems and consider complex, irreducible problems to be out of scope. Out of scope should not imply that these problems are invalid for scientific investigation. Rather, it should acknowledge the limitations of our current investigative methods. Being perplexed about how to reason about beings that make predictions isn’t a good justification for ignoring the subject in its entirety. Life still exhibits causal behavior; it is just more complex, such that the causal relationships are less obvious and are encapsulated in internal predictive encoded behavior. The sixteenth principle of modularity: Adaptable behavior relies on predictions of the future.

Evolution leads to the central nervous system and a centralized brain. This brain continues to evolve higher and more complex layers: from the brain stem, to the limbic system and finally the neocortex. The evolution of each layer is a consequence of the increasingly competitive demands of the environment. Essentially, there is an arms race in cognitive development, where more adaptive and nimble species are able to triumph. Evolution doesn’t just grow one kind of brain that gets bigger over time. Rather, evolution builds the structure that is required at the time, and new structure is created to compensate for functionality not present in the older structure. Evolution arrives at stepping stones that may or may not be needed in future versions of a species. Because humans came into existence via evolution, we are saddled with a lot of cognitive baggage as a byproduct of the process.

Accurate prediction requires the construction of mental models of causality. The neocortex enables mammals to learn to imitate, imagine virtual worlds and explore alternative behaviors in those worlds. These cognitive capabilities give mammals a richer repertoire of survival skills. A mammal does not need to live an experience to learn from it. Rather, it can learn from a proxy of the experience. The ability to learn by proxy, without actually living an experience, dissociates a living being from harmful or even existential risk. Prior to the neocortex, learning was driven by evolution and thus required surviving an experience. The seventeenth principle of modularity: Learning must be achievable separate from experience.

Higher level cognitive behavior is a consequence of many environmental factors. In general, more intelligent species are those that live in social environments. Primates, elephants, whales, dolphins and seals are all social animals that have large brains. A social environment requires minds to simulate other minds to arrive at good predictions of how best to interact within a social group. Human intelligence derives from the need for social intelligence, which also happens to be both competitive and cooperative intelligence. Survival is dependent on the individual understanding the nature of the collective. The eighteenth principle of modularity: Social cognition becomes essential to survival.

In general, animate biological creatures require a cognitive notion of self that drives their purpose of self-preservation and replication (see: Von Neumann Self-Replicating Automata). This notion of self varies in sophistication, but it is most sophisticated in social animals, where self-awareness emerges. That is, being able to simulate other selves leads to the ability to simulate oneself. Thus, one becomes self-aware of one’s own cognition. This self-awareness of one’s own cognition is otherwise known as consciousness. We’ve gone full circle: understanding of the collective now becomes understanding of the self. Understanding of self requires the recognition of identity. That is, the very first principle. The nineteenth principle of modularity: Mental models of self lead to consciousness.

https://medium.com/intuitionmachine/moravecs-paradox-implies-that-agi-is-closer-than-we-think-9011048bc4a1

What we begin to realize is that the information modularity that exists in our universe becomes available to us in our own virtual worlds (i.e. mental models). Human cognition handles concepts and ideas in a modular way, connecting concepts of high utility and discarding useless concepts. Human cognition is in fact a mirror analog of the process of evolution. Evolution concerns itself with real entities, while cognition concerns itself with virtual entities. Evolution requires the fluidity of a medium, while the mind requires fluidity of thought. Humans spend considerable time as infants maintaining the level of neural plasticity required for advanced cognitive development. Other primates do not have this luxury. The mind continuously explores connections between concepts (Douglas Hofstadter calls this analogy making) to create newer concepts of greater utility. All our cognition is based on the prior concepts that have been created in our minds.

Now that we have human general intelligence, we can expand this narrative out to the development of language, technology and civilization. The same principles of modularity remain applicable at each higher stage of development. This is a narrative that connects the Big Bang all the way to the emergence of general intelligence. It is intriguing that we aren’t able to make these connections without the more modern understanding of information modularity. These principles are universal and render a satisfying narrative that connects the inanimate universe with a universe that has become self-aware.

Notes

Shannon’s entropy relates to the coding and decoding of information, so principles 10, 14 and 17 are relevant to it. Boltzmann’s entropy, that is, the evolution of bulk matter, is relevant to principles 2, 3, 7 and 13. Deacon’s significant information is relevant to 5, 6, 8, 9, 12 and 13. My proposed information asymmetry relates to 4, 8, 9, 11, 15 and 16. “Generative Modularity” remains a work in progress, and I do hope to formulate a more holistic framework in the future. More detail on this can be found in “Generative Model”. There are a lot of moving pieces, but I can see them all coming together as a coherent whole.

Further Reading

Principles of Minimum Cognition

The grammar of mammalian brain capacity