# An Attempt at Computable Emergence in Complex Systems

In one of our most recent papers, we attempt to define a descriptive but, critically, computable notion of emergence in complex systems. In particular, we apply it to neurobiological and artificial neural networks. (The revised peer-reviewed version of the paper will be out relatively soon.)

Emergent effects of complex systems are, broadly speaking, phenomena present in the system that are not shared by, and cannot be explained by, considering the system’s constituent components in isolation. This is often expressed as the idea that the ‘whole is more than the sum of its parts’.

The concept of emergence in complex systems has been the subject of extensive study and is important in most of the natural and physical sciences and, increasingly, in engineering. Importantly, as I have written about in the past, there is a critical distinction between what we call a ‘complex system’ in this context compared to a merely ‘complicated system’.

> “Falling between order and chaos, the moment of complexity is the point at which self-organizing systems emerge to create new patterns of coherence and structures of relation.”
>
> Mark C. Taylor, *The Moment of Complexity*, 2003

While there is no universally accepted technical definition of emergence, there are a number of intuitive ideas and concepts that capture its essence and find commonality across disciplines. Probably the most important is the notion that novel properties, patterns, and behaviors are not explicitly or predictably present in the constituent parts that make up a system or in the interactions between the parts. This phenomenon captures the idea that emergent properties are not reducible to or explainable solely by the properties of individual components, but rather, arise from the interactions and organization of those components within the system in highly non-obvious ways.

It also emphasizes the non-linearity and unpredictability associated with emergence. With the recent performance of large language models, emergence is also becoming an interesting topic in machine learning and artificial intelligence. Researchers have observed a surge in performance when the scale of these models reached a certain threshold that seemingly could not be predicted by extrapolation. This new work is opening up interesting research avenues to explore, and hopefully eventually explain, the unreasonable effectiveness of massive deep learning architectures.

Still, a clear limitation of current work on emergence across essentially all fields and applications is that it remains at a phenomenological level: for example, a shift in system behavior as the number of computational units increases. Measures derived from observations of such macroscopic effects do not yield mechanistic insight into why such emergence occurs, or how to anticipate and predict it. To address this, we need new measures that can describe and predict emergence on a more fundamental level, capable of incorporating structural information and interactions between components. In other words, we need measures of emergence that are actually computable.

Ideally, new theoretical models and numerical simulations, grounded in a mechanistic understanding of emergence and equipped with measures that capture causal relationships, could explain how a system’s structure changes when emergent effects occur. Eventually, this might support the intentional engineering of emergent effects.

There have been several attempts at establishing frameworks that describe and measure emergence. Some approaches are based on information-theoretic notions, but in most cases, the structure and interactions between participating components are not explicitly included in the computation of emergence, thus falling short of providing mechanistic insights.

As such, a universal and computable measure of emergence applicable to real-world systems still does not exist. Of particular interest and importance to our work here, Adam and Dahleh took a categorical (i.e., category-theoretic) approach, proving and quantifying emergence as the non-commutativity of algebraic operations, which is equivalent to a ‘loss of exactness’. Qualitatively, this attempts to capture the notion that the interactions between components, and the interactions within the framework itself, can be represented within a highly abstract categorical mathematical framework.
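To make the ‘loss of exactness’ idea concrete in standard homological-algebra terms (a schematic illustration, not the full construction of Adam and Dahleh or of our paper): a sequence of maps is exact when the image of each incoming map equals the kernel of the outgoing one, and cohomology measures the failure of that condition.

```latex
% A sequence of maps
%   A --f--> B --g--> C
% is exact at B precisely when  im f = ker g.
% Cohomology quantifies the failure of exactness:
A \xrightarrow{\;f\;} B \xrightarrow{\;g\;} C,
\qquad
H \;=\; \ker g \,/\, \operatorname{im} f,
\qquad
H = 0 \;\iff\; \text{the sequence is exact at } B.
```

A nonzero $H$ is thus an algebraic certificate that something is present at the level of the whole sequence that is not accounted for by the maps alone, which is the sense in which ‘loss of exactness’ tracks emergence.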

**Our work in this paper builds on these ideas and this approach. Specifically, we establish a framework for studying and evaluating emergence — which we refer to as generativity to be more precise — that incorporates an abstract categorical mathematical notion and descriptive framework, but also, critically, a practical computational metric capable of ascribing an element of emergence and self-organization to real-world systems — in particular, focusing on neurobiological and artificial neural networks.**

Technically, a system’s ability to sustain emergent effects is represented by the mathematical structure of derived functors and cohomologies, which are computable. Compared with purely numerical metrics that evaluate emergence, our framework is richer in information because the algebraic structure encodes more structural information about the system.
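As a toy illustration of what ‘computable’ means here (a minimal linear-algebra sketch over the reals, not the derived-functor machinery of the paper): for a network viewed as a one-dimensional complex, the dimensions of the cohomology groups H⁰ and H¹ follow directly from the rank of the incidence matrix.

```python
import numpy as np

# Incidence matrix of a triangle graph: 3 vertices, 3 oriented edges.
# Edge columns: (0 -> 1), (1 -> 2), (2 -> 0).
B = np.array([
    [-1,  0,  1],
    [ 1, -1,  0],
    [ 0,  1, -1],
], dtype=float)

n_vertices, n_edges = B.shape
rank = np.linalg.matrix_rank(B)

b0 = n_vertices - rank   # dim H^0: number of connected components
b1 = n_edges - rank      # dim H^1: number of independent cycles

print(b0, b1)  # triangle: 1 component, 1 independent cycle
```

The point is simply that these algebraic invariants reduce to rank computations, so they scale to real network data rather than remaining purely abstract.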

We then also compared this framework with existing information-theoretic measures of emergence through numerical experiments on networks of varying connectivity, and found a correlation that supports our theory’s ability to capture emergence more generally.
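To convey the flavor of such an experiment, here is a minimal sketch that sweeps network connectivity and computes total correlation (multi-information), one simple information-theoretic quantity; the dynamics, parameters, and choice of measure are illustrative assumptions, not the paper’s actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def simulate(adj, steps=200):
    """Noisy threshold dynamics on a binary-state network."""
    n = adj.shape[0]
    state = rng.integers(0, 2, size=n)
    hist = np.empty((steps, n), dtype=int)
    for t in range(steps):
        drive = adj @ state
        state = (drive > drive.mean()).astype(int)
        flip = rng.random(n) < 0.05      # small noise keeps the dynamics mixing
        state = np.where(flip, 1 - state, state)
        hist[t] = state
    return hist

n = 8
for p in (0.1, 0.5, 0.9):                # sweep connection density
    adj = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(adj, 0)
    hist = simulate(adj)
    joint = hist @ (1 << np.arange(n))   # encode each joint state as an integer
    # Total correlation: sum of marginal entropies minus joint entropy (>= 0).
    gap = sum(entropy(hist[:, i]) for i in range(n)) - entropy(joint)
    print(f"p={p}: total correlation = {gap:.2f} bits")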

The work in our paper only scratches the surface. From an intuitive and philosophical perspective, the notion of ‘emergence’ has very old and deep roots. It is at the core of what complex systems are capable of doing, from ants to the human brain, in spectacular and seemingly magical ways. Take a look at this video and see for yourself …