Self-controlling living systems are hierarchical

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics, Aug 26, 2023

Life can in principle be described as an autocatalytic reaction cycle that exchanges energy, entropy and information with its environment. This exchange is not added to the system; it is part of the system. The resulting products act back on the system or change its shape and thereby drive the autocatalysis further. Take the genome, for example: the products the genome produces (proteins) act bottom-up through all regulatory levels (cell, cell cluster, organs, CNS) and ultimately into the environment, while epigenetic influences act top-down back on the genome by switching genes on and off there, thus changing the genome in the long term.

This epigenetic and genetic change can be represented in graph theory as a change in the weighting of edges and nodes and as a change in structure density. This means that in every living system the reaction probability changes, and attractors result which in turn act as agents. Thus, in living systems, causality is reversed compared to inanimate systems.
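To make this graph-theoretical reading concrete, here is a minimal Python sketch; the regulatory levels, edges and weight values are entirely hypothetical, and an epigenetic change is modelled as nothing more than a shift in one edge weight.

```python
# Minimal sketch (all nodes, edges and weights are hypothetical): the regulatory
# levels are nodes, influences between them are weighted edges, and "switching a
# gene on or off" is modelled simply as raising or lowering one edge weight.
regulatory_graph = {
    ("genome", "cell"):        1.0,  # bottom-up: gene products act on the cell
    ("cell", "organ"):         1.0,
    ("organ", "CNS"):          1.0,
    ("CNS", "environment"):    1.0,
    ("environment", "genome"): 0.5,  # top-down: epigenetic influence on the genome
}

def apply_epigenetic_feedback(graph, edge, delta):
    """Shift the weight of one edge; the structure density changes with it."""
    graph[edge] = max(0.0, graph[edge] + delta)
    return graph

# The environment strengthens its top-down influence on the genome (a gene is switched on).
apply_epigenetic_feedback(regulatory_graph, ("environment", "genome"), +0.3)
print(regulatory_graph[("environment", "genome")])  # 0.8
```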

I had already described the relationship between the increase in structure density and the resulting reaction probability with the two equations [1]; a small numerical sketch follows them below:

1. C = \sum_{i=1}^{n} \sum_{j=1}^{m} d(i, j) \cdot w(i, j)
for local density

2. k = A \cdot e^{-\frac{E_a}{RT}} \cdot f(d)
for reaction probability
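As a minimal numerical sketch of the two equations, assuming a small, purely hypothetical weighted network: the double sum in equation 1 is interpreted as running over the weighted edges, and the structural factor f(d) in equation 2 is simply taken to be the density C itself; the positions, weights and the values of A, E_a, R and T are illustrative assumptions, not taken from [1].

```python
import math

# Hypothetical example network: node positions (for d(i, j)) and edge weights w(i, j)
positions = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}
weights = {(0, 1): 2.0, (0, 2): 1.0, (1, 2): 3.0}

def distance(i, j):
    (xi, yi), (xj, yj) = positions[i], positions[j]
    return math.hypot(xi - xj, yi - yj)

# Equation 1: local density C = sum over pairs of d(i, j) * w(i, j)
C = sum(distance(i, j) * w for (i, j), w in weights.items())

# Equation 2: reaction probability k = A * exp(-E_a / (R * T)) * f(d),
# with f(d) = C as an assumption and illustrative Arrhenius parameters.
A, E_a, R, T = 1.0e3, 5.0e4, 8.314, 310.0
k = A * math.exp(-E_a / (R * T)) * C

print(f"local density C = {C:.3f}, reaction probability k = {k:.3e}")
```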

The different system or network theories describe different aspects with their approaches, but make no distinction between living and non-living systems [2].

The crucial difference between the two is hierarchical superposition. In living systems, local densities are constantly combined into new densities. This means that in a network, topological peaks connect with other peaks to form a network of higher density and thus higher order. Networks therefore abstract from lower densities and couple only with higher densities. It is the same process that is described as abstraction in cognitive psychology.

This can be represented as N = \frac{k²}{2m}, where N is the number of levels in the hierarchy, k is the number of connections between two levels, and m is the number of elements in each level.

Read at the level of a single network, the same expression relates the total number of nodes in the network (N), the average number of connections per node (k), and the total number of edges in the network (m); an edge is a direct connection between two nodes.

The equation states that N equals the ratio of k² to 2m. It can be seen as a mathematical relationship between the structure of the network and the parameters that determine the number of connections per node.
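As a purely illustrative check of how the quantities enter the relation, with hypothetical values k = 6 and m = 9:

N = \frac{k²}{2m} = \frac{36}{18} = 2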

To describe the scenario in which all the peaks in a network connect to form a new network of higher density or higher order, one could use a mathematical equation to model this process. Here is a possible description:

Suppose we have a network with N nodes and E edges. The density of the network is defined as the ratio of the actual number of edges to the maximum possible number of edges. The maximum number of edges in a fully connected network with N nodes is N*(N-1)/2.

In each step, the nodes with the highest degree (the peaks) are selected and combined into a new node. The edges between the selected nodes are removed, and new edges are created between the new node and the neighbors of the selected nodes. This process is repeated until no peaks remain.

A possible mathematical equation describing this process could look like this:

N' = N - k (number of nodes in the new network after the merger)
E' = E - k (number of edges in the new network after the merger)
D' = E' / (N' * (N' - 1) / 2) (density of the new network)

Here k stands for the number of selected peaks that merge into a new node. N' and E' represent the number of nodes and edges in the new network, while D' is the density of the new network.

These equations describe the fusion of peaks and the associated changes in the size and density of the network. They can be adjusted to specify more particular conditions or parameters for the model; one way to make the merging step concrete is sketched below.
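The following Python sketch implements one such merging step under explicit assumptions: the graph is an adjacency dictionary, the k highest-degree nodes are taken as the peaks, and they are contracted into a single new node that inherits their outside neighbours. Because the merged node is added explicitly, the counts differ slightly from the simplified bookkeeping N' = N - k and E' = E - k above, and the example graph is invented for illustration.

```python
# Minimal sketch of the peak-merging step (assumptions: undirected graph as an
# adjacency-set dict; "peaks" = the k nodes of highest degree; they are merged
# into one new node whose neighbours are the union of their outside neighbours).
def merge_peaks(adj, k):
    peaks = sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:k]
    merged = "merged(" + ",".join(map(str, sorted(peaks))) + ")"
    neighbours = {n for p in peaks for n in adj[p] if n not in peaks}
    new_adj = {n: {m for m in adj[n] if m not in peaks} for n in adj if n not in peaks}
    new_adj[merged] = set(neighbours)
    for n in neighbours:
        new_adj[n].add(merged)
    return new_adj

def density(adj):
    n = len(adj)
    e = sum(len(v) for v in adj.values()) // 2
    return 0.0 if n < 2 else e / (n * (n - 1) / 2)

# Hypothetical example: two peaks (nodes 0 and 1), each with its own neighbourhood.
adj = {0: {2, 3, 4}, 1: {5, 6, 7}, 2: {0, 3}, 3: {0, 2}, 4: {0},
       5: {1, 6}, 6: {1, 5}, 7: {1}}
new_adj = merge_peaks(adj, k=2)
print("density before:", round(density(adj), 3), "after:", round(density(new_adj), 3))
```

In this toy example the density rises from about 0.29 to about 0.38, illustrating how contracting peaks can yield a smaller network of higher density.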

It is a scale-free process that applies to a single cell as well as to an entire organism. How the spatiotemporal expression of this process takes place depends on the general conditions, i.e. on the material and environmental conditions.

If we assume that a network with a higher density has a higher information content, and that this results in an information gradient towards the network with lower density, we can describe this mathematically. Here is one possible approach:

Suppose we have a high-density network (ND) and a low-density network (NL). To describe the information gradient, we could quantify the difference in information content between the two networks. One way to do this is to use the relative difference in information content between the two networks.

Igradient = (ID - IL) / IL, where ID represents the information content of the high-density network (ND) and IL represents the information content of the low-density network (NL). The information content here can be considered a function of the density of the network.
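As a small sketch of the gradient calculation: the functional form of the information content used below, I = D · log2(1 + N), is purely an assumption made to have something computable (the text only states that information content can be considered a function of density), and the densities and node counts are invented.

```python
import math

def information_content(density, n_nodes):
    # Assumed illustrative form: information grows with density and network size.
    return density * math.log2(1 + n_nodes)

I_D = information_content(density=0.8, n_nodes=50)  # high-density network ND
I_L = information_content(density=0.2, n_nodes=50)  # low-density network NL

# Relative difference in information content between the two networks
I_gradient = (I_D - I_L) / I_L
print(f"Igradient = {I_gradient:.2f}")  # 3.00 for these invented values
```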

Assuming that the information gradient (Igradient) between a high-density network (ND) and a low-density network (NL) has a causal effect on a particular phenomenon or variable (e.g., V), we can express this as a mathematical relationship:

V = f(Igradient)

This means that the network with higher information content determines variables of the network with lower information content and influences it accordingly. This would describe, for example, the principle of the placebo effect, in which somatic processes can be altered mentally.

Self-controlling living systems are autocatalytic and form hierarchies by combining topological peaks, through abstraction, into networks of higher density, thereby unfolding causal effects by changing the parameters of the networks of lower density.

Networks with higher densities therefore do not completely determine those with lower densities; they leave them their autonomy with respect to their mode of operation. By changing the parameters, however, they force them to adapt their way of working to these specifications.

It is therefore not an absolute hierarchy, but a relative one.

This is because the network with a higher information density also changes itself as soon as the parameters of the underlying network change. The information gradient has effects in both directions: top-down has a regulative effect, bottom-up has a constitutive effect, since the change in variables in turn changes the network with a higher information density. One could speak of a feedback loop that is regulative downwards and constitutive upwards.

To describe this feedback loop between the top-down regulative and bottom-up constitutive influences mathematically, we can use a differential equation. Here is a possible approach:

Suppose V(t) represents the variable affected by the information gradient at a given point in time t.

The top-down regulatory influence of the information gradient can be represented by a term dV_reg(t)/dt, which describes the change of V over time due to this influence.

The bottom-up constitutive influence of the information gradient can be represented by a term dV_cons(t)/dt, which describes the change of V over time due to this influence.

The feedback loop can be represented by a feedback function f(V), which describes the influence of V on the information gradient.

Together, this results in the mathematical description:

dV(t)/dt = dV_reg(t)/dt + dV_cons(t)/dt + f(V)

Here dV(t)/dt stands for the rate of change of V over time.
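A minimal numerical sketch of this feedback loop, using simple forward-Euler integration; the concrete linear forms chosen for dV_reg, dV_cons and f(V), and all coefficients, are assumptions made only to make the loop runnable, not specifications from the text.

```python
# Forward-Euler sketch of dV/dt = dV_reg/dt + dV_cons/dt + f(V)
# (all functional forms and coefficients below are illustrative assumptions).
def simulate(v0=1.0, i_gradient=0.5, dt=0.01, steps=1000):
    v = v0
    for _ in range(steps):
        dv_reg  = -0.8 * i_gradient * v     # top-down regulative influence
        dv_cons =  0.3 * i_gradient         # bottom-up constitutive influence
        f_v     =  0.1 * (1.0 - v)          # feedback of V on the gradient's effect
        v += dt * (dv_reg + dv_cons + f_v)
    return v

print(f"V after relaxation: {simulate():.3f}")  # settles near 0.5 for these values
```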

Life would thus be described ontologically in terms of systems theory.

— — — — — — — — — — — — — — — — — — — — — — — — — — -

[1] Stegemann, W., Neural swarm intelligence? Medium, 2023, https://medium.com/@drwolfgangstegemann/neural-swarm-intelligence-cf30ad78ba99

[2] See for example: Random Networks, Graphene Networks, Scale-free Networks, Small-World Networks, etc.
