Neural swarm intelligence?

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics
Aug 19, 2023

The synchronous firing of groups of neurons is reminiscent of swarm behavior in the animal world. In both cases the question arises as to how such coherent behavior comes about and, above all, how it becomes goal-oriented.

Research on swarm intelligence involves two questions:
1. How do swarms form?
2. Why do they exhibit intelligent behavior?
Both questions remain unanswered to this day, although a number of candidate explanations exist. On the prevailing account, swarms form because individuals keep as equal a distance as possible from their neighbors, adapt to their neighbors' speed, and move toward the center of the swarm [1]; physiologically, the so-called mirror neurons are held responsible [2]. The intelligent behavior consists in the swarm's greater effectiveness at completing tasks such as foraging and protection from predators. These explanations derive exclusively from observation, yet they are already taken to be the causal relationships. On the one hand, intelligence is ascribed to the swarm; on the other, that intelligence is then explained by the actions of the individual agents, i.e. it is pushed back onto the individual, which is a tautology. No approach from the realm of the living provides an ontology of swarm behavior. Yet such an ontology would be important, for example, to explain the synchronous firing of neurons or to develop human-like artificial intelligence (AGI).
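The flocking rules cited from Reynolds [1] (keeping distance from neighbors, matching their speed, seeking the local center) can be sketched in a few lines of Python. All parameter values and the speed cap below are illustrative assumptions of mine, not taken from [1].

```python
import numpy as np

def boids_step(pos, vel, r=1.0, w_sep=0.05, w_align=0.05, w_coh=0.01, dt=0.1):
    """One update of Reynolds-style flocking: separation, alignment, cohesion."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]                      # offsets to all agents
        dist = np.linalg.norm(d, axis=1)
        mask = (dist > 0) & (dist < r)        # neighbours within radius r
        if not mask.any():
            continue
        sep = -(d[mask] / dist[mask, None] ** 2).sum(axis=0)  # keep distance
        align = vel[mask].mean(axis=0) - vel[i]               # match speed
        coh = d[mask].mean(axis=0)                            # seek local centre
        new_vel[i] = new_vel[i] + w_sep * sep + w_align * align + w_coh * coh
        speed = np.linalg.norm(new_vel[i])
        if speed > 1.0:                       # cap speed for numerical stability
            new_vel[i] /= speed
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(30, 2))
vel = rng.normal(0.0, 0.1, size=(30, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```

Note that each rule is purely local: no agent knows the state of the whole flock, which is exactly why attributing the observed coherence to the individuals is unsatisfying.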

After all, one will hardly want to ascribe to neurons the observance of rules that leads to collective behavior; intelligence is supposed to emerge precisely from collectivity. The idea that one only has to interconnect enough neurons to generate consciousness is the logical consequence, and it also exposes the lack of a real explanation behind the paradigm of self-organization. Self-organization is described only as an empirical phenomenon; how it comes about is not known.

Another shortcoming is that physical theories, such as those of dynamical systems [3], are applied to living systems without distinction. Such an ontologization is valid only for systems with very large populations, where statistical values are at stake and the degrees of freedom of the individual are negligible. In all systems in which the individual degree of freedom is greater than the force acting against it, the momentum of the living agents must be included. In other words, there is a fundamental difference between inanimate and animate matter, and this difference must be taken into account ontologically in the organization of systems.

Under this premise, the questions raised at the beginning become: how do swarms work, and how do they orient themselves? To answer them, we look for principles that can be formulated at a general, systems-theoretic level. Inanimate systems are passive; animate systems are active, both as whole systems and at the level of individual agents. Therefore no intelligence can be attributed to an inanimate system. But what is the ontological principle of intelligence? If one regards intelligence not as an objective, transcendent measure but as the ability of a living system to optimally regulate its exchange with its environment, then this ability must be describable as a property.

This property itself does not lie in the individual agent, but results from the movement of the agents.

The basis is that there is a shift in the distances between the individual agents. This movement causes random structural changes and thus a local change in structure density.

The difference between a flock of birds and synchronously firing neurons is that the flock consists of intelligent agents, while the agents in the neural network of the brain are non-intelligent. If the flock is large enough, this difference does not matter, since then the logic of the system applies, i.e. the logic of large numbers.

If the shift in distances and thus the change in density remains below a certain value, it has no consequences. If it exceeds this value, the local density acquires causal force.

What does the local change in structure density mean? A higher density means a higher probability of physical action and reaction. The flow of information becomes more intense, both in the communication-technology sense and in the physical sense [4] [5].

A higher physical probability of action and reaction, in turn, means that impulses emanate from this local field to fields with lower density. The result is an information gradient that has a controlling effect, i.e. generates causality.

If it is true that all agents in a swarm are moving towards the center, then this center is the place with the greatest density of agents. It is an attractor, but it is constantly in motion.
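This picture of a constantly moving density attractor can be made concrete with a toy model. The Gaussian kernel density estimate and the fixed step size are my illustrative assumptions: each agent climbs the local density gradient, so agents condense around the densest, shifting region.

```python
import numpy as np

def density_step(pos, bandwidth=1.0, step=0.05):
    """Move each agent one step up the gradient of a Gaussian kernel
    density estimate of the swarm, i.e. toward the local density peak."""
    diff = pos[None, :, :] - pos[:, None, :]      # diff[i, j] = pos[j] - pos[i]
    sq = (diff ** 2).sum(axis=2)
    w = np.exp(-sq / (2.0 * bandwidth ** 2))      # kernel weights
    np.fill_diagonal(w, 0.0)                      # ignore self-contribution
    grad = (w[:, :, None] * diff).sum(axis=1)     # density gradient direction
    norm = np.linalg.norm(grad, axis=1, keepdims=True) + 1e-9
    return pos + step * grad / norm               # unit step toward higher density

def mean_nearest_neighbour(pos):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(50, 2))
nn_before = mean_nearest_neighbour(pos)
for _ in range(200):
    pos = density_step(pos)
nn_after = mean_nearest_neighbour(pos)  # agents have condensed locally
```

The densest point is never fixed: as agents move, the density estimate moves with them, which matches the description of an attractor that is constantly in motion.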

Applied to the brain as a whole, we are dealing with a controlling system that integrates different stimuli and different thoughts. Such integration also takes place in every thought. I therefore refer to these systems as holonoms [6].

In contrast to a swarm, a brain is integrated into an organism and in this respect there is a (genetically) limited space that defines the system boundaries.

Self-organization in living systems can therefore be described as the generation of structure density that moves within defined system boundaries in space and time (the exceeding of which becomes pathological).

It controls the process of adaptation through concentrated movement towards compatible environmental conditions.

The more complex and differentiated the living system, the more complex and differentiated are the actions and reactions of the control, experienced as thinking and feeling.

Reflexivity in humans arises from the feedback of this thinking with social meanings and their subjective integration. In this way, thinking and feeling become self-aware.

Coherent behavior of living systems therefore entails decoherent structures, which in the next step experience a new coherence and then have a controlling effect on the system.

From this point of view, self-reflection in humans is a special case in which self-control is reflected and this reflection in turn is integrated.

The origin of the local density could be expressed as follows:
C = \sum_{i=1}^{n} \sum_{j=1}^{m} d(i, j) \cdot w(i, j)

C is the total amount of compression.
d(i, j) is the distance between nodes i and j.
w(i, j) is the weight of the edge between nodes i and j.

Maximizing the total amount of compression therefore means minimizing the weighted distance sum C: the distances between the nodes in a given area must decrease. This can be achieved by moving the nodes closer to each other or by reducing the weight of the edges between them.

The distance between two nodes is determined by the number of edges between them.

The weight of an edge is usually determined by the bandwidth or latency of the edge.

The total amount of compression is calculated by the sum of the distances between all nodes multiplied by the weight of each edge.
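As a sketch, C can be evaluated for a small hypothetical graph, reading d(i, j) as the hop distance (the number of edges on a shortest path, as stated above) and w(i, j) as a symmetric pair weight. The graph and all weight values are illustrative assumptions.

```python
from itertools import combinations
from collections import deque

# undirected graph as adjacency lists (hypothetical 4-node example)
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
# pair weights w(i, j), symmetric; values are illustrative
w = {("a", "b"): 1.0, ("a", "c"): 0.5, ("a", "d"): 0.7,
     ("b", "c"): 0.4, ("b", "d"): 0.8, ("c", "d"): 1.2}

def hops(graph, src, dst):
    """Hop distance d(i, j): number of edges on a shortest path (BFS)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))
    return float("inf")

# C = sum over node pairs of d(i, j) * w(i, j)
C = sum(hops(graph, i, j) * w[(i, j)] for i, j in combinations(graph, 2))
```

With these values the direct pairs contribute with distance 1 and the diagonal pairs (a, d) and (b, c) with distance 2, so C = 5.7; moving nodes closer (fewer hops) or lowering weights reduces C.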

The associated increased information density, expressed as an increased probability of reaction, could be expressed analogously as follows:

k = A \cdot e^{-\frac{E_a}{RT}} \cdot f(d)

k is the reaction rate constant.
A is the Arrhenius constant (pre-exponential factor).
E_a is the activation energy.
R is the universal gas constant.
T is the absolute temperature, a measure of the kinetic energy of the particles in the system.
f(d) is a function that describes the dependence of the reaction rate constant on the distance between the nodes.
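A numerical sketch of this modified Arrhenius form: the distance dependence f(d) = 1/d**2 and all constant values below are illustrative assumptions of mine, not claims from the text.

```python
import math

def rate_constant(A, E_a, T, d, R=8.314):
    """k = A * exp(-E_a / (R * T)) * f(d), with the assumed f(d) = 1/d**2:
    closer nodes imply a higher probability of action and reaction."""
    return A * math.exp(-E_a / (R * T)) / d ** 2

# halving the node distance quadruples the rate under this assumed f(d)
k_far = rate_constant(A=1e13, E_a=5e4, T=300.0, d=2.0)
k_near = rate_constant(A=1e13, E_a=5e4, T=300.0, d=1.0)
```

Under any decreasing f(d), rising local density (shrinking d) raises k, which is the claimed link between structure density and reaction probability.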

It seems to be a scale-free process [7], which of course depends on the type of agent and the medium.

The functionality of a brain manifests itself morphologically and forms the framework for further development.

Thus, the difference between an inanimate and an animate network is the self-movement and self-transformation of the animate system. It is active, its nodes are reactant. The way they are coupled is valence-based [8].

In my opinion, the network dynamics of a living system are based on the density of structure or information and thus form the ontological basis for all living things.

Thus, expressed in the two equations, we have an evolving structure density and thus a higher probability of causality occurring in a network.

It is possible that the power-law distribution plays a role here: where the frequency of reactions is already high, it grows disproportionately and thus "sucks in" causal force.

This could then be expressed as follows:

P(k) = \frac{k^{-\gamma}}{\sum_{i=1}^{N} i^{-\gamma}}

In this equation, P(k) stands for the probability that a node in the network has exactly k connections. N is the total number of nodes in the network, and \gamma is a parameter that determines the steepness of the power law distribution.

Strictly speaking, the equation describes the network's degree distribution; the mechanism that typically generates such a distribution is preferential attachment: the probability that a new node connects to an existing node grows in proportion to that node's number of connections. In other words, the more connections a node already has, the more likely it is to receive further ones.
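The distribution can be evaluated directly; the values of N and gamma below are illustrative.

```python
import numpy as np

def power_law_pmf(N, gamma):
    """P(k) = k**-gamma / sum_{i=1..N} i**-gamma, for k = 1..N."""
    k = np.arange(1, N + 1, dtype=float)
    weights = k ** (-gamma)
    return weights / weights.sum()

p = power_law_pmf(N=1000, gamma=2.5)
# P(k) falls monotonically: most nodes have few links, a few hubs have many
```

The steeper gamma is, the more sharply the probability mass concentrates on low-degree nodes, leaving only a handful of highly connected hubs.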

Conclusion: simply put, information-conducting systems with dense packing are hotspots for reactions and thus impose their type of information processing on their environment without destroying their own nature. This means that a nerve cell still works like a cell, namely by forming proteins; as part of the brain, however, it is involved in neuronal information processing and contributes its resources to a (for it) new type of network.

— — — — — — — — — — — — -

[1] Reynolds, C.: Flocks, herds and schools: A distributed behavioral model. Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '87), 1987. doi:10.1145/37401.37406

[2] Rizzolatti, G., Sinigaglia, C.: Empathy and mirror neurons: The biological basis of compassion. Frankfurt 2008

[3] Prigogine, I., Stengers, I.: Dialogue with Nature, Munich 1993

[4] Shannon, C., Weaver, W.: Mathematical Foundations of Information Theory, 1976

[5] Parrondo, J., Horowitz, J. & Sagawa, T. Thermodynamics of information. Nature Phys 11, 131–139 (2015). https://doi.org/10.1038/nphys3230

[6] Stegemann, W., What is AGI? https://www.dr-stegemann.de/was-ist-agi/

[7] Cavagna, A., et al.: Scale-free correlations in starling flocks. Proceedings of the National Academy of Sciences 107(26), 2010. https://doi.org/10.1073/pnas.1005766107

[8] Stegemann, W., Self-organization viewed from the inside, https://www.dr-stegemann.de/selbstorganisation-von-innen/
