Building Blocks of Artificial Biological Intelligence

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics · Oct 23, 2023

In a short sketch [1], I showed the difference between a machine model and a biological model of intelligence. While the machine model processes input through layers according to an input/output scheme and delivers a product at the end, in the biological model stimuli excite a network.

We illustrate this with a cube of 1000 nodes, all connected to each other. Various sensors feed into this cube. When a stimulus arrives from these sensors, ten nodes (neurons) are excited, say, in the form of a pattern. This pattern corresponds to one specific stimulus from a single sensor or a combination of sensors.
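The sensor-to-pattern mapping described above can be sketched as follows. This is a minimal, hypothetical illustration in Python: the function name and the trick of seeding a PRNG with the stimulus id are my own stand-ins for the actual sensor-to-network wiring, which the text leaves open.

```python
import random

NUM_NODES = 1000     # 10 x 10 x 10 cube of fully connected nodes
PATTERN_SIZE = 10    # nodes excited per stimulus

def pattern_for_stimulus(stimulus_id):
    """Map a stimulus deterministically to a pattern of excited nodes.
    (Hypothetical sketch: seeding a PRNG with the stimulus id stands in
    for the real sensor-to-network wiring.)"""
    rng = random.Random(stimulus_id)
    return frozenset(rng.sample(range(NUM_NODES), PATTERN_SIZE))

# The same stimulus always excites the same pattern of ten nodes:
assert pattern_for_stimulus("sensor-A") == pattern_for_stimulus("sensor-A")
print(len(pattern_for_stimulus("sensor-A")))  # 10
```

A combination of sensors could simply be encoded as a combined stimulus id, yielding its own distinct pattern.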

With 1000 nodes (10 × 10 × 10), about 2.6 × 10^23 different patterns of ten are possible. Each pattern represents a ‘thought’. In a pattern of ten, three neurons represent the dimensions of space and a fourth the dimension of time; the rest are afferent stimuli and efferent impulses. Almost any number of further parameters can be added through other properties of the pattern, such as its geometry, topology, etc.
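The count can be checked directly: choosing 10 excited nodes out of 1000 is the binomial coefficient C(1000, 10).

```python
from math import comb

# Number of distinct patterns of 10 excited nodes in a 1000-node network:
n_patterns = comb(1000, 10)
print(f"{n_patterns:.2e}")  # prints ~2.6e+23
```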

Each of these patterns is assigned a meaning in terms of content and language.

Every ‘thought’ contains perception, storage and impulse in one; there is no sequential process. It only appears sequential, as the result of a succession of many such ‘thoughts’, which in the end can lead to a motor action. Thus there is neither backpropagation nor predictive coding.

If a congruent stimulus occurs again, it triggers the same ‘thought’. In reality, however, there are no congruent stimuli. That would also be disadvantageous: if stimuli were identical, there would be no change, and the brain would be a mechanical machine.

In other words, the brain recognizes when stimuli are similar within a tolerance. It therefore performs a structural comparison, for example using the cosine measure similarity = dot_product(A, B) / (norm(A) * norm(B)), where A and B are vectors representing the structures, and dot_product and norm are functions for the scalar product and the vector norm.
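The measure given above is standard cosine similarity and can be written out directly. A minimal sketch: the binary activation vectors and the tolerance value are illustrative assumptions, not part of the original model.

```python
import math

def cosine_similarity(a, b):
    """Structural similarity between two patterns encoded as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_similar(a, b, tolerance=0.9):
    """A stimulus re-triggers a 'thought' if it falls within tolerance."""
    return cosine_similarity(a, b) >= tolerance

# Binary activation vectors of two overlapping patterns:
a = [1, 1, 1, 0, 0]
b = [1, 1, 0, 1, 0]
print(round(cosine_similarity(a, b), 3))  # 0.667
```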

What is meaningfully separated in the analysis of the brain is in fact a single process. ‘Thoughts’ line up, overlap and nest within one another, resulting, for the observer, in a process that seems to follow an immaterial causal chain. Ontologically, however, causality results only from the coupling of structures (and metastructures).

Such a model can be scaled up almost arbitrarily. Although it then becomes more complex, it never reaches the limits of information-theoretic capacity, as complexity is constantly reduced.

All learning means abstraction, no matter how simple the stimulus and pattern are. Whether it is a movement or a computational process, complex patterns are grouped together as metastructures, resulting in a new, more abstract structure [2]. It may be possible to exhibit such structures using the Mapper algorithm from topological data analysis.
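One way such compression into a metastructure could look, purely as a hypothetical sketch: keep the nodes that recur across a group of similar patterns and discard the rest. The `min_share` rule here is my own stand-in for the abstraction step, not the Mapper algorithm itself.

```python
from collections import Counter

def metastructure(patterns, min_share=0.5):
    """Compress a group of patterns into a metastructure: keep only the
    nodes that recur in at least `min_share` of the patterns.
    (Hypothetical compression rule standing in for the abstraction step.)"""
    counts = Counter(node for p in patterns for node in p)
    threshold = min_share * len(patterns)
    return frozenset(n for n, c in counts.items() if c >= threshold)

# Three similar patterns share the core {1, 2}; the rest is detail:
patterns = [{1, 2, 3, 7}, {1, 2, 4, 8}, {1, 2, 5, 9}]
print(sorted(metastructure(patterns)))  # [1, 2]
```

The resulting metastructure is itself a pattern, so the same rule can be applied again to form the larger clusters the text describes.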

At this point, it must be emphasized once again that pattern formation is identical to thinking. There is no intermediary instance, no additional ‘operating system’ that makes thinking possible in the first place. It is comparable to the heartbeat: the beat is not caused by the heart, it is the heart that beats. Likewise, thinking is not caused by the brain; it is the brain that thinks. Since thoughts contain all aspects, from stimulus through memory to action, I will use the term holonome for this in the following.

A machine designed according to these principles does not emit data, it acts. In other words, it does not provide products such as translations, search results, etc., but simply acts independently.

Logical reasoning is the meaningful combination of two or more holonomes. Thinking becomes objectively logical when it uses concepts that are recognized as logical within accepted axioms. Thinking can therefore be understood as a meaningful sequence of holonomes.

How does such a machine act? The basic activities are programmed. Algorithms link the individual holonomes into chains of thought/action and themselves learn, for example in the sense of neuroevolutionary algorithms, not in a linear, process-oriented sense but with respect to the coupling of holonomes.

By using such algorithms, the system learns to search for holonomes that are capable of coupling. When learning progress becomes exponential, the system gains, in a very short time, the experience necessary to orient itself in the world. While the physical model [3] focuses on the individual neuron, the biological model focuses on neuronal structures; the individual neuron plays no role.
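The search for holonomes capable of coupling can be sketched as a greedy chaining rule. This is a hypothetical illustration: the Jaccard overlap and the tolerance threshold are my own assumptions, not the author's specification of the coupling criterion.

```python
def jaccard(a, b):
    """Set overlap as a stand-in similarity between node-set holonomes."""
    return len(a & b) / len(a | b)

def couple_chain(start, holonomes, similarity=jaccard, tolerance=0.4):
    """Greedily chain holonomes that are 'capable of coupling', i.e. whose
    similarity to the current holonome reaches the tolerance.
    (Hypothetical linking rule; the real coupling criterion is open.)"""
    chain = [start]
    pool = [h for h in holonomes if h != start]
    current = start
    while pool:
        best = max(pool, key=lambda h: similarity(current, h))
        if similarity(current, best) < tolerance:
            break
        chain.append(best)
        pool.remove(best)
        current = best
    return chain

h1, h2, h3 = frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({4, 5, 6})
# h1 couples to h2 (overlap 0.5); h3 overlaps h2 too little (0.2), so the
# chain of thought/action stops there:
print(couple_chain(h1, [h1, h2, h3]))
```

A neuroevolutionary variant would then mutate and select the similarity function and tolerance themselves, rather than fixing them as done here.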

The decisive step towards human-like intelligence and a consciousness-like state comes into play — in addition to the arousal principle and the holonomes — through two things:

1. Independent action in the sense of biological self-organization.
The basis is autocatalysis, i.e. actions are self-contained and always produce the prerequisites for further actions. There is no end to the process. In this way, metastructures are constantly formed, which join together to form larger clusters. In this way, a hotspot of informational density is gradually created. This acquires causal force and thus develops into the controlling ‘I’ [4].

2. Reflexivity. It is only possible if an instance (I) can mirror itself. It does this through feedback with the superego in which social norms are laid down. In humans, this is already programmed (DNA). The corresponding neuronal relationships in the brain can hardly be identified, but a machine can be programmed accordingly.

Conclusion: an intelligent machine designed according to the biological model has the following building blocks:

a. Patterns for stimuli defined in a close-meshed network as holonomes consisting of all the data required for an integrative action

b. Algorithmic control of the coupling of similar holonomes

c. Compression of data into metastructures

d. Comparison of metastructures with norms.

To what extent such a machine is desirable remains to be answered. It is far inferior to any calculator in its ability to think logically. On the other hand, it does more in the field of intuition than any computer ever can and acts independently.

— — — — — — — — —

[1] Stegemann, W., Next step to AGI?, https://medium.com/neo-cybernetics/next-step-to-agi-ca40944c73d9

[2] Stegemann, W., Consciousness as metastructures, https://medium.com/@drwolfgangstegemann/consciousness-as-metastructures-36ae906c36fa

[3] Stegemann, W., Neo-Cybernetics — from the physical to the biological model, https://medium.com/neo-cybernetics/neo-cybernetics-from-the-physical-to-the-biological-model-881580bff1ae

[4] Stegemann, W., Self-controlling living systems are hierarchical, https://medium.com/neo-cybernetics/self-controlling-living-systems-are-hierarchical-life-can-in-principle-be-described-as-an-695b1a83dcf2
