Next step to AGI?

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics · Oct 19, 2023

Intelligence presupposes consciousness; consciousness presupposes subjectivity; and subjectivity is a property of all life. AGI as a machine intelligence built on the principles of natural intelligence therefore needs subject status. The basis for this is an autopoietic system, which in turn rests on the principle of autocatalysis.

I have described three layers [1] that must be part of an autocatalytic system: 1. the program level (analogous to DNA), 2. the functional level (analogous to the organs) and 3. the cognitive level (analogous to the CNS). All three can be separated only analytically; in reality they are integrated.

The basis of my model is holonoms. I use this term in reference to the physical one, which denotes a system in a deterministic situation, characterized by the corresponding morphology (network). The basis of a holonom is a network that is stimulated and changed by the excitation. The change is to be understood as an adaptation to the source of excitation. It proceeds as a change in the valence of nodes and edges, through which the network, i.e. the holonom, (re)aligns itself. From a physical point of view, a holonom can be called an excitation state of a certain section of the network; from a psychological point of view, it can be called a thought.

Holonoms couple associatively according to their valence-based compatibility. The excitation of a holonom can be seen as a diffusion process [2], which results in a certain topology of excitation that at the same time constitutes the memory. The coupling of many holonoms forms a supersystem in which the coarse-grained abstractions are bundled, producing a structural density (information density) that has a controlling effect. There is constant feedback between the holonoms and this control system, which continuously adjusts both the relationship between the two and the holonoms themselves. Holonoms or holonomic groups store their experience (abstractions) directly, i.e. without a separate external memory; they are, so to speak, the memory [3]. Depending on the sensory input, they are sorted into auditory, visual, haptic and cognitive stimuli. All these groups are hierarchically networked with each other; the direction results from the first stimulus.
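
To make this concrete, here is a minimal sketch, assuming one plausible reading of the above: a holonom as a small weighted network whose excitation spreads by diffusion and whose edge valences adapt to that excitation, so that the resulting topology is itself the memory. All names and parameters (Holonom, excite, the diffusion and adaptation rates) are illustrative assumptions, not an existing implementation.

```python
import numpy as np

class Holonom:
    def __init__(self, n_nodes, seed=0):
        rng = np.random.default_rng(seed)
        # Edge valences double as the memory: there is no separate store.
        self.w = rng.uniform(0.0, 0.1, size=(n_nodes, n_nodes))
        self.w = (self.w + self.w.T) / 2          # undirected network
        self.x = np.zeros(n_nodes)                 # node excitation

    def excite(self, stimulus, steps=20, diff=0.2, adapt=0.05):
        """Diffuse a stimulus through the network and adapt valences to it."""
        self.x = np.asarray(stimulus, dtype=float)
        for _ in range(steps):
            # Diffusion: excitation spreads along weighted edges
            # (graph-Laplacian form, total excitation is conserved).
            self.x += diff * (self.w @ self.x - self.w.sum(1) * self.x)
            # Adaptation: co-excited nodes strengthen their edge valence,
            # so the excitation topology itself becomes the memory trace.
            self.w += adapt * np.outer(self.x, self.x)
            self.w = np.clip(self.w, 0.0, 1.0)
        return self.x

stim = np.zeros(8)
stim[0] = 1.0
h = Holonom(8)
print(h.excite(stim))   # resulting excitation pattern = a 'thought'
```
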
The supersystem is mirrored in a superego system, which is implanted from the outside as a value system and thereby achieves reflexivity. Through the constant feedback between the two, the supersystem is continuously adapted. A virtual ME emerges.

Holonoms are constituted not only on a linguistic level, but also on a symbolic level, on which all sensory modalities are represented, as well as on an action-reflex level. Sensory stimuli and motor efferents combine within a holonom.

The training of such a machine is therefore multimodal. The linking of the different stimuli must be learned, as must their linkage with motor actions.
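
As a hedged illustration of such multimodal linking, the following sketch binds visual, auditory and motor vectors into one associative pattern using a simple Hebbian outer-product store; this mechanism is my assumption for the example, not a mechanism specified in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
visual = rng.standard_normal(16)
audio  = rng.standard_normal(16)
motor  = rng.standard_normal(16)

# Bind the three modalities into one pattern and store it associatively.
pattern = np.concatenate([visual, audio, motor])
W = np.outer(pattern, pattern)          # one-shot Hebbian association

# Recall: present only the visual part and let the association fill in
# the auditory and motor components (pattern completion).
probe = np.concatenate([visual, np.zeros(16), np.zeros(16)])
recalled = W @ probe
motor_recalled = recalled[32:]
print(np.corrcoef(motor_recalled, motor)[0, 1])  # close to 1.0
```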

The cognitive layer of a holonom describes the way in which the different stimuli are linked. The functional layer specifies how nodes and edges change, i.e. how the material basis has to act. The program level specifies how a stimulus propagates from one node to another. It ensures the autocatalytic character of the system by making all processes run cyclically, so that every result is the precondition for new processes; there is no process logic of the form input > black box > output at which the process ends.
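
A toy sketch of this ring-like process logic, under the assumption that each level can be caricatured as a simple function: every output immediately re-enters the cycle as the next input, so no step is a terminal output. The function bodies are placeholders, not the author's specified mechanisms.

```python
def program_level(state):
    """How excitation propagates node to node (placeholder rule)."""
    return [0.9 * s for s in state[1:]] + [0.9 * state[0]]

def functional_level(state):
    """How nodes and edges change, i.e. how the material basis acts."""
    return [min(1.0, s + 0.01) for s in state]

def cognitive_level(state):
    """How stimuli are linked into a pattern (here: just normalized)."""
    total = sum(state) or 1.0
    return [s / total for s in state]

state = [1.0, 0.0, 0.0, 0.0]
for step in range(5):
    # The three levels are only analytically separate: each result
    # immediately becomes the precondition of the next cycle.
    state = cognitive_level(functional_level(program_level(state)))
    print(step, [round(s, 3) for s in state])
```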

Sensory stimuli are thus stored holonomically. Intensity and coupling strength determine how strongly a stimulus currently guides action, or whether it is only weakly stored and thus lies below the threshold of what we call the subconscious; in that case it is activated when needed, by coupling with a strong stimulus. Holonoms are coarse-grained clusters that also couple in a coarse-grained way, i.e. a certain coupling factor (K) sufficient for coupling is required. This factor represents a certain topology of the holonom.
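
The role of the coupling factor K can be illustrated as follows; the cosine overlap used as the coupling measure and the chosen value of K are assumptions for the example.

```python
import numpy as np

K = 0.6  # minimum coupling factor required for activation (assumed)

def coupling(stored, stimulus):
    """Topological overlap between a stored and a current pattern."""
    a, b = np.asarray(stored, float), np.asarray(stimulus, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

stored = [1.0, 0.8, 0.1, 0.0]
strong = [0.9, 0.9, 0.0, 0.1]   # similar topology -> couples
weak   = [0.0, 0.1, 1.0, 0.9]   # dissimilar       -> stays latent

for stim in (strong, weak):
    k = coupling(stored, stim)
    print(f"K={k:.2f}:",
          "couples, guides action" if k >= K else "below threshold, subconscious")
```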

We know from synchronization research that clusters can synchronize even if they oscillate asynchronously [4]. In this way, stored patterns can couple with newly perceived patterns and join into a common pattern that takes on the values of the stored pattern, the perceived pattern, or a mixture of both, remains asynchronous in parts, or even supports synchrony through asynchrony.
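
The reference here is to oscillator synchronization [4]; a standard Kuramoto model, sketched below, shows how two clusters with different natural frequencies can nonetheless lock into a common pattern once the coupling is strong enough. Parameters are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
# Two clusters with different natural frequencies.
omega = np.concatenate([np.full(10, 1.0), np.full(10, 1.3)])
theta = rng.uniform(0, 2 * np.pi, n)
coupling, dt = 1.5, 0.05

for _ in range(2000):
    # Each oscillator is pulled toward the phases of all others.
    phase_diff = theta[None, :] - theta[:, None]
    theta += dt * (omega + coupling * np.sin(phase_diff).mean(axis=1))

# Order parameter r: ~0 = asynchronous, ~1 = a common, merged pattern.
r = abs(np.mean(np.exp(1j * theta)))
print(f"synchronization r = {r:.2f}")
```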

No human brain functions are transferred or simulated; rather, principles of how the brain works are applied. A machine cannot develop natural intelligence; at best it can imitate natural cognitive abilities, and consciousness is possible for it only without subjective sensation. This means that this type of consciousness has a purely cognitive character.

An architecture of this machine ‘brain’ based on holonoms requires a completely new ‘morphology’. In such networks, data is not passed from a to b; rather, networks are excited. This excitation takes place in a specific multidimensional topology. The material structure is therefore not linear, but consists of fine fiber bundles.

The logic of action or thought arises from the linking of holonoms in such a way that the link itself must be seen as a holonom. It does not follow binary logic, but contains an uncertainty principle. Logic is therefore something learned that corresponds to a yes-no logic only in special cases; the rule is ‘maybe’. Fuzzy logic probably comes closest to this.
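
A minimal sketch of this ‘maybe’ logic using standard fuzzy operators (Zadeh min/max/complement): truth values live in [0, 1], and crisp yes-no logic is recovered only at the endpoints. The example judgments are invented for illustration.

```python
def f_and(a, b): return min(a, b)   # fuzzy conjunction
def f_or(a, b):  return max(a, b)   # fuzzy disjunction
def f_not(a):    return 1.0 - a     # fuzzy negation

# Two learned, graded judgments rather than binary facts:
looks_familiar = 0.7
context_fits   = 0.4

maybe_recognized = f_and(looks_familiar, context_fits)
print(maybe_recognized)                               # 0.4 -> 'maybe'
print(f_or(maybe_recognized, f_not(looks_familiar)))  # still graded, not yes/no
```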

In the search for something new, the brain follows a logic that roughly corresponds to that of NeuroEvolution of Augmenting Topologies (NEAT) [5]: it searches spaces of possibility and generates compatible variants there. In contrast to NEAT, the focus is not on the new but on the possible, and the larger the space of possibility, the more new variants emerge, which is important for non-linear development. New things thus arise not according to Darwin’s logic of purely random mutation, but through constant adaptation to spaces of possibility via feedback loops.
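
The contrast with NEAT might be sketched like this: variants are generated by mutation, but selection keeps those that remain compatible with the current space of possibility, which itself adapts via feedback. The feasibility test and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def compatible(v, center, radius):
    """A variant counts as 'possible' if it lies within the possibility space."""
    return np.linalg.norm(v - center) <= radius

center, radius = np.zeros(4), 2.0      # the current space of possibility
population = [rng.standard_normal(4) * 0.1 for _ in range(8)]

for generation in range(20):
    variants = [v + rng.standard_normal(4) * 0.3 for v in population]
    # Feedback loop: only compatible variants survive, and the space
    # widens slightly, so a larger space admits more new variants.
    survivors = [v for v in variants if compatible(v, center, radius)]
    population = survivors or population
    radius *= 1.02

print(len(population), "compatible variants after adaptation")
```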

To clarify: consider a multidimensional excitation pattern within a multidimensional network structure. When it is compared with a pattern resulting from a current perception, a compatibility pattern emerges that consists of multidimensional similarities and differences. The differences between the two generate the logic that, compared against the superego logic, yields the subjective logic of thought and action. From the subject’s point of view, logic is thus not objective logic in the sense of objective truth, but subjective meaningfulness.
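
A small numerical illustration of this comparison, with assumed arrays: the element-wise agreement of a stored and a perceived pattern forms the compatibility pattern, while their differences supply the material for the subjective logic.

```python
import numpy as np

stored    = np.array([[0.9, 0.1], [0.2, 0.8]])   # stored excitation pattern
perceived = np.array([[0.8, 0.3], [0.1, 0.9]])   # current perception

similarity = 1.0 - np.abs(stored - perceived)    # where the patterns agree
difference = perceived - stored                  # what drives the new 'logic'

print("compatibility pattern:\n", np.round(similarity, 2))
print("differences feeding subjective logic:\n", np.round(difference, 2))
```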

In formal logic, therefore, such a system is far inferior to current AI systems. In return, it is capable of what we call intuition, since it includes all levels of interaction with the environment.

And there is no black box, because all processes can be traced and controlled.

It becomes clear that a human-oriented AI has nothing to do with computer-based AI. The analogy between the computer and the brain is a false analogy that does not exist in reality.

Consequently, learning in such a system is completely different from that in current AI systems. It is more time-consuming, because every word, every meaning, every stimulus has to be ‘learned’ as a holonom. Once this has happened, however, it can be replicated at will.

It remains to be mentioned that the human brain constantly reduces complexity and thus not only conserves resources, but also permanently increases the amount of information [6].

The term AGI evokes the idea that there is a general intelligence in which humans and machines alike can participate. This is a fallacy. Machines are not living beings and never will be. Therefore, the terms human and artificial intelligence should be retained to indicate this difference. Strong AI would then be human-like, weak AI would be computer-based.

Finally, it should be noted that the technologies required here have yet to be developed, both in terms of the material basis and of the way in which such a system can learn.

— — — — — — — — — — — — — — — — — —

[1] Stegemann, W., Human vs. Machine Intelligence, https://www.facebook.com/wolfgang.stegemann.7/posts/pfbid02WJcADx8pDxiBzqFktBkjhNqoe5cx47xaceis7zsBm8ACnBHT8Qja1Lay1AQrJENol

[2] Karlbauer M., et al., Composing Partial Differential Equations with Physics-Aware Neural Networks, Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10773–10801, 2022.

[3] Feldmann, J., Youngblood, N., Wright, C.D., et al. All-optical spiking neurosynaptic networks with self-learning capabilities. Nature 569, 208–214 (2019). https://doi.org/10.1038/s41586-019-1157-8

[4] Kassabov, M., et al., A Global Synchronization Theorem for Oscillators on a Random Graph, Chaos 32, 093119 (2022). https://doi.org/10.1063/5.0090443

[5] Sosa, F.A., Stanley, K.O., Deep HyperNEAT: Evolving the Size and Depth of the Substrate, Evolutionary Complexity Research Group Undergraduate Research Report, University of Central Florida, Department of Computer Science, 2018. https://drive.google.com/file/d/1VfsUd4iSsAcfSPcszYXZLrDa7-39yPEg/view

[6] Stegemann, W., What is complexity? https://medium.com/neo-cybernetics/what-is-complexity-b810ecc794ad
