Artificial consciousness?

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics
Oct 25, 2023

First, let’s clarify the concept of consciousness. What is meant here is neither the medical concept (conscious vs. unconscious, as in loss of consciousness) nor the psychological one (conscious vs. unconscious in the Freudian sense), but the philosophical, or better, the ontological concept. However, instead of getting lost in nomological discourses with unclear categories and arbitrary epistemologies, we orient ourselves to what has emerged in the course of evolution, because humans and their brains are, after all, a biological system.

So the question is why consciousness exists and what its meaning and purpose might be. Let us describe consciousness as an organism’s ability to experience the world. Its survival advantage likely lies in enabling the organism to move optimally in the world, i.e. to orient itself. From this point of view, sensitivity would aid orientation: pain, for example, serves as a warning of danger. Consciousness is therefore a property of a highly differentiated living being that must, and can, orient itself in the world in a highly differentiated way. We attribute this trait or ability primarily to living beings with a central nervous system. Contrary to the panpsychist view, there is no evidence of conscious behavior and experience in all other living beings, let alone in inanimate nature. The emergence of consciousness is evidently related to the excitation of sensors, the transmission of impulses to the brain, and the generation of electrical voltage, and thus the excitation of nerve populations.

In contrast to the binary, process-oriented information processing of artificial neural networks, organisms process information through the excitation of nerves. This is a completely different principle from digital information processing. It is not about an output with which the process ends, but about the transformation, and thus the representation, of the external world in neuronal form [1]. The concept of form can be taken literally here. The representation is not one-to-one; rather, a neural pattern contains the information about the outside world (including the inner world) that is necessary for orientation: spatiotemporal coordinates as well as all other relevant parameters such as color, sound, haptics, etc. There is thus a logical aspect of orientation, which represents the logical relationship to the outside world. Logic here does not mean anything transcendent; it denotes the neuronal representation, in the form of a concrete pattern, that results from the causal relationship between organism and environment.

There is also an excitation aspect, which not only creates a pattern but subsequently triggers further electrochemical processes and thereby evaluates the logical aspect, i.e. produces a subjective ‘assessment’ of the respective situation. For simplicity, we abstract here from all the superimpositions and consolidations that make these processes highly complex and complicated. If we assign the logical aspect to cognition and the sensation aspect to subjective evaluation in the sense of an experience, we can conclude that an artificial replica of this orientation performance called consciousness is possible only for the cognitive-logical aspect, while the experience aspect is not. The latter would be reserved for biomolecular processes and could not be simulated in silico.
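As a toy caricature (not the author’s model), the distinction between a logical aspect (a pattern encoding the parameters needed for orientation) and an excitation aspect (an evaluation triggered by that pattern) might be sketched as follows. All names, fields, and thresholds here are invented for illustration; the sketch deliberately says nothing about whether such an evaluation involves experience:

```python
from dataclasses import dataclass

# Hypothetical sketch: a "percept" bundles the parameters the article
# lists as necessary for orientation (position, color, sound, etc.).
@dataclass
class Percept:
    position: tuple       # spatiotemporal coordinates
    color: str
    sound_level: float    # arbitrary 0..1 scale
    pain_signal: float    # arbitrary 0..1 scale

def logical_aspect(p: Percept) -> dict:
    """Form a pattern: the structured representation of the situation,
    standing in for the causal organism-environment relationship."""
    return {"where": p.position, "color": p.color, "sound": p.sound_level}

def excitation_aspect(p: Percept) -> str:
    """Evaluate the pattern: a crude 'assessment' of the situation.
    This stands in for the subjective evaluation the author argues
    cannot be replicated in silico -- here it is mere rule-following."""
    if p.pain_signal > 0.7:
        return "danger"   # pain as a warning of danger
    if p.sound_level > 0.8:
        return "alert"
    return "neutral"

percept = Percept(position=(2, 5), color="red", sound_level=0.3, pain_signal=0.9)
print(logical_aspect(percept))    # the representational pattern
print(excitation_aspect(percept)) # prints "danger"
```

The point of the sketch is only that the two aspects are separable in an artificial system: the pattern can exist and be evaluated without anything being felt.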

Why do we assign the logical aspect to cognition? In a biological system, the two cannot be separated, but the separation can be made analytically, given that machine consciousness is not biological anyway and must therefore be conceived differently from the outset. On this view, a logical relationship can be recreated by a machine, whereas a sensation cannot. Logic plays a role not only in the topological positioning of the organism in the world, but also in the linguistic juxtaposition of symbols. This linguistic logic constitutes the world of meaning with which we orient ourselves in the world in an abstract way. Nevertheless, an evaluation on the part of the artificial system would be possible. It would refer to the conformity of the system’s own actions with implemented norms, i.e. a counterpart to what we know as the superego. By mirroring its own ego against this superego, the machine would develop a reflexivity that we call self-consciousness. To what extent such self-consciousness, or consciousness in general, is actually possible without the sensation aspect could only be determined by experiment. However, such a model seems to presuppose the excitation model described above, not the binary processing model of previous AI.
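The ego/superego mirroring described above might be caricatured as a norm-checking loop. This is a minimal sketch under the assumption that norms can be made explicit as rules; the norm names, action fields, and thresholds are all hypothetical, and nothing here claims that rule-checking amounts to self-consciousness:

```python
# Hypothetical sketch: "superego" as a set of implemented norms against
# which the system's own proposed actions are mirrored.
NORMS = {
    "do_not_deceive": lambda action: not action.get("deceptive", False),
    "minimize_harm": lambda action: action.get("harm", 0.0) <= 0.2,
}

def superego_review(action: dict) -> dict:
    """Mirror a proposed action against the norm set; the review of the
    system's own output is the reflexive step the article describes."""
    violations = [name for name, norm in NORMS.items() if not norm(action)]
    return {
        "action": action["name"],
        "conforms": not violations,
        "violations": violations,
    }

# The "ego" proposes actions; each is checked before execution.
proposed = [
    {"name": "answer_truthfully", "deceptive": False, "harm": 0.0},
    {"name": "mislead_user", "deceptive": True, "harm": 0.5},
]
for action in proposed:
    report = superego_review(action)
    print(report["action"], "->",
          "conforms" if report["conforms"] else report["violations"])
```

On this sketch, the "evaluation" available to the machine is conformity with norms, not felt assessment, which is exactly the asymmetry the article draws between the logical and the sensation aspect.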

To the extent that such a machine were capable of machine self-reflection, one could speak of a self-controlling system. In any case, it would not be a mere zombie. But it would not have emotions in the human sense.

— — — — — — — — — — — — —

[1] Stegemann, W., Building Blocks of Artificial Biological Intelligence, https://medium.com/neo-cybernetics/building-blocks-of-artificial-biological-intelligence-2cdbfcd2b02e
