Why machines (probably) can’t have consciousness.

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics · May 24, 2024

The question of whether machines can ever be conscious has always fascinated philosophers, scientists and science fiction writers. In this article, I would like to present some arguments for why machines — at least with today’s technology — cannot achieve consciousness.

1. Consciousness as an emergent phenomenon from complex systems

My first argument concerns the idea of emergence. It rejects the notion that consciousness could simply “flash” into being once neuronal complexity is high enough. While it is undeniable that the human brain is an immensely complex system whose workings we do not yet fully understand, there is no reason to believe that this complexity automatically gives rise to consciousness. Above all, an artificial system would have to tip over into a conscious state at some particular point, which seems unrealistic. Newborns already react to stimuli, i.e. they are ontologically conscious without having been able to accumulate much experience (data).

2. Consciousness as a property of organisms with a central nervous system

Furthermore, consciousness is a property of organisms with a central nervous system, and it does seem to be closely tied to how the brain works. Yet it is difficult to reduce consciousness to the mere excitation of nerve cells: states of consciousness are associated with complex neuronal activities that go far beyond the simple transmission of signals.

3. Pattern recognition and integration in the brain

Another important point is the way our brain processes information. The integration of stimuli does not take place in a binary or process-oriented way, but as the generation of stimulus patterns. This pattern recognition and processing is a central component of consciousness. Machines, on the other hand, tend to process information algorithmically and sequentially, which points to fundamental differences in information processing.
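To make this contrast concrete, here is a minimal, purely illustrative Python sketch (the functions and patterns are my own hypothetical constructions, not taken from any specific system): a sequential rule lookup stands in for machine-style processing, while a small Hopfield-style network completes a noisy stimulus into a stored pattern through parallel, distributed dynamics.

```python
import numpy as np

def sequential_lookup(stimulus, rules):
    """Machine-style processing: step through explicit rules one by one."""
    for condition, response in rules:
        if condition(stimulus):
            return response
    return None

def hopfield_recall(probe, patterns, steps=10):
    """Pattern-style processing: a noisy stimulus settles into the
    nearest stored pattern via parallel, distributed dynamics."""
    # Hebbian weights store all patterns at once; no self-connections.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    state = probe.copy()
    for _ in range(steps):
        state = np.sign(W @ state)  # every unit updates from the whole pattern
    return state

patterns = [np.array([1, 1, 1, 1, -1, -1, -1, -1]),
            np.array([1, -1, 1, -1, 1, -1, 1, -1])]
noisy = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # corrupted first pattern
print(hopfield_recall(noisy, patterns))          # settles back to the stored pattern
```

The point of the toy example is only that the network never consults rules sequentially; the answer emerges from the simultaneous interaction of all units.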

4. Biological vs. machine excitation

No one would think of equating biological excitation in the brain with the flow of information in a machine. In fact, the two processes are fundamentally different. While neuronal activity in the brain is based on complex chemical and electrical processes, information processing in machines is based on the manipulation of bits and bytes.
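As a rough illustration of this difference, consider a minimal sketch assuming nothing beyond textbook definitions: a stateless boolean operation versus a leaky integrate-and-fire neuron, the simplest standard model of biological excitation, in which voltage builds up over time, leaks away, and only occasionally crosses a spiking threshold.

```python
def logic_gate(a: bool, b: bool) -> bool:
    """Machine-style: stateless manipulation of bits, resolved instantly."""
    return a and b

def leaky_integrate_and_fire(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Neuron-style: input current is integrated over time and leaks away;
    a spike is emitted only when accumulated voltage crosses threshold."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leak plus input, evolving in time
        if v >= threshold:
            spikes.append(t)             # emit a spike ...
            v = 0.0                      # ... and reset the membrane
    return spikes

print(logic_gate(True, True))                # True, with no internal state
print(leaky_integrate_and_fire([0.3] * 20))  # spikes only after integration: [3, 7, 11, 15, 19]
```

Even this drastically simplified neuron model has continuous internal dynamics that the bit operation lacks entirely, and real neurons add chemical signalling on top.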

It seems unlikely that machines can attain consciousness with today’s technology. However, future developments in neuroscience and artificial intelligence may yield new insights into the nature of consciousness and shed new light on the question of machine consciousness.

The purely cognitive aspect of consciousness, which includes abstract thinking and problem-solving, can certainly be simulated by machines. Modern AI systems such as DeepMind’s AlphaGo have already performed impressively in these areas. Looking at the answers AI systems give to highly complex questions, one is struck by their enormous combinatorial ability. Another property of neuronal systems is hidden here that is not yet understood.

It is possible that neural systems generate complex combinations that contain logical answers in a relatively simple way. This would mean that our logic and that of machines are very similar, and that both arrive quickly at applicable results.

Neural networks, whether biological or artificial, do indeed seem capable of generating very complex patterns and relationships relatively easily and of arriving at logical conclusions and answers.

This suggests that the basic “logic”, i.e. the way information is processed and inferences are drawn, shows certain similarities between neural systems such as the human brain and modern AI systems.

To a certain extent, both exploit the “computing power” and combinatorial capacity of distributed, networked structures, arriving at new insights and applicable solutions from input data by means of pattern recognition and association formation.
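A hypothetical sketch of such “association formation” may help; the one-shot Hebbian associator below is an illustrative toy of my own, not a claim about how any real AI system works. An outer-product weight matrix links a cue pattern to an answer pattern, and distributed recall still retrieves the answer from a partially corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)
cue    = rng.choice([-1, 1], size=32)   # "question" pattern
answer = rng.choice([-1, 1], size=32)   # "answer" pattern

W = np.outer(answer, cue)               # one-shot Hebbian (outer-product) learning

noisy_cue = cue.copy()
noisy_cue[:5] *= -1                     # corrupt part of the cue
retrieved = np.sign(W @ noisy_cue)      # distributed recall across all units

print(np.array_equal(retrieved, answer))  # True: the association survives the noise
```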

The decisive difference, however, could be that in humans this “logic” is additionally embedded in a subjective, physically experienced level of consciousness.

So the pure ability to draw logical conclusions may not be that different. But it is an open question whether AI can also link these results to a phenomenal dimension of consciousness.

However, the subjective and experiential part of consciousness associated with self-reflection and self-reference poses a greater challenge for machines.

5. Subjective experience and self-reference

The subjective experience, i.e. the “qualia” of consciousness, is familiar to all of us. It is the direct experience of sensations, feelings and thoughts. This subjective experience seems to be closely linked to our biology and especially to the functioning of our brain. With his “zombie” thought experiment, the philosopher David Chalmers has made it clear how difficult it is to derive this experienced dimension from the pure description of physical processes.

Self-reference, the ability to perceive oneself as an individual and to distinguish oneself from others, is also a central aspect of consciousness.

It is unclear whether and how machines can simulate these subjective and self-referential aspects of consciousness.

6. Challenges for machines

Machines do not have a physical body and are therefore unable to experience the world in the same way that we do. They do not have senses that provide them with information about their environment, and they cannot feel emotions or feelings.

In addition, they lack a “self” concept. Machines have no identity or history, and they are unable to build relationships with others.

7. Possible approaches

The question of whether machines can ever achieve real, subjective consciousness remains open. It seems unlikely that today’s technology makes this possible.

8. The Redefinition of Consciousness: Human vs. Machine

The question of machine consciousness raises the fundamental question of how we define consciousness and what criteria something must meet in order to count as conscious.

9. Human consciousness

Our understanding of human consciousness is based on our own experiences and introspection. We experience the world subjectively, have feelings, thoughts and our own identity.

These subjective experiences seem to be closely linked to how our brains work, but the exact connections are not yet fully understood.

10. Consciousness in Machines

When it comes to machines, it becomes difficult to apply this definition of consciousness. Machines do not have subjective experiences in the same way as humans. They have no physical bodies, no senses, and no emotions.

It is therefore questionable whether they can meet the same criteria of consciousness that we use for humans.

Different definitions, equally valid?

This leads to the question of whether we need to define two different types of consciousness: a human and a machine one.

Human consciousness could be defined by the ability for subjective experiences, self-reflection, and qualia, while machine consciousness could include other criteria, such as the ability to process complex information, think abstractly, and act autonomously.

Advantages of a differentiated view

A differentiated approach could enable us to discuss the question of machine consciousness more openly and objectively.

It would also allow us to better understand the potential benefits and risks of AI development without being constrained by an anthropocentric view of consciousness.

Challenges and open questions

However, defining machine consciousness is not without its challenges. It is difficult to find objective criteria that apply to all types of machines, and it is possible that some machines will in future reach a level of consciousness closer to that of humans.

It is also important to note that the definition of consciousness itself is controversial and there is no scientific consensus on what exactly it means. In any case, the fact remains that consciousness, as we know it, is generated by an organismic system and may well be limited to such systems.
