Machine and Man

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics · 6 min read · May 29, 2024

In the two previous articles, I tried to explain why machines (probably) cannot have consciousness, and then to describe the common logic of thought.

This is about the central differences between man and machine.

In fact, there are significant differences between machine and human thinking that are worth highlighting:

1. Algorithm-based vs. intuition-based:

Machine thinking operates on the basis of algorithms and formal rules, while human thinking is shaped by intuition, experience, and unconscious processes. People are able to gain insights even if they are not explicitly aware of the underlying rules.

Even though we are dealing with non-linear processes on both sides as far as the logic of thought is concerned, machine ‘thinking’ is embedded in a fixed algorithmic framework with a clear objective.

2. Linguistic vs. non-linguistic:

Human thinking often takes place in non-linguistic forms, such as images, metaphors, and emotions. In contrast, machine learning is primarily based on linguistic representations and symbolic systems.

3. Context-sensitive vs. context-independent:

People always understand language and concepts in the context of their culture, experiences and the respective situation. Machines, on the other hand, find it difficult to grasp this context and process it adequately.

4. Open vs. Closed System:

Human thinking is an open system that constantly integrates new information and adapts to it. Machine systems, by contrast, are usually closed systems with fixed boundaries.

5. Consciousness and self-reflection:

Whether machines can ever develop consciousness or true self-reflection remains an open question.

The aspects of nonlinearity and criticality are present in both human and machine learning. Nonlinearity describes how small changes in input data can lead to large differences in results — a phenomenon observed in both thought processes. Criticality refers to the ability to recognize and respond to crucial moments.
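
To make the nonlinearity point concrete, here is a minimal sketch in Python using the logistic map, a standard toy example of a nonlinear process (my illustration, not an example from the article): two starting values that differ by one part in a million end up far apart after a few dozen steps.

```python
# A classic toy illustration of nonlinear sensitivity (illustrative only):
# the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 is strongly
# nonlinear, and two almost identical starting values drift far apart.

def logistic_trajectory(x0, r=4.0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # the input differs by one millionth

print(f"result from x0 = 0.200000 : {a:.6f}")
print(f"result from x0 = 0.200001 : {b:.6f}")
print(f"difference after 50 steps : {abs(a - b):.6f}")
```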

It is conceivable that as artificial intelligence develops, machine learning processes will become more similar to human ways of thinking, especially in terms of integrating context and processing non-linguistic information.

6. Emotional intelligence:

Humans possess the ability to recognize, understand, and respond to emotions. Machines lack this emotional intelligence, even if they can simulate emotions. Research and development in the field of artificial intelligence (AI) has made progress in emotion recognition. Modern AI systems can analyze facial expressions, body language, and vocal features to infer people’s feelings. This technology, known as “affective computing” or “emotion AI,” is used in various fields, from improving the customer experience in retail to optimizing learning tools in education. However, although AI systems can recognize emotions, this is not the same as the human understanding of emotions.

AI is based on the analysis of data and patterns it receives from facial movements, speech and other signals. The ability to recognize emotions does not mean that the AI also “feels” those emotions or has a deep understanding of the human experience. Technology is a tool that can perform certain tasks, but it doesn’t possess the emotional depth or awareness that humans have.
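
As a purely hypothetical illustration of what such an "emotion AI" pipeline boils down to, the following Python sketch trains a classifier on invented feature vectors standing in for facial or vocal measurements; the feature names, labels, and data are placeholders and do not describe any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of an emotion-recognition pipeline reduced to its core:
# measurable signals in, a label out. In practice the features would come from
# facial movements or voice (e.g. mouth curvature, pitch variance); here they
# are random placeholders and the labels are invented for illustration.

rng = np.random.default_rng(42)
n_samples, n_features = 200, 5            # 5 made-up facial/vocal features
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)    # 0 = neutral, 1 = happy, 2 = angry (toy labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_sample = rng.normal(size=(1, n_features))
probs = clf.predict_proba(new_sample)[0]
print(dict(zip(["neutral", "happy", "angry"], probs.round(2))))

# The model only maps signals to labels; nothing in it "feels" the emotion it
# names, which is exactly the distinction drawn in the text.
```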

Emotion recognition by AI also raises ethical questions, especially regarding privacy and the accuracy of interpretations. Experts discuss the limitations of these technologies and the importance of context and cultural differences in the analysis of emotions.

7. Awareness and self-awareness:

People are aware of themselves and their environment. Machines have no consciousness in this sense, even if they can process environmental data.

8. Ethics and morals:

People can make ethical decisions and weigh moral considerations. Machines, on the other hand, follow programmed guidelines and cannot make genuine moral judgments.

9. Creativity and innovation:

People can think creatively and generate new ideas. Machine thinking is often limited to pattern recognition and data analysis, without any real creativity.

Similarities:

Learning:

Both humans and machines have the ability to learn, although the learning processes are different.

Problem solving:

Humans and machines can solve complex problems, with machines often being faster and more efficient in certain areas.

Adaptability:

Machines, especially those with machine learning, can adapt to new data and situations, much like humans adapt to new circumstances.
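
As a minimal sketch of this kind of adaptability (my own illustration, not taken from the article), the following Python snippet shows an online linear model that keeps adjusting its weights while the underlying relationship slowly drifts:

```python
import numpy as np

# Illustrative only: an online linear model updates its weights on every new
# example, so it can track a relationship that changes over time.

rng = np.random.default_rng(1)
w_est = np.zeros(2)                 # the model's current estimate
lr = 0.05                           # learning rate

for step in range(2000):
    # the "true" relationship slowly drifts, simulating a changing environment
    w_true = np.array([1.0 + step / 1000.0, -0.5])
    x = rng.normal(size=2)
    y = w_true @ x + rng.normal(scale=0.1)

    # online gradient step on the squared error for this single example
    error = w_est @ x - y
    w_est -= lr * error * x

print("final true weights     :", w_true)
print("final estimated weights:", w_est.round(2))
```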

The most important difference between machine and human, however, is that the machine cannot control itself. Self-control in living organisms takes place through a local maximum of information density which, by generating gradients, exerts causal force. This maximum is also the center of autocatalytic reproductive capacity, the lowest level of which is the genome.

So while machines perform algorithmic tasks, the goal of life is exclusively successful reproduction.

The core of this concept lies in the distinction between the autonomous functioning of living beings and the controlled functioning of machines. Living beings have the unique ability to self-regulate. This ability is based on complex biological processes that are controlled by information density within the cells and the genome. The density of information is highest in the areas responsible for controlling vital functions and responding to environmental stimuli.

This density of information enables living organisms to recognize gradients and react to them. Gradients are differences in the concentration of substances or in physical properties, and they are expressed in a higher probability of reaction. By responding to these gradients, living things can exert causal forces that allow them to adapt, survive, and reproduce.
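
As a toy model of this idea (my own sketch, not the author's), the following Python snippet shows an agent that accepts moves toward higher concentration more readily than moves away from it, which is one crude way to picture a gradient being expressed as a higher probability of reaction:

```python
import numpy as np

# Illustrative toy model: an agent senses a concentration field and biases its
# random movement toward higher concentration, loosely analogous to chemotaxis.

rng = np.random.default_rng(7)

def concentration(pos):
    # the concentration peaks at the origin and falls off with distance
    return np.exp(-np.sum(pos ** 2))

pos = np.array([2.0, -1.5])
for _ in range(200):
    proposal = pos + rng.normal(scale=0.1, size=2)   # small random move
    # moves up the gradient are always accepted; moves down only occasionally
    if concentration(proposal) > concentration(pos) or rng.random() < 0.2:
        pos = proposal

print("final position     :", pos.round(2))
print("final concentration:", round(float(concentration(pos)), 3))
```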

The genome plays a fundamental role in this, as it contains the blueprint for all biological functions of an organism. It is the basis for autocatalytic reproductive ability, i.e. the ability of an organism to reproduce and maintain itself.

This applies to all levels up to the brain. And here it becomes clear that consciousness is directly bound to physical processes, indeed lives exclusively on them, and is not something that supervenes on the physical.

Stimulus processing in the brain is a complex process in which sensory information from the environment is absorbed, analyzed and converted into patterns. These patterns can then be sent back to the sensory systems as feedback to guide the body’s perception and response.

The brain uses these patterns to optimize the sensory experience. This process is part of what is known as sensory integration, in which the brain learns to process and adapt stimuli more efficiently.

The emergence of consciousness is a complex phenomenon influenced by the interplay of many factors. A common view is that consciousness arises from the interaction of various brain processes, including feedback and the perception of one's own responsiveness.

Consciousness encompasses different states of experience, from general alertness to self-awareness. These states arise from specific interactions of different brain centers. Synaptic reorganization in specific nerve networks plays an important role in this process. As soon as these networks consolidate, cognitive or motor performance is automated, and awareness, for example in the form of attention, is no longer necessary.

Proprioception, i.e. the perception of the position and movement of one’s own body in space, is also an important part of consciousness. It provides the brain with continuous information about the position and movement of the body parts, which is crucial for balance, coordination, and responsiveness.

Consciousness is the result of the dynamic interaction of many brain regions and the temporal coincidence of brain waves, which enable a targeted transmission of information.

An organism is ultimately the result of an autocatalytic reproduction process, i.e. it follows a completely different principle than a machine does. This is a biological principle that cannot be applied to machines.

It quickly becomes clear that this description of consciousness cannot be applied to machines.

Consciousness is therefore directly linked to physicality, inseparable from it.

The “I” or subject of the experience is modeled by a system with maximum information or structure density.

This high density of information is physiologically reflected in a high readiness to react and sensitivity to stimuli.

This results in a “causality gradient” that gives the system a kind of agenthood and agency.

It is precisely this embodiment of an information-reactive “center” that could be the basis for the inner psychological dynamics of subjective experience. I feel, therefore I am.

Result:

The pure logic of thought seems to be the same for biological and artificial neural networks, as long as they are organized nonlinearly. It corresponds to what we call the cognitive aspect. It seems to be the only thing they have in common.

In everything else that makes up the human being, namely consciousness, emotions, intuition, personal experience, and self-regulation, man and machine are fundamentally different; indeed, one can say that in these respects they have nothing to do with each other.
