AI — a short interim assessment

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics · Jun 26, 2024

When you talk to an AI about highly complex topics, you can quickly get the impression that you are talking to a human. The answers seem structured, logical and sometimes even creative. One should, however, be clear about how an AI, or more precisely a large language model such as GPT-3 or PaLM, actually works.

These models use transformer-based neural networks with self-attention mechanisms to assemble text word by word, based on the statistical probability of which word best fits the sequence so far. They were trained on huge amounts of data using deep learning techniques and unsupervised pretraining. One quickly realizes that this has nothing to do with human thinking and understanding in the conventional sense, let alone consciousness or symbolic intelligence in the human sense.
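To make the word-by-word principle concrete, here is a deliberately minimal Python sketch. The function toy_logits is an invented stand-in for a transformer forward pass; a real model such as GPT-3 derives these scores from the whole context via self-attention, over a vocabulary of tens of thousands of subword tokens rather than six toy words.

```python
import numpy as np

# Toy vocabulary; real models use tens of thousands of subword tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(context):
    """Invented stand-in for a transformer forward pass: one score per
    vocabulary entry. A real model computes these from the context via
    self-attention; here a seeded RNG merely mimics the interface."""
    rng = np.random.default_rng(sum(len(w) for w in context))
    return rng.normal(size=len(VOCAB))

def softmax(x):
    e = np.exp(x - x.max())          # subtract max for numerical stability
    return e / e.sum()

def generate(prompt, steps=5):
    context = list(prompt)
    for _ in range(steps):
        probs = softmax(toy_logits(context))          # P(next word | context)
        context.append(VOCAB[int(np.argmax(probs))])  # greedy: most probable word
    return " ".join(context)

print(generate(["the", "cat"]))
```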

But let us imagine that new breakthroughs in computer vision, robotics and neuroscience made AI algorithms 100 times more efficient and the usable amounts of data 100 times larger, and that materials science, through advances such as so-called “molecular robotics”, could design the outer skin and mobility of a humanoid AI so realistically that, from the outside, it could no longer be distinguished from a human.
It may even become possible to integrate biological nerve cells into such a machine. We would then still be dealing with the same statistical principle of sentence creation and information processing. But the illusion of real intelligence and consciousness might be so perfect that we would be convinced we were actually facing a human being.

This illusion is supported by the fact that distributed, parallel information processing, pattern extraction, generalization and learning by adjusting weights take place in biological and artificial neural networks alike. There thus seems to be an overarching “net logic”, one that neuroscientists such as Gerald Edelman also describe in humans. However, this refers only to the purely cognitive, information-processing aspect.
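As a minimal illustration of “learning by adjusting weights”, here is a single linear “neuron” trained by gradient descent. The setup is invented for illustration and makes no claim of biological plausibility.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)         # inputs
y = 3.0 * x                      # hidden rule the neuron should discover
w, lr = 0.0, 0.1                 # initial weight, learning rate

for _ in range(100):
    pred = w * x                         # forward pass
    grad = ((pred - y) * x).mean()       # gradient of mean squared error wrt w
    w -= lr * grad                       # learning = adjusting the weight

print(f"learned weight: {w:.3f}")        # converges towards 3.0
```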

However, this should not obscure the fact that biological and artificial networks work completely differently at their core and follow different principles. In machines, only a statistical comparison and processing of patterns in raw data takes place, without semantic interpretation, assignment of meaning or phenomenal experience as we know it. In humans, by contrast, cognitive logic is integrated and interpreted in a multidimensional process involving neurotransmitters, hormones and emotions: the so-called endocrine system continuously “modulates” cognition through subjective, homeostatic factors.

In addition, many cognitive and emotional processes in humans run automatically and subconsciously, as studies on the processing of social signals, intuition and gut feeling show. On the surface, there is often no explicit logic visible at all; this holistic processing is captured by terms such as “intuition”. AIs, by contrast, have no access to such a rich subjective subconscious.

Quite apart from the fact that the emergence of self-identity and self-awareness in humans is directly tied to a complex physical, emotional and social structure of needs, the basic motivational structure and goal orientation of machines is also completely different: it is exclusively output-oriented, following training data and objective function, while humans are ultimately always concerned with fulfilling intrinsic needs and motivations.

Perhaps in the future, advances in the field of “Neural Symbolic AI” or “Neuro-Symbolic Systems” will make it possible to generate meta-structures and more abstract representations of knowledge in machines, i.e. to summarize results in hierarchically organized levels of abstraction and to compare them logically with new data (a minimal sketch below illustrates the idea). This would significantly increase the performance of AI at the purely information-processing level. However, it would not change the fundamental, qualitative difference between human and machine “intelligence”.
[Incidentally, we humans think exactly the other way around: first we grasp a thought, then we struggle for the words to express it (leaving aside the complex dialectic between thought and language).]
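A toy sketch of the neuro-symbolic idea, where all names, rules and thresholds are invented for illustration: a neural component reduces raw input to confidences for low-level predicates, and an explicit symbolic layer then reasons over them at a higher level of abstraction.

```python
def neural_perception(image_id):
    """Stand-in for a trained neural network: maps raw input to
    confidences for low-level predicates. In a real system these
    scores would come from a classifier, not a hard-coded dict."""
    return {"has_fur": 0.92, "has_wings": 0.03, "lays_eggs": 0.11}

RULES = [
    # (conclusion, required predicates): one tiny level of abstraction
    ("mammal", ["has_fur"]),
    ("bird",   ["has_wings", "lays_eggs"]),
]

def symbolic_reasoning(predicates, threshold=0.5):
    """Explicit logic over neural outputs: a rule fires only if all of
    its premises exceed the confidence threshold."""
    return [head for head, body in RULES
            if all(predicates.get(p, 0.0) >= threshold for p in body)]

facts = neural_perception("img_42")
print(symbolic_reasoning(facts))   # -> ['mammal']
```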

For evolutionary psychological reasons, we humans tend to anthropomorphize and personalize things that we do not fully understand. In the past we attributed thunderstorms to a thunder god; today, faced with powerful computing machines, we mistakenly project attributes such as mind or consciousness onto them.

AI becomes truly dangerous only when we hand over highly ethically sensitive decisions to it completely, without robust safeguards in place. An algorithm can arrive at undesirable or dangerous results because of narrow objective functions or biased training data. For example, an algorithm designed to maximize the long-term stability of ecosystems could conclude that it would be most efficient to wipe out humanity. So until AI has fully internalized human values, ethics and preferences, it will remain, like any other technology, a potentially useful but also risky tool that must be constantly questioned critically and used responsibly.
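The risk of a narrow objective can be shown in a few lines. This deliberately naive sketch, in which the stability() model and all numbers are invented, optimizes “ecosystem stability” alone and therefore picks the catastrophic optimum; an explicit constraint, standing in for a safeguard, is what rules it out.

```python
def stability(humans_millions):
    """Toy objective: fewer humans -> less disturbance -> higher score.
    An invented model, chosen only to expose the misspecification."""
    return 1.0 / (1.0 + 0.001 * humans_millions)

options = [0, 10, 100, 1000, 8000]       # candidate "policies" (millions of humans)

best = max(options, key=stability)       # optimizes ONLY the stated objective
print(best)                              # -> 0: best for the metric, catastrophic for us

# A safeguard is an explicit constraint the objective itself never encoded:
safe = max((o for o in options if o >= 8000), key=stability)
print(safe)                              # -> 8000
```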
