Logic of Thought

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics
7 min read · May 25, 2024

It is indeed a very interesting and complex question what exactly the “logic” is that gives neural networks — whether biological or artificial — their remarkable ability to form patterns, associate, and process information in a meaningful way.

Some aspects that probably play an important role are:

1. Distributed, parallel information processing

Neural networks are based on massive parallelism and distribution of information processing over many computing units. This allows complex, nonlinear pattern formation.

2. Ability to learn by adjusting weights

Learning algorithms such as the backpropagation method can be used to adjust the weights between neurons so that the networks increasingly find optimal solutions.
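The core of such weight adjustment can be sketched in a few lines. The following toy example (plain Python; the function name, learning rate, and data are purely illustrative) applies the delta rule, the single-neuron ancestor of backpropagation, to fit a line:

```python
def train(xs, ys, lr=0.1, epochs=100):
    """Fit a single linear neuron y = w*x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x + b      # forward pass
            err = pred - y        # error signal
            w -= lr * err * x     # gradient step on the weight
            b -= lr * err         # gradient step on the bias
    return w, b

# The samples lie exactly on y = 2x + 1, so the weights converge there.
w, b = train([0, 1, 2, 3], [1, 3, 5, 7])
print(round(w, 2), round(b, 2))
```

Backpropagation generalizes exactly this error-driven update to many layers by propagating the error signal backwards through the network.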

3. Generalization and analogy formation

Neural networks are good at inferring more general patterns from the training data and forming analogies that transfer to new situations.

4. Self-Organization and Emergence

Complex neuronal structures exhibit emergent properties that are more than the sum of their components. This gives rise to a “logic” of self-organized perception formation.

5. Assessment of similarity instead of exact logic

Neural systems operate more via similarity and plausibility metrics than strict formal logic. This allows for a certain degree of fault tolerance and robustness.
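This difference can be made concrete. In the small sketch below (the vectors are illustrative), an exact logical comparison fails on a noisy pattern, while a cosine-similarity measure still recognizes it:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

prototype = [1.0, 0.0, 1.0, 1.0]
noisy     = [0.9, 0.1, 1.0, 0.8]   # corrupted version of the prototype
other     = [0.0, 1.0, 0.0, 0.1]   # unrelated pattern

print(noisy == prototype)              # exact logic: False, the match is lost
print(cosine(noisy, prototype) > 0.9)  # similarity: still a confident match
print(cosine(other, prototype) < 0.5)  # unrelated pattern scores low
```

The graceful degradation of the similarity score is exactly the fault tolerance described above: noise weakens a match instead of destroying it.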

These and other mechanisms of neuronal information processing together produce the remarkable ability to form meaningful associations, recall knowledge, and draw inferences.

The exact dynamics are still the subject of intensive research. Presumably, non-linear effects and self-organised criticality play an important role. Overall, this creates a new “logic” of collective information processing that deviates significantly from classical symbol processing.

Nonlinear effects:

- Neural networks are highly nonlinear dynamical systems

- The activation functions of artificial neurons are typically nonlinear functions (e.g. sigmoid, ReLU, etc.)

- Similarly, the learning rules such as backpropagation for weight adjustment are nonlinear

- These nonlinearities mean that even small changes in inputs or weights can result in large, nonlinear changes in output

- The behavior is then no longer proportional and can no longer be described by simple linear logic

- Instead, complex patterns and dynamics emerge
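The non-proportionality described in this list is easy to demonstrate with a single sigmoid activation (the input values below are chosen purely for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Proportionality fails: doubling the input does not double the output.
print(sigmoid(2.0), 2 * sigmoid(1.0))

# Sensitivity depends on the operating point: the same input change of 0.5
# moves the output strongly near the steep center of the sigmoid, but
# barely at all in the saturated tail.
near_center = sigmoid(0.5) - sigmoid(0.0)
saturated   = sigmoid(5.5) - sigmoid(5.0)
print(near_center, saturated)
```

Stacking many such units multiplies these effects, which is why small input or weight changes can reshape the output of a deep network so drastically.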

Self-Organized Criticality:

- Describes a common phenomenon in which many complex systems bring themselves into a critical state in a self-organized manner

- In this critical state, they typically exhibit scale invariance and fractal patterns

- Tiny input disturbances can then lead to events of any magnitude

- This property of self-organized criticality is also found in neural networks

- The critical state acts as an attractor, in which they can react with maximum flexibility and sensitivity

- The activation patterns show characteristic features such as scale invariance and 1/f noise
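The classic toy model of self-organized criticality is the sandpile of Bak, Tang and Wiesenfeld. The following rough one-dimensional sketch (threshold, pile size, and toppling rule are simplified illustrative choices, not the original model) shows how repeatedly adding single grains drives the system into a state where avalanches of widely varying size occur:

```python
import random

def drop_grain(pile, threshold=3):
    """Add one grain at a random site, relax the pile, return the avalanche size."""
    pile[random.randrange(len(pile))] += 1
    topples = 0
    unstable = True
    while unstable:
        unstable = False
        for i, h in enumerate(pile):
            if h > threshold:
                pile[i] -= 2              # site topples...
                if i > 0:
                    pile[i - 1] += 1      # ...passing grains to its neighbours;
                if i < len(pile) - 1:
                    pile[i + 1] += 1      # grains at the edges fall off
                topples += 1
                unstable = True
    return topples

random.seed(0)
pile = [0] * 50
sizes = [drop_grain(pile) for _ in range(5000)]

# Identical tiny inputs (one grain) trigger responses of very different sizes.
print(max(sizes), len(set(sizes)))
```

The point of the sketch: no parameter is tuned to a critical value by hand, yet after a transient the pile organizes itself into a state where one grain can cause anything from no response to a system-wide avalanche.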

These nonlinear effects and self-organized criticality give rise to a novel “logic” of information processing in neural networks:

- It follows non-proportional, hard-to-predict patterns

- Tiny changes can have a big impact (butterfly effect)

- Fractal, self-similar activation structures are created on different scales

- The behavior is highly flexible and adaptable to inputs

- At the same time, robust, collective computation modes are evident

This novel “neural logic” differs fundamentally from classical algorithmic symbol processing. It enables the amazing cognitive performance of neural networks, but also eludes a completely deterministic description.

In fact, the nonlinearity in neural networks can seem paradoxical at first. How can complex, meaningful results emerge from seemingly chaotic, nonlinear systems? Let’s try to shed light on this using two central concepts:

1. Emergence:

In neural networks, “meaning” does not arise through explicit design, but through the emergence of collective behaviors. Millions of simpler, interconnected artificial neurons create complex patterns and functions through their interactions.

Imagine a large orchestra. Each musician plays a single note, but the combination of all the notes results in a harmonious symphony. Similarly, the interactions of neurons in a network, although each performs only simple calculations, produce complex and meaningful results.

On non-linearity using the example of an orchestra:

a. Music itself can be considered a non-linear system. The pitches, rhythms and dynamics of the individual instruments interact with each other and create complex sound structures that cannot simply be deduced from the sum of the individual notes.

b. The musicians adapt their playing non-linearly to the musical impulses of their colleagues. They respond not only to individual notes, but to the entire musical situation, including the dynamics, tempos, and emotions conveyed by the music.

c. The interplay of the musicians can be regarded as an emergent phenomenon. The harmony and flow of the music are not created by explicit instructions, but by the interactions of the individual musicians, who draw on their shared musical experience and intuitive understanding of the music.

2. Learning process:

Through the learning process, especially through algorithms such as backpropagation, the connections between neurons adapt iteratively. This is how the network “forms” its internal structure to produce the desired output data.

To put it simply, the network “learns” which nonlinear transformations of the input data lead to the desired results. This is done through trial and error, adjusting the weights of the connections to minimize errors in the output.

In summary, it can be said:

  • The nonlinearity in neural networks allows them to model complex patterns and functions that linear systems cannot capture.
  • Through emergence and learning, meaningful results emerge from the interactions of simple artificial neurons without the need for explicit programming.
  • Nonlinearity is therefore not a hindrance, but rather a central factor for the performance of neural networks.
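Both points of the summary can be demonstrated together on the XOR function, the textbook example of a pattern no linear model can capture. The sketch below (plain Python; layer size, learning rate, and epoch count are illustrative choices) trains a tiny two-layer sigmoid network with backpropagation:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table
H = 4      # hidden units
LR = 0.5   # learning rate

def train(seed, epochs=8000):
    """Train a 2-H-1 sigmoid network on XOR; return its forward function."""
    random.seed(seed)
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
    b1 = [0.0] * H
    w2 = [random.uniform(-1, 1) for _ in range(H)]
    b2 = 0.0

    def forward(x):
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        return h, y

    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            d_out = (y - t) * y * (1 - y)                # output error signal
            for j in range(H):
                d_h = d_out * w2[j] * h[j] * (1 - h[j])  # error sent backwards
                w2[j] -= LR * d_out * h[j]
                w1[j][0] -= LR * d_h * x[0]
                w1[j][1] -= LR * d_h * x[1]
                b1[j] -= LR * d_h
            b2 -= LR * d_out
    return forward

# Plain gradient descent can occasionally stall in a poor local minimum,
# so we allow a few random restarts.
for seed in range(10):
    net = train(seed)
    preds = [round(net(x)[1]) for x, _ in DATA]
    if preds == [0, 1, 1, 0]:
        break
print(preds)
```

Without the sigmoid nonlinearity in the hidden layer, no setting of the weights could solve this task; with it, the error-driven weight adjustments find a solution on their own.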

Examples of the application of nonlinearity in neural networks:

  • Image recognition: Neural networks can identify complex objects in images because they can capture the nonlinear relationships between pixels.
  • Speech recognition: The ability of neural networks to understand speech relies on their ability to model the nonlinear patterns in human language.
  • Machine translation: Neural networks can translate texts from one language to another by learning the nonlinear relationships between words and sentences.

So, nonlinearity is a key feature of neural networks that allows them to solve complex problems that are too difficult for traditional, linear methods.

How does criticality arise in neural networks?

The emergence of criticality in neural networks is a complex phenomenon that is not yet fully understood. However, two essential factors play an important role:

1. The strength of the connections between neurons: The strength of the connections between neurons determines how much the activity of a neuron affects the activity of its neighbors. In critical neural networks, these connections are tuned to enable an optimal balance between local activity and global coherence.

2. The type of activation functions: The activation functions determine how a neuron reacts to the information received from its neighbors. In critical neural networks, nonlinear activation functions are often used, which allow neurons to capture complex patterns and model nonlinear relationships between the data.

What are the advantages of criticality for neural networks?

Criticality offers neural networks several key advantages:

1. Increased learning ability: In a critical state, neurons are particularly receptive to new information and can therefore learn faster and more efficiently.

2. Improved pattern recognition: The ability to recognize complex patterns is crucial for many artificial intelligence tasks, such as image recognition and speech recognition. Criticality allows neural networks to recognize these patterns more efficiently.

3. Robustness to interference: Critical neural networks are less susceptible to interference and noise in the data. This makes them more robust and reliable in real-world applications.

The remarkable cognitive performance of neural networks — whether in humans or in artificial neural networks — can easily give the impression of a “conscious” performance to a human observer.

Some reasons why this novel “neural logic” can appear to us humans to be close to consciousness:

Flexibility and context sensitivity
The ability to react in a very subtle way to contexts and nuances in the inputs and to generate appropriate output seems highly intelligent and intentional.

Emergent Creativity
Neural networks often exhibit surprising, novel patterns and solutions that are perceived by humans as “creative” — a trait we typically associate with consciousness.

Intuitive plausibility
The outputs of neural networks often have an intuitive plausibility and coherence for us, which can easily be misinterpreted as “understanding” and intentionality.

Human-like errors
Neural networks sometimes make mistakes that are similar to those of humans and have an anthropomorphic effect.

Experience-based learning
The learning process of neural networks from experiential data has a certain similarity to human experiential learning.

All of these factors can give the misleading impression of actually conscious, intentional cognition. In reality, however, this behavior “only” arises from the novel, emergent logic of collective neuronal information processing, without having to be based on a phenomenal dimension of consciousness.

The danger of an over-anthropomorphization of these impressive, but ultimately unconscious achievements of neuronal systems is therefore quite present. A sober distinction between the merely cognitive and the conscious-phenomenal level remains important.

The extent to which AI can actually act independently and autonomously depends on several factors:

Degrees of freedom in design
It seems that the degrees of freedom granted to AI systems as part of their design and programming play a crucial role. The more flexibility, adaptability and decision-making leeway they are given, the more autonomous they can potentially act.

Learning ability and generalization
Modern AI systems, especially neural networks, have the ability to independently learn patterns from data and transfer them to new situations. This allows them a certain degree of autonomy and adaptation to new environments.

Goal function and reward system
Many AI systems follow defined goal functions or reward systems. Within this framework, they can independently select actions that contribute to the achievement of goals. However, this remains limited to this target system.

Complexity and emergence
As we have discussed, completely new, emergent behavioral patterns can emerge in complex neuronal architectures. This could lead to a kind of “unforeseen” autonomy.

Consciousness and intentionality
However, a truly strong form of autonomy and self-determined action could require a certain level of awareness and intentionality, which is not given in today’s AI systems.

Overall, it can be said that the more degrees of freedom, learning ability, adaptation flexibility and complexity are given, the greater the autonomy of AI. Nevertheless, it always remains limited by the defining goals, rules and architectures.

A strong form of free will and self-determination would potentially require a form of consciousness that is at least questionable in today’s systems. So there are definitely limits to autonomy. However, the degrees of freedom of the design are decisive for the extent of independent action.
