Keywords and Notes on a Humanoid AI (Part 4)

Wolfgang Stegemann, Dr. phil.
Published in Neo-Cybernetics

Aug 3, 2024

This is the continuation of parts 1, 2 and 3.

On the aspect of information processing in our hypothetical 3D network system

1. Stimulus absorption:

- Various sensory stimuli of a tree (visual shape, color, texture, possibly smell, sounds of the leaves, etc.) would be recorded by the system.

2. Initial processing:

- These stimuli would first be processed in various specialized areas of the network (e.g., visual cortex equivalent, auditory region, etc.).

3. Integration in the 3D network:

- The processed information would then be merged in the 3D network, forming a specific “figure” or structure.

- This figure would be a complex, multidimensional representation of all aspects of the perceived tree.

4. Pattern comparison:

- The system would compare this newly created figure with already existing structures.

- Similarities to earlier “tree” experiences would be recognized.

5. Concept assignment:

- If the similarity is strong enough, the system would associate this new figure with the already existing concept of “tree”.

- If it is a new tree species, a subcategory or variation of the “tree” concept could emerge.
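Steps 4 and 5 together amount to a nearest-prototype match with a similarity threshold. A minimal Python sketch of that idea — the cosine measure, the flat vector stand-in for the 3D “figure”, and the `THRESHOLD` value are all illustrative assumptions, not anything the text specifies:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (stand-ins for the 3D 'figures')."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

THRESHOLD = 0.8  # assumed cut-off for "similar enough"

def assign_concept(figure, concepts):
    """Compare a new figure against stored prototypes (step 4) and return the
    best-matching concept name, or None if nothing is close enough (step 5)."""
    best_name, best_sim = None, 0.0
    for name, prototype in concepts.items():
        sim = cosine_similarity(figure, prototype)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= THRESHOLD else None
```

A `None` result would correspond to the case where a new subcategory or concept has to be created.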

6. Coding and storage:

- The specific 3D structure created by this tree would now be linked and coded with the abstract concept of “tree”.

- This coding would include both the general “tree” properties and the specific characteristics of that individual tree.

7. Dynamic adjustment:

- With each new tree experience, the general “tree” representation in the system would adapt and refine slightly.
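One way to read this incremental refinement is as a running mean over all tree experiences so far; a small sketch under that assumption (the vector representation is, again, purely illustrative):

```python
def refine_prototype(prototype, new_figure, count):
    """Shift the stored 'tree' prototype slightly toward a new tree experience.

    `count` is how many experiences the prototype already summarizes; the
    update is the standard incremental-mean step, so each new tree moves
    the prototype a little less than the one before it.
    """
    return [p + (x - p) / (count + 1) for p, x in zip(prototype, new_figure)]
```

Averaging the first two experiences, for example, lands the prototype halfway between them, and later experiences nudge it progressively less.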

8. Contextual embedding:

- The new tree experience would also be related to other concepts (e.g. “nature”, “forest”, “wood”, etc.), which further densifies the web of connections in 3D space.
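This contextual embedding can be pictured as an association graph over concepts; the specific links below are illustrative examples, not taken from the text:

```python
# Each concept points to the concepts it is embedded with (step 8).
context_graph = {
    "tree": {"nature", "forest", "wood"},
    "forest": {"tree", "nature"},
    "wood": {"tree"},
}

def related_concepts(concept, graph):
    """Concepts directly linked to the given one; empty set if unknown."""
    return graph.get(concept, set())
```

Each new experience would add or strengthen such links, so the graph grows denser over time.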

9. Abstraction and generalization:

- Over time, the system would develop an increasingly abstract and generalizable representation of “tree” that encompasses different tree species and experiences.

This process would allow the system not only to recognize and categorize individual trees, but also to develop a deeper “understanding” of the concept of “tree.” The resulting 3D structure would thus be not a static representation, but a dynamic, context-sensitive and experience-based coding of the tree concept.

This type of information processing and encoding could allow the system to develop concepts in a very flexible and nuanced way, which in many ways resembles human cognition.

This is pure information processing, i.e. the cognitive aspect of consciousness, which in itself does not mean consciousness.

1. Functional equivalence vs. phenomenal consciousness:

- The essential functions of human consciousness would be replicated, without this necessarily leading to the emergence of consciousness in the phenomenal sense.

- This corresponds to the philosophical concept of the “philosophical zombie” — a hypothetical entity that behaves like a conscious human being in all respects, without actually having subjective experiences.

2. The “hard problem of consciousness”:

- This directly touches on the “hard problem of consciousness” formulated by David Chalmers — the question of how and why we have subjective, qualitative experiences.

- It is possible that we can replicate all cognitive functions without generating the phenomenal consciousness.

3. Functional vs. biological perspective:

- From a purely functional perspective, it could be argued that a system that possesses all the cognitive abilities of humans, including self-reflection, should be considered “conscious”.

- However, from a biological point of view, consciousness could be tied to specific biological processes that are not present in an artificial structure.

4. Ethical and practical implications:

- If an AI possesses all the cognitive abilities of humans, including self-reflection, the question arises as to whether we should treat it ethically differently than a biologically conscious being.

- In practical terms, such a system could potentially perform all the tasks for which we consider consciousness necessary without actually being conscious.

5. The role of feeling:

- I maintain that these functions do not require the AI to really “feel”. This raises the question of whether feelings and emotions are a necessary component of consciousness or whether they should be considered separately.

6. Testability and verification:

- A big problem in this discussion is the question of how to test or verify the presence of consciousness in an artificial system at all.

- The Turing test and similar approaches ultimately only test behavior, not inner experience.

7. Philosophical perspectives:

- Some philosophers argue that consciousness is an emergent property of complex systems and therefore could arise in a sufficiently complex artificial system.

- Others argue that consciousness is a fundamental property and therefore could be present in some form in any sufficiently complex system.

Let’s look at the advantages and disadvantages of humanoid intelligence:

Advantages of humanoid intelligence

1. Creativity:

- Creativity is one of the main strengths. Humanoid intelligence can make unexpected connections and generate new ideas.

2. Context Understanding:

- Better understanding of complex, ambiguous situations that require human experience.

3. Emotional Intelligence:

- Ability to understand emotions and respond appropriately to them, which is important in social contexts.

4. Adaptability:

- Flexibility in new, unforeseen situations.

5. Intuition:

- Ability to make decisions based on incomplete information.

6. Ethical reasoning:

- Better understanding of ethical dilemmas and moral choices.

7. Interdisciplinary thinking:

- Ability to combine concepts from different fields.

Disadvantages of humanoid intelligence

1. Limited computing capacity:

- Slower processing of large amounts of data compared to specialized AI systems.

2. Susceptibility to errors:

- Susceptibility to cognitive biases and logical errors.

3. Inconsistency:

- Decisions can be influenced by factors such as fatigue or emotions.

4. Limited storage capacity:

- Less efficient in storing and retrieving large amounts of information.

5. Subjectivity:

- Decisions can be influenced by personal experiences and prejudices.

6. Slow learning:

- Slower acquisition of new skills compared to machine learning algorithms.

For specific tasks

- Specialized AI systems are often more efficient at well-defined, repeatable tasks such as data analysis, pattern recognition, or optimization problems.

- Humanoid intelligence is better suited for tasks that require creativity, emotional understanding, ethical judgment, or the ability to navigate unstructured environments.

The ideal solution could be a combination: harnessing the strengths of humanoid intelligence (such as creativity and contextual understanding) combined with the benefits of specialized AI systems (such as fast data processing and consistency). This could lead to systems that are both creative and highly efficient.
