Intuition Machine

Artificial Intuition, Artificial Fluency, Artificial Empathy, Semiosis Architectonic

The Semiotic Derivation Behind Quaternion Process Theory of Cognition

--

Introduction:

I argue that current metaphors for understanding the mind (e.g., the mind as a machine) are inadequate for capturing the complexities of human thought and communication, especially when considering advanced AI models. Instead, I propose a language-based, process-oriented metaphor, drawing on concepts from semiotics, logic, and dynamical systems theory. The goal is to develop a more robust understanding of AGI development, moving beyond a simple spectrum of “better” AI toward a path that defines the specific capabilities that are rising.

Train of Thought

Let me note that one might think of this process as being derived from the ideas of thinking fast and slow, combined with left-brain and right-brain thinking, which would also yield a four-quadrant scheme. However, this is not the approach I take.

Instead, my approach is based on the ideas of Charles Sanders Peirce, the American logician and philosopher (1839–1914). Peirce’s framework for organizing all knowledge, which he called the “architectonic,” comprises three parts:

Speculative Grammar: This is the study of signs and their properties. Peirce identified ten classes of signs, but I focus on three (a minimal code sketch of this taxonomy follows the list below):

  • Iconic signs, which resemble the things they represent
  • Indexical signs, which point to something
  • Symbolic signs, which are based on convention
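
As a concrete handle on this taxonomy, here is a minimal sketch in Python. The type names and tagged examples are my own illustrative choices, not Peirce’s.

```python
from enum import Enum, auto

class SignType(Enum):
    ICONIC = auto()     # resembles what it represents
    INDEXICAL = auto()  # points to what it represents
    SYMBOLIC = auto()   # stands for its object purely by convention

# Hypothetical examples, each tagged with the sign relation it relies on.
examples = {
    "portrait photograph of a person": SignType.ICONIC,
    "smoke rising from a forest (indicates fire)": SignType.INDEXICAL,
    "the English word 'fire'": SignType.SYMBOLIC,
}

for description, kind in examples.items():
    print(f"{kind.name:9s} -> {description}")
```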

Inference: This refers to the three types of inference that form a cycle of creativity and innovation (a toy walkthrough of this cycle follows the list below):

  • Induction, which leads to the development of intuition
  • Deduction, which expands upon intuition to create new conclusions
  • Abduction, which is used to create new theories
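
The following is a toy sketch of that cycle under my own simplified reading: induction generalizes a rule from observations, deduction applies the rule to predict a new case, and abduction proposes a revised hypothesis when a surprising case appears. All data and rules here are illustrative assumptions.

```python
observations = [2, 4, 6, 8]               # past cases we have seen

# Induction: generalize a tentative rule from the observations.
rule = lambda x: x % 2 == 0               # "every case is even"
assert all(rule(x) for x in observations)

# Deduction: apply the rule to a new case and predict its property.
new_case = 10
print("deduction predicts even:", rule(new_case))

# Abduction: a surprising case arrives; propose a hypothesis that would explain it.
surprise = 9
if not rule(surprise):
    revised_rule = lambda x: x > 0        # hypothesis: "cases are positive integers"
    print("abduction proposes a broader hypothesis; it covers the surprise:",
          revised_rule(surprise))
```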

Rhetoric, Discourse, and Methodics: This is a category that Peirce identified but did not fully develop. I identify three elements within this category, which I call “calculative processes”:

  • Dynamic processes, which have an attractor or a limit cycle, such as hurricanes or deep learning systems. These systems reach a stable orbit through iteration but can converge or diverge depending on their parameters (see the iteration sketch after this list).
  • Code Duality processes, which combine two processes, one discrete (fast, fluent) and one continuous. The combination gives greater permanence, similar to how DNA and cells work together.
  • Tensegrity processes, which require three elements, much as life itself requires DNA, proteins, and lipids. Tensegrity structures appear to float, which makes them a good metaphor for balance in life. These structures combine two Code Duality processes.
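
Here is a minimal numerical sketch (my own, not from the presentation) of the dynamic-process idea: the same kind of iteration can converge to a fixed-point attractor, settle into a limit cycle, or diverge, depending on a single parameter. The gradient-descent example echoes the deep-learning reference above.

```python
def iterate(step, x0, n=40):
    """Apply a one-step update map n times and return the tail of the trajectory."""
    x = x0
    trajectory = []
    for _ in range(n):
        x = step(x)
        trajectory.append(x)
    return trajectory[-4:]

# Gradient descent on f(x) = x^2: x <- x - lr * f'(x) = (1 - 2*lr) * x
converging = iterate(lambda x: (1 - 2 * 0.1) * x, x0=1.0)   # lr = 0.1 -> fixed point at 0
diverging  = iterate(lambda x: (1 - 2 * 1.1) * x, x0=1.0)   # lr = 1.1 -> blows up

# Logistic map x <- r * x * (1 - x): at r = 3.2 it settles into a period-2 limit cycle.
limit_cycle = iterate(lambda x: 3.2 * x * (1 - x), x0=0.3)

print("attractor   :", converging)   # values shrink toward 0.0
print("divergence  :", diverging)    # values grow without bound
print("limit cycle :", limit_cycle)  # alternates between ~0.513 and ~0.799
```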

I use these three calculative processes to move from a fast empathy system, to a code-duality process, to a tensegrity structure. Tensegrity requires two balanced forces, compression and tension, which together create a structure that is both rigid and flexible. I contend that this kind of structure, with its precision and adaptability, is what AGI needs.
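
The following toy physics sketch (my own simplification, not from the presentation) illustrates why pre-tension yields rigidity without sacrificing flexibility: a node at the midpoint of a tensioned cable resists transverse displacement with a stiffness proportional to the tension, yet it still deflects smoothly under load. The rigid anchors stand in for the compression members of a real tensegrity structure.

```python
import math

def restoring_force(tension, span, y):
    """Transverse restoring force on a node at the midpoint of a cable of total
    length `span`, held at (approximately constant) pre-tension, displaced by y."""
    half = span / 2.0
    # Each half of the cable pulls the node back with the vertical component of its tension.
    return 2.0 * tension * y / math.sqrt(half**2 + y**2)

span, y = 2.0, 0.05                      # small transverse displacement
for tension in (1.0, 5.0, 10.0):
    stiffness = restoring_force(tension, span, y) / y
    print(f"pre-tension={tension:5.1f}  transverse stiffness ~ {stiffness:5.2f}")
# Higher pre-tension -> a stiffer ("rigid") response, while the node still
# yields smoothly to any transverse load ("flexible").
```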

Key Themes and Concepts:

  • Limitations of Existing Metaphors: The presentation critiques traditional metaphors for mind, such as “mind as a steam engine” or “memory as a hard drive,” noting their limitations in explaining the creative and fluid nature of human thought. The presenter states: “for centuries uh we relied on metaphors that were um like machine so we would model things our mind our thoughts like uh a steam engine our memories like hard drives reasoning like logical algorithms but we’re seeing the limitations of that model.”
  • Language as a Living Process: The core idea is to model understanding based on the dynamics of language rather than a static machine paradigm. The presenter proposes: “a metaphor language is living process…we should focus on language as a living process.” This serves as a “bridge object” for understanding complex cognitive processes.
  • Accelerating AGI Development: The presentation observes an accelerating curve in AI development, moving from “artificial intuition” (deep learning) to “artificial fluency” (Transformer models) and now toward “artificial empathy.” I note that each stage has arrived after a shorter interval than the one before it, suggesting exponential growth.
  • Two Axes of Processing: I propose a framework for AI capabilities based on two axes (a toy mapping of the resulting quadrants appears in the sketch after this list):
      • Processing: Parallel vs. Meta-reflective (Sequential). This captures the difference between fast, unconscious processing and slow, deliberate thought.
      • Representation: Iconic/Analogical vs. Indexical/Clausal. This distinguishes between representations that resemble their referents and those that point to or imply other concepts.
  • The Evolution of AI Capabilities:
      • Artificial Intuition (Deep Learning): Characterized by fast, parallel processing and iconic representations.
      • Artificial Fluency (Transformer Models): Fast, fluent processing of language.
      • Artistic Capabilities (Diffusion Models): Result from the overlap of deep learning and language models.
      • Creative/Critical Thinking: An overlap between fast and slow fluent capabilities.
      • Persuasive, Creative, Strategic Capabilities: Result from the addition of slow empathy.
  • Time Travel Development: A unique concept in which AI is trained on future-oriented artifacts, such as programming languages, that are not present in biological systems. The presenter highlights this as an unusual approach: “what we’re actually doing is we’re taking something in the future and training these new kind of um agents so I call that time travel development.” I note that a common theory holds that embodiment is necessary for causal reasoning, but that AI bypassed this requirement through training on programming languages.
  • Slow Empathy: This is posed as a new frontier for AI, characterized by reflective, analogical thinking. The presenter poses the question: “what is slow empathy so slow empathy must be something that must be reflective at the same time it’s analogical iconic.” The “aha” moments experienced during research or personal reflection are posited as possible examples.
  • Pattern Languages for Abstraction: The presentation proposes using pattern languages to capture tacit knowledge and develop new abstractions in AI. The presenter notes that “the whole idea of pattern languages is to basically take tacit knowledge in a field and make it explicit.”
  • Peirce’s Triadic Framework: The presenter references Charles Sanders Peirce’s work to provide a logical framework based on three aspects:
      • Speculative Grammar: Analyzing the properties of different signs (iconic, indexical, symbolic). The presentation notes: “in semiotics there are three kinds of symbols or not three kinds of signs I’m sorry there is the iconic sign which is something that resembles something there’s the indexical sign I mentioned this earlier something that points to something and there is a symbol what’s a symbol something that is just by convention.”
      • Inference: Cycles of induction, deduction, and abduction.
      • Rhetoric/Methodics: Focuses on different kinds of processes, including:
          • Dynamic Processes: Attractors and limit cycles.
          • Code Duality Processes: Combining discrete (fast fluent) and continuous processes.
          • Tensegrity Processes: Balancing tension and compression forces, yielding structures that are both rigid and adaptable. The speaker states: “you have these two forces that are in Balance to give you a structure that is both rigid and flexible adaptable and you want that structure in ag you want the the Precision the rigid rigidity and accuracy in AGI at the same time you want the adaptability.”
  • Cognitive Tensegrity: The overall model combines aspects of fast empathy, code duality, and tensegrity to create complex, adaptable cognitive systems.
  • Critique of the AI “Spectrum”: The presentation critiques the common idea of a linear spectrum of ever “better” AI, proposing instead a path that names the specific capabilities that are rising.
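
To make the two-axes framework above concrete, here is a small Python sketch that looks up a capability from a processing mode and a representation mode. The quadrant labels are my own reading of the presentation, not the presenter’s exact mapping; the fourth quadrant in particular is an assumption based on the “slow fluent” reference above.

```python
# Axes: processing mode (parallel vs. meta-reflective/sequential)
#       representation mode (iconic/analogical vs. indexical/clausal)
QUADRANTS = {
    ("parallel",   "iconic"):    "artificial intuition (deep learning)",
    ("parallel",   "indexical"): "artificial fluency (transformer models)",
    ("sequential", "iconic"):    "slow empathy (reflective, analogical thinking)",
    ("sequential", "indexical"): "slow fluency (deliberate, critical reasoning)",
}

def capability(processing: str, representation: str) -> str:
    """Return the capability at the intersection of the two axes."""
    return QUADRANTS[(processing, representation)]

print(capability("parallel", "iconic"))    # -> artificial intuition (deep learning)
print(capability("sequential", "iconic"))  # -> slow empathy (reflective, analogical thinking)
```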

Implications and Potential Applications:

This framework provides a detailed way of analyzing current AI capabilities and predicting new ones. By understanding the interplay of different processes and representation methods, researchers can potentially design more robust and creative AI systems, and develop better frameworks for thinking about AGI. Furthermore, the emphasis on slow empathy and abstraction suggests potential directions for future AI development.

Points for Discussion:

  • The nature of “slow empathy”: How can this be defined and implemented?
  • The use of pattern languages: Can this really be implemented?
  • The practical implications of time travel development: What are the ethical considerations?
  • The validity of Peirce’s model: How can we apply it to AI development?
  • The relationship between the abstract and the concrete: How do we reconcile the idealized models with real-world implementations?
  • Symmetry and the holistic layer: Is there more to be said about it?

Conclusion:

“Quaternion Process Theory” offers a novel and complex framework for understanding cognition and AGI, moving beyond traditional metaphors. It suggests a path for AI development that is not just about linear improvements, but rather about achieving a deeper understanding of the interplay between different cognitive processes. This could be transformative in developing new AI models.

Presentation video: https://youtu.be/k0X1eTZL4TU?list=TLGGxcFj_GRg-HQwNzAyMjAyNQ

--

Published in Intuition Machine by Carlos E. Perez