AI Corpus Callosum, Rapunzel’s Corpus Callosum

Geoffrey Gordon Ashbrook
5 min read · Sep 17, 2023

--

~2023.08.16 g.g.ashbrook

Corpus Callosum for AI Architecture

Context: ‘a union of opposites’ vs. ‘externalized distribution’

Strength of Bridging:

  • Symbolic / Sub-symbolic
  • right brain / left brain
  • system 1 / system 2
  • All the flexibility of question-interpretation and all the speed of ALU computation.
  • ‘good at identifying things’

A main theme or goal in the western tradition of alchemy was said to be the union of opposites.

The corpus callosum question is infinitely open-ended in time, in terms of what methods and trade-offs may be employed, in specific instances and in their generalizations, for knitting together (pick your phrase) system-1 non-analytical and system-2 analytical processes, patterns, and aspects of a system, task, etc.

There is no single solution to speak of, but a hauntingly broad topic to make less-unintelligible.

Mottainai Invention: Philosophy, Practicality, and Efficiency

While an AI can write a script, or create layers of software, to solve a problem whose solution the AI is not suited to generatively guesstimate, it is ‘not efficient’ to create, over and over, new solution-architectures for the same sub-operations that it has used and will routinely use, often taking time and experimentation to get each new approach (to the same, or essentially the same, problem) correct.

By ~analogy, it would be as if a CPU needed to create a new ALU by an evolutionary process literally every time it carried out a calculation, then threw it away and started tabula rasa with the next arithmetic operation; or as if a compiler did not have standard ways of handling the same routine (“subroutine”) processes that it handles every time. Zen mind?
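The ‘don’t rebuild the ALU every time’ idea can be sketched as caching a generated solution the first time it is produced and reusing it afterward. This is a minimal illustration only, with all names hypothetical:

```python
# Minimal sketch: cache a generated solution instead of
# regenerating it for every call (all names hypothetical).

solution_cache = {}

def expensive_generate(task_name):
    """Stand-in for a slow, trial-and-error solution search."""
    # Here the "generated solution" is just a simple function.
    if task_name == "add":
        return lambda a, b: a + b
    raise ValueError(f"no solution found for {task_name!r}")

def solve(task_name, *args):
    """Reuse a cached solution; generate only on the first request."""
    if task_name not in solution_cache:
        solution_cache[task_name] = expensive_generate(task_name)
    return solution_cache[task_name](*args)

print(solve("add", 2, 3))  # generates the solution once
print(solve("add", 4, 5))  # reuses the cached function
```

The second call skips the expensive generation step entirely: the ALU, once built, stays in the toolbox.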

What is user-friendly for AI?

How does AI like to approach problems?

The Recipe Question: Locations in a Process

How does a given AI prefer to receive and accept parts of a process? While in some ways an LLM AI trained on H.sapiens-human language is just like an H.sapiens-human, in other ways some tasks are easier and some harder: generating and debugging regex expressions, for example, is easier for an AI, while counting a list of names is more difficult for an AI (in 2023). Tasks deeper in a Kasparov event-horizon context are more difficult, but ‘tactical’ single-layer tasks can also be similarly difficult for AI and H.sapiens-humans (e.g. whether to start from zero or one when getting items from some data structure while lots of other details are buzzing around).

In some cases a machine will prefer to exchange information in a reverse-Polish-notation stack process; in other cases the machine prefers to handle natural-language strings of the same serial expressions and equations. Case by case, some answers may be surprising, and perhaps surprisingly interdigitated with the question of input structure (see the five trees) and the ‘state’ awareness of subroutine options.
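To make the two exchange styles concrete, the same expression can be passed either as an infix string or as a reverse-Polish-notation token stream evaluated with an explicit stack. A minimal sketch (function names hypothetical):

```python
# Sketch: the same expression, "3 + 4 * 2", exchanged as a
# reverse-Polish-notation stack process instead of an infix
# natural-language-style string.

def eval_rpn(tokens):
    """Evaluate reverse Polish notation with an explicit stack."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # operands come off the stack
            a = stack.pop()   # in reverse order
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "3 + 4 * 2" in RPN form (no parentheses or precedence needed):
print(eval_rpn(["3", "4", "2", "*", "+"]))  # 11.0
```

The stack form needs no precedence rules or parentheses, which is exactly why some machines ‘prefer’ it; the infix string carries the same content but demands more interpretation.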

Alice: “What size hex-wrench does this match in your toolbox?”

Bob: “I have a toolbox?”

The recent history of this topic focuses on two more technical areas:

in the neurosciences, System 1 and System 2, from Kahneman and Tversky;

in computer science, the perhaps incorrectly named symbolic and sub-symbolic processes. Neither of these pairs may be ‘correct’ in completely matching this problem space, but the general problem area likely at least partially overlaps.

The earlier history of the topic is murkier, with early-science alchemy devoting much attention to the union and interplay of opposites in a more philosophical way than was popular after the Enlightenment, and, in the East, an in/yo (yin/yang) interplay of opposites.

Right brain and left brain was popular for a while, including Julian Jaynes’s fascinating, Hamlet’s-Mill-style mythology-parsing exploration and the eternal “Drawing on the Right Side of the Brain” (I heard the first edition was best), but I am not sure of the current state of the science on this (right and left).

Externalization et al

For example, there are aspects elucidated by the context of Object-Relationship-Space-based AI-architecture and OS studies which do bring in other contexts, such as ‘externalization’ and multi-participant project elements, perhaps including distributed, parallel, concurrent, and asynchronous aspects that are usually absent from the dichotomy contexts above.

A guiding context here is practicality and efficiency.

Re-use of modular functionality is a core theme in computer science.

Process Memory, Algorithmic State

This may also highlight another aspect or context of memory or state in systems: whether a system can remember, or not remember, procedures and functions for operations.

Improvement

A side topic here is finding better ways, processes, heuristics, and algorithms for doing things. The history of sorting algorithms may be a mini-case-study in this overall phenomenon of improvement.
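As a tiny version of that sorting mini-case: a quadratic-time insertion sort and Python’s built-in Timsort (O(n log n)) produce identical results at very different costs, which is the improvement phenomenon in miniature. A sketch, not a benchmark:

```python
# Sketch of algorithmic improvement: O(n^2) insertion sort vs.
# Python's built-in Timsort (sorted); same answer, very
# different scaling behavior.

def insertion_sort(items):
    """Classic quadratic-time insertion sort (copy, not in place)."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]  # shift larger items right
            j -= 1
        result[j + 1] = key
    return result

data = [5, 2, 9, 1, 5, 6]
assert insertion_sort(data) == sorted(data)  # same result, different cost
print(insertion_sort(data))  # [1, 2, 5, 5, 6, 9]
```

Decades of such same-answer, lower-cost replacements are what the history of sorting documents.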

Overall Process

1. The ability of the AI not only to write functions but to call on a library of existing functions/subroutines.

E.g. when handling different object/relationship types and/or categories of types of systems (as in definition-behavior studies).

The goal is not just to ask the AI to use a tool, but to create a framework in which it can select an appropriate tool on its own based on what tasks it is set (see Mind-state for being persistently aware of, and sharing, available tools, whether provided or self-archived). This again can be done either with a ‘symbolic,’ hard-coded ‘GOFAI’ module or system, or a ‘classify the task’ sub-symbolic machine-learning module could be used to try to accomplish this, as yet, speculative goal.
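A hard-coded, ‘GOFAI-style’ version of such a framework could be a registry that maps a classified task type to an existing tool, with the rule-based classifier swappable for a sub-symbolic one later. A speculative sketch, all names hypothetical:

```python
# Speculative sketch: a registry of existing tools plus a
# hard-coded task classifier, so the system selects a tool
# on its own rather than regenerating one (names hypothetical).

TOOL_REGISTRY = {
    "count_items": lambda items: len(items),
    "sum_numbers": lambda items: sum(items),
}

def classify_task(description):
    """Trivial rule-based ('symbolic') classifier; an ML
    classifier could replace this step without changing run_task."""
    text = description.lower()
    if "count" in text:
        return "count_items"
    if "sum" in text or "total" in text:
        return "sum_numbers"
    raise ValueError(f"no registered tool for: {description!r}")

def run_task(description, data):
    """Select an appropriate tool from the registry and apply it."""
    tool = TOOL_REGISTRY[classify_task(description)]
    return tool(data)

print(run_task("count the names in this list", ["Ada", "Bob", "Cy"]))  # 3
print(run_task("total these values", [2, 3, 5]))  # 10
```

The registry is the persistently known ‘toolbox’; only the classifier needs to change for the sub-symbolic variant.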

Re-use of modular functionality is a core theme in computer science:

  • systematizing types of questions
  • properties of types of questions
  • target: a set of questions that can be approached with an established set of tools and methods
  • select a target space
  • ways of error-checking
  • not reinventing the wheel every time

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this GitHub repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
