Modeling Participant Architectural Learning in Five Trees Plus Mind-state

Geoffrey Gordon Ashbrook
Sep 17, 2023

--

2023.09.10–17 g.g.ashbrook

AI Architecture and Architectural Learning

AI Externalization and Project-Tasks

Along with the themes of Architectural-Learning (vs. model-training) and …

Let’s look at some of the details of what kinds of formal processes and sub-tasks are involved (and assumed) when asking or relying upon an AI, machine, automated, or whatever kind of participant to take up a task in a project.

Using the Woz-Office-Coffee-Bot, we can make a minimal task example that nevertheless shows how a single project-task may implicitly contain additional, hidden tasks or task-aspects, which change the learning/skill/ability requirements for who or what can accomplish that task (and H.sapiens-humans are not oriented towards thinking of the world in terms of generalized-STEM project-tasks within a value-function-meaning, non-system-collapse set of interlocking contexts). For example: getting a list of preferred teas, finding out the prices and delivery dates, making a purchasing proposal, and reporting on the estimated cost. How this actual project-task, carried out in externalized signals, steps, documentation, and actions that depend upon currently unknown information, differs from a single guesstimation of the overall outcome is crucial. Indeed, while there are cases where being able to guesstimate the overall outcome is either necessary or advantageous, there are also cases where, by virtue of trade-offs, being better at the project-management skills of the sub-parts is much more important. Where these two sets of skills and fitness profiles differ represents a very interesting fork in the road for deciding where resources for investment and development should find baskets for their eggs.
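As a rough sketch only (all names and numbers here are hypothetical, not a prescribed implementation), the contrast might look something like this in code: a single guesstimate of the overall outcome versus the same tea-purchase task externalized into documented sub-steps that depend on information not known in advance.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TeaQuote:
    """One externalized fact the task depends on (unknown until looked up)."""
    tea: str
    unit_price: float
    delivery_days: int


def guesstimate_total(request: str) -> float:
    """Single-shot guess of the overall outcome: no steps, nothing to check."""
    return 40.0  # an unexamined number


def stub_lookup(tea: str) -> TeaQuote:
    """Stand-in for the real price/delivery lookup (catalog, web search, etc.)."""
    return TeaQuote(tea=tea, unit_price=8.50, delivery_days=5)


def run_externalized_task(preferences: List[str],
                          lookup: Callable[[str], TeaQuote]) -> Dict:
    """Each sub-step leaves an externalized, documentable, testable record."""
    quotes = [lookup(t) for t in preferences]           # step: prices and delivery dates
    proposal = min(quotes, key=lambda q: q.unit_price)  # step: purchasing proposal (placeholder rule)
    estimated_cost = sum(q.unit_price for q in quotes)  # step: reported estimated cost
    return {"preferences": preferences, "quotes": quotes,
            "proposal": proposal, "estimated_cost": estimated_cost}


if __name__ == "__main__":
    print("guess only:", guesstimate_total("please order our usual teas"))
    print("externalized:", run_externalized_task(["sencha", "earl grey"], stub_lookup))
```

The point of the sketch is not the arithmetic but the shape: each sub-step can be documented, handed to a different participant, and tested, whereas the guesstimate cannot.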

Architectural Learning vs…

Beyond stating and conceptually illustrating that ‘a task’ may appear to be the same as another but actually be quite another thing altogether, let us try to dip a bit into how one might model such a practical course of action.

Five Trees

1. In / Input / Context / Instructions

2. Out / Output / (Analysis type and details)

3. Content (subject-matter)

4. Process / Project-Tasks

5. Tests / Feedback

Plus:

Mind-State (or some kind of machine-state)
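As one possible way to hold these together in practice, here is a minimal sketch (hypothetical field names, assuming a Python record is a suitable container) of a single structure bundling the five trees plus mind-state for a given project-task:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ProjectTaskFrame:
    """Hypothetical record bundling the five trees plus mind-state for one task."""
    in_tree: Dict = field(default_factory=dict)             # 1. input, context, instructions
    out_tree: Dict = field(default_factory=dict)            # 2. output, analysis type and details
    content_tree: Dict = field(default_factory=dict)        # 3. subject-matter content
    process_tree: List[str] = field(default_factory=list)   # 4. process, project-task steps
    test_tree: List[str] = field(default_factory=list)      # 5. tests and feedback
    mind_state: Dict = field(default_factory=dict)          # plus: memory, goals, time, loops
```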

Notes

1. As with definition-behavior studies and object relationship space architecture studies, here there is no illusion of or allusion to a neat singular tidy solution.

2. Two sections, content and process, may largely overlap, but case by case a practical answer to the questions-that-need-answering can hopefully be found.

3. “Participant” vs. AI:

While this may be an erroneous quagmire of semantics, the idea is to generalize task-doing regardless of the rank, status, ID, personhood-ness, or whatever else (sniffing dogs?) of who-or-what does the task. In this discussion, AI is the focus…but so is AI and H.sapiens-human team cooperation…and we still need to get around to describing H.sapiens-human learning for tasks, which we have avoided doing for a few million years.

4. In: For a given task, assuming there is no input-tree or pipeline may be a large assumption, perhaps reminiscent of the ENIAC-paradox: given a sufficiently structured input, a problem could be done even on the ENIAC, but that assumption fits squarely in the feasibility-of-method aspect of computer science. Unstructured inputs are a stubbornly difficult problem, yet subsymbolic (including generative) AI, even if unable to count, is a powerful tool for this task. As a whole, when put into a full project context, in the larger balance of what is done by what kind of deep-AI or GOFAI-AI, deep-AI may be used more intensively on the input-tree side of the task (and in some output-tree reporting, depending on what that reporting is), while the content-tree and testing-tree locations are already known to be core-basal, highly-conserved STEM areas such as known math calculations (see the sketch at the end of these notes).

5. The term “Learning” here will be used for “architectural learning” as an abbreviation of a larger set of interrelated and largely synonymous (and, to be honest, largely not fully understood) terms: training, learning, skills, abilities, proficiency, performance, achievement, literacy, memory, mind, problem-solving ability, cognition, calculation, benchmarks, baselines, etc. Listing out all the terms repeatedly (assuming such a list of related terms is finite, which it may not be, and where each term is no doubt controversial and hotly contested with much ado in its own right) would make even a simple sentence very hard to follow. Depending on parts of speech and common phrases, other phrasings may be used as well.

Even for the same task, model-learning and architectural-learning may substantially differ, as may the task process in one case and the task process in a seemingly synonymous case.
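As a rough, hedged illustration of the division of labor described in note 4 (all function names here are hypothetical placeholders, not a prescribed implementation): a subsymbolic step structures the unstructured request, while the content-tree and testing-tree work stays in plain, well-known calculation.

```python
def subsymbolic_extract(free_text: str) -> dict:
    """Placeholder for a deep-AI step that structures an unstructured request.
    In practice this might be a language-model call; here it is stubbed."""
    return {"quantity": 3, "unit_price": 8.50}


def conserved_calculation(structured: dict) -> float:
    """Core-basal, highly-conserved math: a known calculation, no learning needed."""
    return structured["quantity"] * structured["unit_price"]


def test_calculation() -> None:
    """Testing-tree check on the deterministic part, independent of the deep-AI part."""
    assert conserved_calculation({"quantity": 2, "unit_price": 1.5}) == 3.0


if __name__ == "__main__":
    order = subsymbolic_extract("Could you get three boxes of the usual sencha?")
    print(conserved_calculation(order))
    test_calculation()
```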

Area Details

1. In

1.1 The ENIAC Paradox: Given enough structured input, even the original ENIAC might be able to do the computation, but given lingering ambiguities in input clarity, the same problem scope-creeps into requiring exponentially more resources.

1.2 Hofstadter’s Gap

1.3 Context: Structured-ness of input, Instructions.

1.4 CLI vs. fancier UI for the same task

2. Out

2.1 Classification

2.2 Analytical Processes and Problem Solving

2.3 Subroutines and AI-ALUs

2.4 Process-Step-Analysis Reporting

2.5 Protocols and Requirements, Externalization

3. Content

3.1 categories of Types of Systems

3.2 Object Relationship Spaces

4. Process

4.1 Cutups

4.2 Herding Cats

4.3 Following Breadcrumbs

5. Tests

5.1 A Tree of Tests

5.2 Unit Tests

5.3 Fallacy Criteria
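To make the testing-tree items above slightly more concrete, here is a minimal sketch using Python’s standard unittest module (the functions and checks are hypothetical examples, not a fixed protocol): a unit test on the cost-report step plus one fallacy-style criterion that rejects impossible inputs.

```python
import unittest


def estimated_cost(quotes):
    """Sum unit prices from the externalized quote step (see the earlier sketch)."""
    return sum(q["unit_price"] for q in quotes)


def estimated_cost_checked(quotes):
    """Fallacy-style criterion: refuse inputs that cannot be real before summing."""
    for q in quotes:
        if q["unit_price"] < 0:
            raise ValueError("negative price is not a possible quote")
    return estimated_cost(quotes)


class TestCostReportStep(unittest.TestCase):
    def test_estimated_cost(self):
        quotes = [{"tea": "sencha", "unit_price": 8.5},
                  {"tea": "earl grey", "unit_price": 6.0}]
        self.assertAlmostEqual(estimated_cost_checked(quotes), 14.5)

    def test_rejects_impossible_price(self):
        with self.assertRaises(ValueError):
            estimated_cost_checked([{"tea": "sencha", "unit_price": -1.0}])


if __name__ == "__main__":
    unittest.main()
```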

Mind State:

1. Memory

2. AI Mind-state & Project Autonomy

- Goals

- Tasks

- Tools

- Questions

- Overall Priorities and Values

3. Time:

- “What time of day is it?”

4. Loops, Memory, Tasks, Resources

~REPL loop:

to remember that you can look things up online.

short- and long-term layers:

a two-layer loop: usually a shallow loop, periodically a longer loop (see the sketch after this list)…

5. Decentralized Mind-State: network-loops
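A minimal sketch of the two-layer loop mentioned above (all names hypothetical, assuming lists stand in for real memory stores): a shallow pass on every iteration and a periodic longer pass that consolidates short-term memory into longer-term memory.

```python
def shallow_step(short_term_memory: list, step: int) -> None:
    """Quick REPL-like pass: act on the immediate task, note it in short-term memory."""
    short_term_memory.append(f"step {step}")


def long_step(short_term_memory: list, long_term_memory: list) -> None:
    """Periodic longer pass: consolidate, revisit goals, remember you can look things up."""
    long_term_memory.extend(short_term_memory)
    short_term_memory.clear()


def run_two_layer_loop(iterations: int = 10, long_every: int = 4) -> list:
    """Usually the shallow loop; every `long_every` passes, the longer loop."""
    short_term, long_term = [], []
    for step in range(iterations):
        shallow_step(short_term, step)
        if step % long_every == long_every - 1:
            long_step(short_term, long_term)
    return long_term


if __name__ == "__main__":
    print(run_two_layer_loop())
```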

Learning, Skills, Ability: Not a 6th Tree

In a sense there is, to understate the matter, another related tree here: learning, skills, and abilities: what can your system do? However, I recommend that we restrain ourselves from merely collecting a sack full of trees just because we are tree-collectors, and pause to remember our goal: modeling learning.

In some other contexts architectural learning may well be a tree, but in this context we are defining learning, for a specific project-task, as the other five trees plus mind-state. Though this could be a strange exception, we probably, as usual, want to avoid adding a ‘tree of all possible architectural-learning forms’ to its own definition, as amusing as positive-feedback-loop explosions can be.

Questions

  • General System Collapse
  • Comparing with a Non-Architecture Baseline
  • primary task example: AI doing math problems
  • secondary: ethics in projects
  • participants: H.sapiens-humans, dogs, pigeons

Also see

- AI ALU Corpus Callosum / Rapunzel’s Corpus Callosum

- AI Bodies & Minds

- Modular Problem Spaces

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
