Calculating Tea for AI: Advocating for Architectural Learning

Geoffrey Gordon Ashbrook
35 min read · Oct 1, 2023


1+1+1+1+1

2023.09.18–30 Geoffrey Gordon Ashbrook

1+1+1+1+1: AI and Five Cups of Tea, and More on Architectural Learning

Moving from not being able to do something to being able to do something is the topic we are talking about here. I will use 'learning' as a hopefully understandable, if odd, word to describe this. The words most people use to talk about skills and learning are a bit fragmented: there is no over-all popular vocabulary for this topic, so some words are only used in some situations. This traps us in silos of separate topics, which won't do at all, I'm afraid. We need a more ~general way to talk about skills and learning clearly. At first, talking about doing a simple task may seem very easy, yet even a simple task can quickly become not so easy. As the saying in AI research goes: 'Easy things are hard.' (But at least it's fun!)

We cannot now say what the best terms, descriptions, nicknames, and jargon will be, nor the key precise concerns at each future time, but we can start piecing together parts of this landscape and describing them as best we can.

With the term "Architectural Learning" I am going to try to describe, and argue for the importance of, a context for training the skills and abilities of a ~system in a still measurable and clearly definable way that is a complement to but not quite the same as the more traditional view of a skill or task in isolation. There are many facets of this isolation, including the context, but we will focus on the isolation of implementation: how that task might be done.

We will see how 'a task,' one which we usually think of as being 'just that task,' somehow spreads out into quite different tasks. And in the same way, the learning-training and skill-building needs will shift as well. For example, some simple version of a task, and the "learning" needed to gain the ability to do that task, might be one straightforward thing for, say, a computer function to do. But if that sort-of-the-same task is moved into a project-task-context, all of a sudden that same task, and doing that task, and learning to be able to do the task, shifts and becomes not the same as in the original simpler context.

In talking about “architectural learning” we will discuss the space of wholes and parts in architectures in, and for, applied projects, as opposed to cleanly defined or purely abstract functions, as in a server-api (or program cli).

Part of the curiosity of the conjunction and disjunction between logic, math, and computer science, is that often there are no data-types in traditional logic and math, and so you can chain together and say such is such just by declaring it so. But in real projects you often run into formal and formality issues that assert themselves into those clean chains of logic and math and say “Hold on, what exact data type is that? That’s not the same data type as this other thing over here, so you just can’t do that! And you cannot move ahead until you fix it or find a way around.” This is one small example of how what may start as a very simple process can become much less simple.
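
To make that concrete, here is a tiny hedged sketch in Python (the values and variable names are purely hypothetical) of how a typed program interrupts a chain of additions that pure math would simply wave through:

```python
# A minimal sketch (hypothetical values) of how data types interrupt
# a "clean chain" of additions that pure math takes for granted.

order_count = 1            # int
order_text = "1"           # str, e.g. parsed from an email
order_fraction = 1.0       # float, e.g. from a form field

# In pure math, 1 + 1 + 1 is just 3. In a real program this line fails:
# total = order_count + order_text + order_fraction
# TypeError: unsupported operand type(s) for +: 'int' and 'str'

# The project has to decide, explicitly, how to reconcile the types:
total = order_count + int(order_text) + int(order_fraction)
print(total)  # 3
```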

‘People’ (perhaps H.sapiens-humans specifically, or perhaps minds more generally, which would be interesting) sometimes like to think of tasks and problem-spaces in a fictional, so-called ‘symbol-manipulation’ space. And the history of trying to connect patterns in abstraction and patterns in real life events has been quite an adventure (one that we are very much still in). This is one of many ‘elephants in the room’ which will greatly shape what is done around this topic.

In ‘pure’ math-logic and often or sometimes computer-science space, it does not matter how many pieces you break something into, or if you break it into any. But in real projects, projects with what I am calling ‘architectures’ here, it makes all the difference in the world.

1+1+1+1+1 = 5

There are many functions and devices that can output a correct addition answer given the input of five 1’s, but how can we use those functions in an architecture project-task space of problems, rather than in a pure-math-logic space of problems?

In ‘pure-land,’ or ‘game-land,’ or ‘symbol-manipulation-land,’ there are, or sometimes can be, no types of 1s. There are no places where 1s are. There is no memory-ownership of 1. There is no memory-safety-ness of 1. There is no auditing of 1. There is no plausible statistical anonymity of 1. There are no court-orders to publish the identity of 1. There are no data-hygiene procedures to prevent 1 from interacting with other things in undesirable ways. There is no dynamic inventory status of 1. There is no schedule around 1. This is something of an analogy because most of those examples come from the difference between ‘pure math’ vs. math in computer-science. And while the path was sometimes bumpy, after a few hundred years of automating math steps we are now generally confident in thinking that a computer function and pure math function are very close relatives, and usually close enough to be considered the same kind of thing. Here we will be taking another step, looking at how both pure-math and computer-science-math get applied in a project-task-process space of problems. In particular we will be looking at cases where what is doing that task may be not our familiar simple computer function, but an ‘architecture’ in a space of participants carrying out that project, and working to get the job done properly.

Let’s say there are six participants, all of whom have some or none of these 1s, and these participants all have tasks such as knowing what the overall task is, or auditing the task, or reporting the final result.

On one level we still have the same problem 1+1+1+1+1 = ?, on the other hand the sum-the-numbers task is not quite the same and we have more tasks as well. What kinds of tasks are these? Are these just more steps of the same general type, ones that a quick small work-around can account for? Let’s take a look at some examples.

“Woz-Bot, how many cups of tea will you make?”

Let’s go back to the wonderful ‘Woz-test’ for AI, sculpted by the legendary Steve Wozniak, who conjectured that a good test for AI is having it make a cup of coffee. “Easy things are hard.”

Part of what I love about the Woz test is that it contains a lot of built-in flexibility, so you can make less challenging or more challenging versions of it (which no doubt will make great real-tests for AI systems).

The modified version of a Woz-Test that we will use here envisions a Woz Office Coffee (or Tea) Bot, who makes sure that people have something to drink.

One of the first things that may come to mind might be that now ‘1’ is not completely abstract and the same as every other 1. It would be a lucky chance if everyone wanted the same exact cup of tea, and not at all something that would be safe to assume.

Not only do we have five, potentially, different orders for tea, it is very likely that at some point these requests will all take different forms: by voice, by text, by hand-written-note, by picture, by email. Before, we only had to worry about whether this or that ‘1’ was a character or an array or a float or what sized (or signed) integer (and just that set of options is often quite a maze requiring lots of mistakes and testing…and escape-characters). But now, ‘type’ has gotten a bit out of hand. We cannot simply say 1 audio stream + 1 mp3 file + 1 mp4 video + 1 email + 1 text message + 1 api json-object = five identical cups of black tea with cream and one lump of salt…and expect any simple function to be able to make any sense of that at all. And in real life planning, coordinated decisions are scattered over time, full of people changing their mind, misunderstanding the question, being out of touch, etc. Oh dear.
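
As a rough illustration only (all function and field names here are hypothetical), one way such a normalization step might be sketched is to reduce every incoming request, whatever its format, to one canonical order record before any counting happens:

```python
# A rough sketch (all names hypothetical) of reducing requests arriving
# in different forms to one canonical order type before counting them.
from dataclasses import dataclass

@dataclass
class TeaOrder:
    person: str
    tea: str
    milk: bool = False

def parse_text_message(msg: str) -> TeaOrder:
    # Toy parser; a real system would need far more robust handling.
    person, tea = msg.split(":")
    return TeaOrder(person=person.strip(), tea=tea.strip())

def parse_api_json(payload: dict) -> TeaOrder:
    return TeaOrder(person=payload["from"], tea=payload["tea"],
                    milk=payload.get("milk", False))

# Only after normalization does "adding up the 1s" become meaningful:
orders = [
    parse_text_message("Ada: Earl Grey"),
    parse_api_json({"from": "Alan", "tea": "Assam", "milk": True}),
]
print(len(orders), "cups so far")
```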

Yet, no need to lose our heads; this still is a very concrete task that is entirely possible to do. While what seemed like a simple formality of connecting the obvious world of 1s counted up on your hand or on a piece of paper with the simple and obvious world of counting teacups on a table is starting to grow into a long, dark, and baffling adventure that we cannot see our path through, just remember that this is still a concrete task. You can count teacups on a table; this is a task that can be confidently done. We must simply frame the processes and what is happening in a clear and practical way.

Can a purely-passive-reactive system perform addition? (Project-Task addition)

This may sound like a simple question, perhaps because a pocket calculator (or an abacus) simply carries out a straight-forward addition process and then there is your solution. But could a single set of functions that starts and runs and stops do this task?

A set of scheduled tasks distributed over various participants over time might not be reliably expressed as a static linear set of traditional logic functions or as the output of a single deep-learning model.

Now, I do not want to get bogged down in word-issues here. Obviously we are talking about ‘functions’ very broadly, so let me try to illustrate what I mean to say (even if I am not saying it very well).

Scenario:

On Friday morning Woz-Bot is given a task: Sometime today (Friday) some people will put in orders for tea. On Monday around lunch time (skip the part about making the tea for now) the Bot should report how many of those cups of tea were made.

This is a relatively well-defined and narrow problem that is both realistic and within practical reach of what we could make a ‘Woz-bot’ able to do. We could even add in the step of making the tea, or coffee, by taking the short-cut of having a networked tea-coffee vending-machine in the office, avoiding for now the robot-physics acrobatics of brewing the tea which is a whole other set of challenges (not impossible, but another topic for another adventure).

Active, Reactive, Proactive…what is the right set of words?

Let’s walk through a few examples of how this scenario might play out (for example, Woz-bot might be tasked with doing this every week in preparation for a routine Monday lunch planning meeting).

Schedules

For one thing Woz-bot has no idea when people will put in their orders. You are giving Woz-Bot its job before people have ordered anything (or even decided if they are going to order anything). How long should Woz-Bot wait for each step or part of the overall task? It will have to be decided, somehow, when to do what. Technically, by the ‘letter of the law,’ it could (systematically) wait until midnight Friday and collect the tea orders, because the orders come in “on Friday” so at the end of Friday the orders should be there. But this wastes about one third of the possible time available to do the whole task, which is a terrible plan. It also passively assumes that no one needs to be reminded to make their order, which is also a terrible plan. “You didn’t order, so NO TEA FOR YOU!” is not how you want to start your week…
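
As a hedged sketch of one possible approach (the polling interval, the reminder function, and the order-collection function are all assumptions, not a prescription), the bot might check in periodically and nudge anyone who has not yet ordered, rather than waiting for midnight:

```python
# A hypothetical scheduling sketch: instead of waiting until midnight Friday,
# the bot checks in periodically, reminds anyone who has not ordered,
# and closes early once everyone has responded.
import time

def collect_orders(expected_people, check_every_seconds, deadline_ts,
                   get_new_orders, send_reminder):
    orders = {}
    while time.time() < deadline_ts and len(orders) < len(expected_people):
        orders.update(get_new_orders())      # poll whatever channels exist
        for person in expected_people:
            if person not in orders:
                send_reminder(person)        # nudge, rather than just wait
        time.sleep(check_every_seconds)
    return orders
```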

Once any of the orders are in, you need to see if everything that is needed for that order is in stock (and waiting until the last minute is probably not a good plan for getting this sub-task done well). And if the item is not in stock, you need to find a way to get it. If you can’t order it one way, you need to find another. Every time the overall process is done, you cannot assume that it will be done in the same predictable linear way. 1+1+1+1+1 can be calculated the same way every time, but five cups of tea cannot.
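
A toy sketch of that early stock-check, with hypothetical data, might look like this:

```python
# A toy sketch: once orders start arriving, check them against inventory
# early, so there is still time to restock (all names and data hypothetical).
from collections import Counter

def missing_items(ordered_teas, inventory):
    needed = Counter(ordered_teas)
    return {tea: needed[tea] - inventory.get(tea, 0)
            for tea in needed
            if needed[tea] > inventory.get(tea, 0)}

shortfall = missing_items(
    ordered_teas=["Earl Grey", "Assam", "Earl Grey"],
    inventory={"Assam": 2, "Earl Grey": 1},
)
print(shortfall)  # {'Earl Grey': 1} -> start a restocking sub-task now, not Monday
```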

Making Things Work:

Let’s say everything is set up but a few hours before the meeting, the vending machine breaks, or the power goes out. A reactive, or apathetic, Woz-bot would happily deliver an error message instead of tea. Would you like cream with your error message? Whereas any responsible person would look for some other way to make things work, for example call in a food-delivery from a nearby cafe or food-delivery-service (also something a Woz-bot with no robot-body could still do perfectly well). And let’s say there’s one shop that has all the teas, but it can only deliver them several hours early. Otherwise each tea will have to be ordered from a separate shop. Decisions decisions…is old-cold tea ok? Of course it’s not! The Woz-bot must find a way to make things work when each actual carrying-out-of-the-overall-task may be highly, and unpredictably, different, even though all the individual parts of the task are completely well defined and do-able, and those may be the same each time (or most of the time).
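
One possible shape for that "find another way" behavior, sketched loosely with hypothetical plan functions, is an ordered list of fallback plans with a record of what failed, so the failure itself can be reported rather than silently swallowed:

```python
# A minimal sketch of "making things work": try plan A, then fall back,
# rather than delivering an error message (plan functions are hypothetical).
def brew_with_vending_machine(orders):
    raise RuntimeError("vending machine offline")

def order_from_nearby_cafe(orders):
    return f"ordered {len(orders)} cups for delivery before the meeting"

def run_with_fallbacks(orders, plans):
    failures = []
    for plan in plans:
        try:
            return plan(orders), failures
        except Exception as err:
            failures.append((plan.__name__, str(err)))  # keep a record for the report
    raise RuntimeError(f"all plans failed: {failures}")

result, notes = run_with_fallbacks(
    ["Assam", "Earl Grey"],
    [brew_with_vending_machine, order_from_nearby_cafe],
)
print(result, notes)
```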

The goal here is not to keep adding hypothetical gotcha-questions tasks suggesting that the overall task cannot be done. Of course you can serve a cup of tea. The idea is to see how the nature of a project-task can differ from even the same task when it is purely abstract. All the above obstacles individually can all be accounted for (be done) by any of many solutions, and these can all be woven together somehow. The question is, how? And what may be a good or not-good tool or approach for a given project-task?

Let’s take one more look at this point of schedules, and of single-passive-AI models like LLM-GPT chat bots. If you knew exactly what the schedule would be, you could pre-plan a set of routines and steps. And a single LLM-GPT model could set that in motion if you asked, just like a home-hub-AI can set a kitchen timer for you. But what happens when there isn’t a simple linear set of instructions to follow? This might be a subtle point, but try to zoom in here if you can. From very roughly 1970–2000, the AI that was used was what is called “symbolic” AI, meaning that it would follow hand-crafted sets of choices and options: If this happens, do this. If that happens, do that. Aside from the fact that there were limitations to how good these systems could get at tasks requiring flexibility, these were in a way ‘external-data friendly’ tasks that had an ongoing “pointer” (if not literally) bouncing around the giant flow chart of options, reacting to “anything” (as long as you were able to predict that “anything” beforehand to write it into the program). Symbolic-AI is multi-event AI. But imagine how a chat-bot might try to manage this task-for-Monday when assigned on Friday morning. A really smart bot could probably propose a good ‘path to take’ each time something happened that required a decision. But a single-passive bot, no matter how smart, is like a ‘smart mirror.’ It can reply in highly skilled ways, but it does not make notes for the future. It does not track things over time. It does not keep track of what resources it has. If you could compress the whole task somehow into one big task at the last minute, like giving it all the orders and asking it to make one delivery request, it might do that well. But in a situation with many parts and signals and participants happening throughout a schedule, a bot that can only react once to one input is an even worse fit for the task than the Good Old-Fashioned AI that ran through a pre-written set of commands.
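
To make the contrast a little more tangible, here is a loose sketch (the call_model interface is entirely assumed, not any particular product's API) of a thin persistent wrapper that keeps notes between events, which is exactly what the bare single-reaction call does not do on its own:

```python
# A rough contrast, with entirely hypothetical interfaces. A single-reaction bot:
#   answer = call_model(prompt)   # one input, one output, nothing remembered
#
# A minimal persistent wrapper that turns one-shot reactions into a
# multi-event participant by keeping notes between events:
class PersistentWozBot:
    def __init__(self, call_model):
        self.call_model = call_model   # any one-shot "smart mirror", assumed given
        self.notes = []                # memory the one-shot model itself lacks

    def handle_event(self, event: str) -> str:
        context = "\n".join(self.notes[-20:])        # recent project memory
        decision = self.call_model(
            f"Notes so far:\n{context}\n\nNew event: {event}")
        self.notes.append(f"{event} -> {decision}")  # record for future events
        return decision
```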

But this is just setting up tea orders, there must be a way to do this. What is missing? How can an AI system, a whole AI-Architecture, with an AI-operating-system if need be, learn to do this task? What does it need aside from having the intelligence to do each separate part? What is the nature of this problem-space?

Skills, Learning, Ability: Architectural Learning

It might sound strange to mix up words that we often habitually use only in specific situations. A function is able to do something…but we rarely speak of the ‘skills’ of functions. People have and gain skills, have abilities, and learn, but we rarely speak of the ‘functions’ of a person. So as we enter a new era where machines, AI, H.sapiens-humans, augmented other organisms (smart ‘animals’), perhaps aliens, and then hybrid crosses of all the above become possible, we are going to need some better and more general vocabulary. What that vocabulary will be in a future society of general-participants, I have no idea. In the meantime we can at least trace out the concepts, even if the terminology sounds, or is, awkward.

Hopefully it is clear by now that our Woz-bot is not a single-homunculus-mind that somehow does everything in a monolithic ‘black-box’ blob. Project-Tasks have many participants, and a ‘node’ (a team, or person, or bot, or whatever) assigned to a task in a large interconnected set of projects is a rather fuzzy and fluid notion. In the western world we love the idea of the absolute individual ‘person’ with the absolute individual ‘mind,’ but this is much closer to the fantasy space of so-called ‘symbol-manipulation’ that is somehow pleasurable for some people to imagine; it does not clearly relate to reality (either a pure-math reality or an applied-math-engineering reality). The whole business of teams and families and organizations and institutions is something that is much more fraught and vague and uncomfortable in the western world. “One person, one vote!” “One person, one job!” “One man, one mind!” We could make our own satisfying-sounding misogynistic slogan “One man, one cup of tea!” but that does not get us any closer to the reality of how our Woz-bot task can be done effectively.

We may not be as comfortable with the concept of a team or an organization as we are with the notion of “One man!” but we will need to get more comfortable. We need to start thinking of “an AI” as an architecture (with many parts). And this architecture will likely often be rather diffusely entwined with teams of collaborating participants taking on different assigned project roles, sometimes as a participant, even a leader, other times as a small-sub-task helper who is assigned a tiny duty to do to spec.

And, if odd, we need to be able to talk about not just the functions and abilities of that architecture, but also learning, training, skills, etc. When mixed into team project environments, individuals and groups of all combinations of ‘bot’ and ‘person’ parts, and other animals too, working on various tasks and subtasks, will train and learn to do tasks they could not do before. A team learns. A team of only ‘people’ also has, in a sense, or is, in a sense, an architecture.

Most animal brains (depending on how you define “animal” and “brain” (a single-cell would not be a great example of this)), a H.sapiens-human brain for example, is not a monolithic blob. It has many parts and regions with more or less generalized or specialized functions. When people talk to each other they usually are not sending signals directly into a brain region (with a probe), and so we can ignore all these brain-parts (or brain-mind-body parts) and just talk ‘person to person!’, ‘one man one voice!’ and all. But when we build and interact with AI (for example, as we now design our Woz-Bot AI to help us with Tea), we may try to make a ‘user interface’ that is as simple as possible (perhaps a person-mode), but sometimes we may wish to deal more directly and separately with the parts in an AI architecture than we usually do with the less visible parts of the H.sapiens-human brain. Or maybe not. Probably too early to tell.

Participation

There is perhaps a subtle seeming difference between a simple-function that under ideal circumstances could order a cup of tea and a project-participating AI that can effectively manage the task of getting tea. The goal is making sure the task gets done, reporting status and outcomes, and being just as accountable as a H.sapiens-human who is assigned to the same task should be (ignoring for now how rarely H.sapiens-humans are reliable). And this may go back to the idea of a passive-reactive-reflective function, vs. a participating-architecture.

Another facet of this, not to be gone into very deeply here, is object-handling and object-relationship spaces. Some AI have tasks that do not require any object handling. Generally speaking, this was just about everything before 2022, because no one could figure out how to do any AI object handling. So there certainly are a lot of individual narrow tasks that an AI, or part of an AI-architecture, can do, on a simpler-level. And in the past AI was largely about finding ways to get some tasks done by completely avoiding the topics I am trying to focus on here: participation, mind-state, architectures, externalization, project-tasks, generalized-stem, object-relationship-spaces, definition-behaviors, coordinated-decision-making, system-collapse, etc.

Even ‘general’ assistants like Siri or Cortana (or perhaps even Eliza in some sense), were able to reliably help with some very specific tasks, side-stepping messy issues.

But the challenge we are trying to take up here is making the however elusive step beyond the simpler mode of finite passive functions, to create AI-Architectures that we will train and teach to be able to participate not just in projects like 1+1+1+1+1 = 5, but also in projects involving five cups of tea.

Cut-Ups: Coordinating parts of a clear task.

A ‘cut-up’ (as the term is used here) is a common technique in student-centered-learning and constructivist educational pedagogy, where the activity is not done simply by one student but rather the instructions and data for how to do the activity are, sometimes quite literally, cut up and distributed to members of a group (or team) of students, who need to communicate, exchange information, and coordinate, in order to do the same task.

In a way the cups of tea example may be (perhaps) a good example of a cut-up project, because each of the six participants (the five ordering tea and the one making the tea) has different parts of the overall set of information about the problem. This is also perhaps a good starting example, because we do not need (yet) to have any elaborate network of Multipoint-Conference data exchange between all the people involved: the five drinkers can just send their one signal directly to the one Woz-bot. This is a much simpler starting case and still very realistic or practical. E.g., real-life restaurant scenes are generally a lot like this. The waiter or waitress comes and tells you that you can order. You do one round-robin around the table, each person telling the waiter or waitress what they order, then the waiter or waitress goes off and hands their note-slip (report) to the kitchen, and often the food is delivered by someone else, even by conveyor belt if you are in Japan.

This cut-up example (a task cut up into sub-tasks and distributed among multiple participants) may seem a mere formality with the underlying or ultimate task being the same “One task, one man!” But is it really still the same? Let’s go back to the single-passive-reflective deep-learning model again (because for the most part that is how people define ‘AI,’ not all this architecture nonsense). Think about a task that you could ask an AI bot, like a chat-bot or cli-api bot. For example…let’s ask: “What is 1+1+1+1+1=?” (Although…as another side topic, and don’t say this too loudly, AI bots are not reliably good at counting…and so that’s another parallel sub-function that we need to address eventually in our architectures, but for now let’s assume that counting to 5, give or take a few orders of magnitude, is close enough.) So you ask your AI bot, what is 1+1+1+1+1? And it says, 50. Close enough. Ok, now let’s make this task a cut-up. Let’s have six AI bots, and you give five of the bots a number (all 1 in this case… “All ONE!” “One Bot, One 1”), and tell them that they need to find out what numbers all the bots were given and then tell the sixth bot to report back to you what the sum of all the numbers was. Alternately, you could use five bots and tell them to find out what numbers everyone else got and they all give you the total sum, whatever.

Still look like the same task?…How are passive-reflective-AI units going to do this? Make a flow-chart to trace out how a set of five or six bots will, given only your initial instructions to each bot, complete the task. How will a bot that can only react once to one input manage the parts of this task?
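
Here is a deliberately toy sketch of that flow chart in code (the bot names and the query protocol are invented for illustration): the arithmetic is unchanged, but the exchange and reporting steps are now part of the task itself.

```python
# A toy sketch of the cut-up: five "holder" bots each know one number,
# and a sixth bot has to ask around, collect the answers, and report the sum.
class HolderBot:
    def __init__(self, name, number):
        self.name, self.number = name, number

    def answer_query(self):
        return self.number        # the only piece of the problem this bot holds

class ReporterBot:
    def __init__(self, holders):
        self.holders = holders    # it must know whom to ask: that is part of the task

    def report_sum(self):
        replies = [bot.answer_query() for bot in self.holders]   # exchange step
        return sum(replies)                                      # the "same" old task

holders = [HolderBot(f"bot{i}", 1) for i in range(5)]
print(ReporterBot(holders).report_sum())  # 5
```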

Think about what additional abilities need to be within the architecture of an AI for it to be able to do this. Counting to five is not all that difficult a task. In many ways, it’s still the same task. There are five numbers, add them up and tell me the sum. But when a function, when a job, when a task, has the formalities of manifestation in a project-task-space, the abilities required to accomplish that task are sometimes not exactly the same as when the task exists in isolation and abstraction. A passive-reflective-generative AI will not do well in a cut-up activity. But with AI-architectures, and AI-Operating systems, there is no reason why the AI cannot learn and gain the abilities to do these tasks.

There are some parallels, hopefully, between the Woz-Bot counting up tea-numbers that comes from other participants, and the cut-up activity where each participant has part of the activity-data-and-instructions and so information must be exchanged to carry out the task. The Woz-bot tea scenario is a nicely narrow situation that is greatly simplified so that under very highly constrained situations it could be carried out even by a Good-Old-Fashioned-AI string of pre-designed steps. And maybe as a microcosm of the limitations of GOFAI systems, how often would such a hand crafted set of steps be useful in the real world? How would using a GOFAI system alone compare with trying to use a Chatbot? What are the different strengths and weaknesses? How might you try to combine those two ways of approaching the problem? Etc. There is also the question of whether a pre-arranged GOFAI tea-bot situation really is a cut-up, if all the instructions and data are pre-set and there does not need to be an exchange of data about the problem and how to solve it, just a pre-arranged set of signals you know in advance that you will have to send and receive. So the question may be more, what happens when the Woz-Tea scenario becomes a cut-up, rather than saying a cut-up can be reduced to pre-arranged steps (possibly a definition semantics debate). Call the process what you will, I hope the tea is good…

Herding-Cats and defending against disinformation.

When calling an api or cli function you ‘should’ get a proper response, but real life projects involve participants who will go to great lengths to not be on the same page as everyone else. Sending information to, and getting information from, simple functions only needs to be done ‘once’ (or in line with Claude Shannon’s (and Alan Turing’s) information/communication theory to ensure the data are intact; hard-drive data storage is maybe an interesting case-study in the nuances of this, as is network-signal checking, etc.), but when ‘people’ are the nodes (and some cultures are more aware of this than others) there needs to be a lot of deft and redundant communication to make sure everyone is on the same page (and that participants are not ‘hallucinating’ their own imagined project tasks). And, it appears to be a fact of the world, there will be various agents who will for various reasons attack your project with disinformation. And this disinformation will derail, destroy, and collapse your project if you do not successfully defend against it. (So perhaps talking to ‘people’ is actually more like reading data from an advanced hard drive…that someone is smashing with a hammer in a microwave oven…)

Bread-Crumb-Paths: Not a simple authoritative task frame.

A possible example of a ‘bread crumbs’ type project trajectory may be where the tea-brewing machine (the teapot, not the Woz-bot) breaks and it is not clear how the tea will be made; in such a case the Woz-bot needs to explore or random-walk through various possible options and actively find, pick, invent, discover, or create and manage one or several-together previously unspecified courses of action. In some cases a bot can just follow instructions, or find a way to a clear destination. In other cases the plan is modified constantly along the way, and all you see at any given time is the next bread-crumb and what that might add to what you know cumulatively about what on earth is going on and what you should do next.

Since the Tea scenario predictably should end with tea at noon on Monday, which is pretty specific, no version of this might be a great example of a breadcrumbs type task. Perhaps doing a catering job where every aspect is completely unset at the beginning would be better, or a detective tracking down a problem (such as a software bug, or a six-sigma hunt for problems where you only see clues but do not know what the overall situation will turn out to be). All you can do is follow the next lead and re-plan from there.

  • Allocation of Tasks: creating and assigning sub-tasks
  • Making plans, revising plans.
  • Checking and testing.
  • sub-module-sub-tasks

Plans, Plans, Project-State and Mindstate

Let’s try to zoom in again on the differences between a function that is run because something turns it on (and stops at the end of that single-reaction, regardless of any ongoing situation) and a ‘function’ that is happening in a project-context, though at a given point in time those two might look, or be, the same.

This is likely a much too simple spectrum, but imagine that at one end you have items like a light bulb or light switch that simply reacts or functions based on what is done to it. In the middle (of our shiny new spectrum) you have co-participation, such as cut-up tasks, involving coordination and exchange of external data. Then farther along that spectrum you have more open-ended choices: leadership, navigation, decision type ‘functions,’ ‘abilities,’ ‘skills,’ and ‘learning.’ A light switch does not need to decide very much at all. A pocket calculator does not need to decide very much.

The smart-reaction is a fascinating area of this overall space. In some ways a gpt-llm-ai is all about decisions, it can make incredible decisions that people had given up on technology ever being able to do. But in another way, it’s so very much like a light-switch, or light reflecting off the surface of a mirror: there is one reaction to one event, by definition. You flip the switch, the switch is flipped, action done. You put in an input, you get an output, action done. It’s as though a passive-reflective-reactive AI is like an incredibly elaborate light switch, where there are billions of subtle-dimensions to just ever so slightly how you flip the switch, leading to a great variety of ‘smart’ ways that light can flash on and then go off again. And that is very useful, potentially. But it is still, by definition, a single flash in the dark, with eternal amnesiac darkness behind and ahead of that flash. As a module in an architecture, it is very amazing. It is a project-task-fragment like a mirror in an empty attic lying on the floor, full of potential uses, but unless being used for something it just lies on the floor, perhaps next to a cast-away light switch, and both are completely indifferent about that.

Back to our spectrum: Some devices just react simply, like a bulb or a switch or a calculator. And even a single passive reflective AI just reacts with a single-switch-flash as well. Then moving further on we have our Five ai-bots given a basic math cut-up question, the task is still basic addition, and the problem space is moderately small, but they need to manage and choose and decide a bit more. Then as you move further you have allocating tasks, and assigning tasks, and designing functions, and making plans, and checking plans, etc.

What are you tracking?

When you move beyond blindly following one instruction, what exactly do you need to keep track of?

What happens if you try to put a Good Old-Fashioned AI system (and it might depend which one) on this spectrum? I am probably just muddying and mangling this, but we might think about ‘smart or dumb’ elements and architectures. In some sense, it doesn’t make sense to compare an architecture of dumb-switches to a single smart or dumb switch. And the more we start asking questions about the categories and differences between symbolic AI (analytical, system-2) and sub-symbolic (non-analytical, system-1), with the interesting reversal of which is fast and which is slow in brains versus CPUs with ALUs (or even differential analyzers and EDVACs), the more we will probably find that the whole dichotomy of so-called symbolic vs. subsymbolic is not at all as clean, simple, and real a partition of the world as we might have fallen into assuming.

Project State:

  • participants
  • roles
  • tasks
  • schedules and timeline
  • reports
  • end-users
  • stake-holders
  • goals
  • deliverables
  • user-stories
  • documentation
  • etc.

  • Memory
  • Attention
  • Goals
  • Priorities
  • Collaborators
  • Options
  • Backup Plans
  • Regulations
  • Best Practice
  • Due Diligence

Mind-State:

What happens when we try to step away from a passive-reflective-reactive mode? For example a mirror, however fascinating, passively reflectively reacts to whatever signal comes in. This is kind of like a normal ‘function’ that you ‘call.’ You call the function with input (sometimes none) and then there is an ‘output’ (and or some action performed). But there is no mind-state about a project-state here, not yet.

Context helps so let’s return to the Woz-bot making tea.

The Woz-Bot needs to ask itself repeatedly, “To what projects am I assigned?” It will need to track a whole lot of things.

Again, let’s start with something simple. We could make a ridiculous story of a cacophony of trackable factors, but starting with a realistic simple MVP example is often a good idea.

For a given project:

- What is the overall project?

- What is the status of the overall project?

- What is the schedule for the overall project?

- What is the current time?

- What is the overall task-subtask set for the project?

- What is the current task (set) for the project?

- What resources do I have available?

- Can I add new resources?

- What resources have I created?

- Should I share a resource I created?

- Has the resource been adequately tested?

- What am I doing now?

- Am I on schedule?

- What is the likelihood of a problem arising?
- Is there a task I should be preparing for?

- Have other instructions come in since I last checked that might modify the project (or even cancel it)?

Plans:

- What are the requirements I need to do somehow?

- What is the main plan-A?

- What are the backup plans?
- Was the plan resized?

- Have I gotten feedback and ‘a second pair of eyes’ on the plan?

- Have I gotten permission for the plan?

- Do I need permission for the plan?

Tasks:

- What tasks am I planning?

- What tasks will I allocate to other participants?

- To whom should I allocate a given task?

- How will I follow-up to make sure the task is being done?

- What would indicate a problem with the task and a high likelihood of failure?

- What is the schedule status?

- What specialty sub-tasks are there to be allocated to special-units?

- research

- communication

- multimedia tasks

- code-running tasks

- What is the Project-Status:

- Are there security concerns?

- Are there data hygiene concerns?

- Are there system collapse and disinformation concerns?

- Is anything needed or currently unknown?

- What am I planning next?

- What outcomes do I need when?

- What do I need to initiate?

- What do I need to allocate?

- What do I need to revise?

- Who do I need to report to?

- Am I in a loop of repeating the same mistakes?

- Am I causing other participants and parts of the project to become stuck in loops of repeated mistakes?

- Am I helping or hindering?

Action Items:

- What does the future allow?

- What does the future require?

All that might seem like a bit much, but it was meant to be more of a general look around than something to actually try to do right now. When you are designing a specific system to do a specific thing, use that context. And it is often a good idea to start with a very-minimal “Minimal Viable Product” as it is sometimes called. Start somewhere, and move from there.
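
In that MVP spirit, here is a minimal sketch (the field names are illustrative, not a proposed standard) of an externalized project-state record that an architecture could read and update between events:

```python
# A minimal "MVP" sketch of an externalized project-state record that a
# Woz-bot-style architecture could read and update between events
# (field names are illustrative, not a proposed standard).
from dataclasses import dataclass, field

@dataclass
class ProjectState:
    project: str
    deadline: str
    participants: list = field(default_factory=list)
    orders: dict = field(default_factory=dict)        # who ordered what
    plan: str = "collect orders, check stock, brew, report"
    backup_plans: list = field(default_factory=list)
    open_issues: list = field(default_factory=list)   # e.g. "vending machine offline"
    log: list = field(default_factory=list)           # what was done, when, and why

state = ProjectState(project="Monday tea", deadline="Monday 12:00",
                     participants=["Ada", "Alan", "Grace", "Edsger", "Barbara"])
state.orders["Ada"] = "Earl Grey"
state.log.append("Friday 09:10: reminder sent to Alan")
```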

Externalization: Project Object Database, Object Relationship Database & Process-Step Object-Database

Another topic that sits at the intersection between 5-Trees+Mindstate and Process-Step-Analysis (detailed elsewhere) is the topic of managing externalized project data, or project-object-databases.

The Project Object Database:

Even in a situation where there isn’t anything to track outside of the seemingly simple task you are asked to do, the task of tracking the object-relationships between parts of that problem->process->solution can be quite a juggling act, especially when you need to be able to externalize and check the parts and their relationships. For example, having a passive-reactive-reflective AI do a simple math word problem is (let’s say) easily done. You put in the question, you get the answer. But let’s add in externalization and confirmation; it is not enough to produce an answer in a black-box way (and this is a real concern for people using AI): show exactly what you did and how and why, and how you confirmed it and confirmed that all the inputs and outputs to your deductive process are correct. In both cases the answer may be the same, but in the second case there is a vast book-keeping and signal-sending maze that must be gone through. And yes, each part of the maze in isolation can be done by the simple-reactive-reflective-AI component, but what architecture can do the entire process?

Though it is entirely possible that I am barking up the wrong tree with my dogged fixation on ‘externalization,’ an interesting part of project-tasks as separate from purely-abstract tasks is the formality of externalization. Let’s go back to the very simple 1+1 task, forgetting about tea for the time being (can one ever really forget about tea…), and even forgetting about reporting and roles and all those other things we will have to account for later. Imagine your AI task really is as simple as 1+1+1+1+1=5, but each ‘1’ comes from a different source, in different data format, etc. In retrospect it looks very simple: 1+1+1+1+1=5. But without context at the beginning of the process, if you didn’t know it was going to be that simple, you would still need a larger process-step framework, architecture, which is most likely more involved than you would at first guess. What are the values? What are the operators? How many steps are there? Are the steps in the right order? How can you check? What thing from step one is supposed to go into step two? How do you check for errors? How do you find a mistake? What do you do when you find a mistake? What is the last step? (Process Step Analysis is a whole other important branch area of details.)
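
A small sketch, assuming nothing more than plain Python, of what externalizing those process steps might look like: each step records its inputs, output, and a check, so the final answer is not a black box.

```python
# A sketch of externalizing process steps so the result is auditable
# rather than a black box (the structure is illustrative, not a fixed schema).
steps = []

def record_step(description, inputs, operation, check):
    output = operation(*inputs)
    steps.append({
        "step": description,
        "inputs": inputs,
        "output": output,
        "check_passed": check(inputs, output),
    })
    return output

running_total = 0
for i, value in enumerate([1, 1, 1, 1, 1], start=1):
    running_total = record_step(
        f"add value {i}",
        (running_total, value),
        lambda a, b: a + b,
        lambda ins, out: out == ins[0] + ins[1],   # trivial here, but the hook exists
    )

print(running_total)          # 5
print(len(steps), "externally checkable steps")
```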

And the ‘distance’ from 1+1+1+1+1=5 to blue + red + orange + green + yellow = five colors, or Darjeeling + Assam + Earl Grey + Ceylon + Pu-erh = five cups of tea, or more steps involved, or a mix of numbers and words, and then adding in auditing and reporting and parts of the task modified and done by different participants: there needs to be a project-object, process-step-object, object-relationship ‘database’ in the architecture. No matter how good a given sub-task-process is at handling that task internally, projects on the whole require external, or externalized, data and processes.

Two-Brains and A Connector-Thing: Sharing Objects

There are a number of marvelous oddities around the separation of skills, roles, and tasks (and we need to be careful what we call these groups, or we may mistakenly lump them together or split them apart). For example, the models that are not only the best but so far the only models that can handle objects (I will try to summarize that shortly) are especially ‘internal’ in how they work; but the whole point of sharing things around a group-project-space where we work together on tasks is that those things have to be shareable. Just as people working together may have difficulty, and take some time, explaining and documenting what they are thinking or seeing or realizing or remembering (or just notes from some event that only they attended), and just as you cannot readily share something somewhere deep in your brain, so a Large-Language-Model cannot readily share the details of how it is ‘handling objects.’

The mammalian brain, perhaps by coincidence but perhaps not, also has a curiously similar structure.

So let us conclude with the sad but also somewhat comical image of the perhaps apocryphal problem that a person who has had their corpus callosum cut (for example to treat epilepsy) can have when getting dressed in the morning. Their right-brain (using their left hand) does up the buttons on their shirt getting dressed for work…while at the same time their left-brain (using the right hand) undoes all the buttons…for whatever reason. To work together we need to share the details about what we are doing. And while we might like to imagine that tasks are done by “One Man, One Action!” thankfully we live in quite a different reality where tasks are done by networks and webs of participants and parts and processes, and while we like to argue about the semantics of labels and groups, whatever words we end up using (you call it ‘the brain,’ she calls it ‘the collective of brains,’ they call it ‘mind-soul-box,’ whatever the agreement or disagreement in labels…) the objects must be shared between parts for things to work properly. It is not enough to ‘train’ one part or sub-part to ‘just do’ the task; the architecture of all the parts (however described and labeled) needs to ‘learn’ how to get that task done.

Never-Mind vs. Neurotic Worry-Monger

A mirror, or a simple passive generative AI does not ask any of these questions. It has no project-state-mind-state. There can be light, no light, happy events, sad events; the mirror has no mind and is passively unconcerned. A normal computer function is likewise passively unconcerned; it is completely indifferent as to whether you get a truly delicious cup of tea or an error message or nothing at all. We want an AI that is very concerned, and scanning and tracking and testing and pondering all the parts of the project-space and their mind-space about what they are doing and how that impacts the other participants as well.

Zen-Mind, Feedback, & Mindful AI

As a parting idea, sometimes no-mind is translated interchangeably with Zen-mind (and I have probably flippantly done the same myself). But there are some interesting subtleties that are likely relevant somewhere for AI. There are, depending on the author, translation, school of thought, etc., various aspects of Zen-mind or mindfulness which seem appropriate to discuss for AI mindstate.

  • awareness
  • feedback
  • compassionate & caring
  • perspective
  • not-distracted
  • not-fixated
  • not-distorted

To the extent possible, we want an AI-Architecture that is aware of feedback without being overwhelmed, that somehow manages to keep perspective and not be distracted, and that is concerned not to cause harm to others and to, if possible, be aware of bad things happening to others as something that should not happen. And in my view the sectarian-zeal of fundamentalist relativism has shifted the overall discussion (which is a great discussion to have) too far in the nihilistic-hopeless-apathy direction: I would argue there is a lot of pragmatic low-hanging fruit where STEM and ethics and AI and some forms of compassion and empathy naturally overlap in realistic and practical ways.

Part of the occasional disagreement in semantics is around terms like ‘unconcern.’ There are some people who advocate for a kind of super-extreme anti-world, nihilistic, view of “mindfulness” where they argue for a kind of anti-meaning oblivion. I am not convinced that this is a coherent and meaningful approach, and perhaps if that is their goal they would not even disagree. More broadly, ‘unconcern’ in a context of ‘Zen Mind’ (and the history of Hinduism and Buddhism) is not meant in a super-extreme way such as: if you are meditating and the person next to you catches fire, then you should ignore their screams and empty your mind, being completely unconcerned about the health and possible suffering of everyone around you. Rather, the meaning of terms like ‘unconcern,’ or ‘empty mind,’ is usually in the context of having a balanced and healthy awareness and not being so extremely concerned with, or blindly fixated on, any particular tangent, so that you can maintain a healthy macro focus on your state, the world’s state, your fellow-participant’s state, and see how to maintain and cultivate a harmonious and productive world.

System Collapse and Ethics

As much mileage as we can get out of a practical model of system collapse will be extremely valuable, and there is more low-hanging fruit than might be suspected.

A Note on, and a Plea for, Making Useful Connections: STEM

Please do not mistake my attempts to articulate the observation that some perspectives on, and applications of, math-logic are narrow (e.g. my sometimes disparaging terms “pure-math,” “pure-land,” and criticisms of “symbol manipulation,” etc.) as somehow meaning that math and logic either should not, or cannot be connected with other areas, or somehow that pure-research is in any way bad: it is not. When real life situations call for people with different backgrounds and areas of expertise to work together on a project, some people have prohibitively narrow ideologies and deliberately prevent the collaboration. But STEM itself is potentially more interconnected than people have usually assumed. The history of generalizing STEM and discovering or inventing connections between different disciplines and categories of systems across STEM (which has too many sub-areas to make a fully exhaustive acronym) is a fascinating topic that to some depth should be part of general education for all people.

As of 2023, we are teetering between paradigms with a real possibility that we will either fall back (into dogmatically believing that participants cannot collaborate) or drag on in a half-aware limbo. While it may be tempting to say that it was only very recently that people started thinking of a general STEM that connects all the different areas, saying this would be getting ahead of the starting gun. As of 2023 there is still no clear concept of a generalized STEM, and discussion of applications of and integrations of Math, Science, Data-Science, Engineering, Computer Science, Medicine, Epidemiology, Statistics, Logic, Linguistics, etc., bears all the hallmarks of a lack of familiarity with history and across disciplines. A crucial work in this area is the blessedly back-in-print Sir Eric Ashby’s “Technology & The Academics,” a short and brilliant book everyone should read about the evolution of conceptions of science and science education in the late 1800’s and early 1900’s. Some of the most poignant history-shaping examples of this topic are laid out in the not-short biography of Alan Turing by Andrew Hodges, “Alan Turing: The Enigma” (which I can only hope everyone will read, as it is a phenomenal book on history, culture, technology, and WWII). Once you see these notions of STEM areas being connect-able or not-connectable, you should start to pick up on how other books will add to this story of intellectual history (such as the biography of Claude Shannon, “A Mind at Play,” by Jimmy Soni, or “The Man from the Future,” by Ananyo Bhattacharya about John von Neumann) or how a book may appallingly mangle and confuse events and concepts, such as “The Idea Factory: Bell Labs and the Great Age of American Innovation” (not a recommended book). The culture, language, and psychology around STEM is fascinating, and the very much still-in-progress development of understanding how parts of the world work is something we should collaborate and participate on carefully, just as we should be working to make sure that physical planets can continue to be habitable for multicellular life.

In Closing

We are accustomed to talking about a given very specific ai-model learning or training for a very narrow subtask, but we do not have a very good vocabulary for generalizing similar concepts so that we can zoom out to the bigger picture. The (perhaps) time tested routine of gathering data to train a model and using that model to do something is not a bad thing and may well be a general part of daily life for everyone in the future, a basic process just like using a calculator, or a text-messaging program (…though I’m not sure if those will really be recognizable in the future. Up until recently the abacus and slide-rule were broadly used, but some tools do not last). We must not confine all imaginations to only thinking about ai and models as the single-reactive components they have been so far, used for the single reactive uses they have been used-for so far.

How do we teach an architecture to participate in a project and manage project-state and its own mind-state in a mindful way? How do we help an architecture to ‘learn’ to have the skills and abilities to manage a project-state and its own mind-state in a mindful way? How can we expand the set of jobs that AI can get done?

Back to the main Object-Space paper, and ready for the next pass.

We are hopefully closer to clearly discussing how, in order for an AI to do a concrete task such as a cut-up problem, that AI is going to need to use a variety of resources, or perhaps to be the intersection of a variety of resources, to weave together different kinds of solutions to different kinds of tasks and problems that are connected by the context of the overall project-task. We are hopefully closer to clearly discussing how doing project-tasks requires architectures.

In all this we may now see an emerging question: “Ok, we need to, the AI needs to, keep track of stuff. What stuff? What kinds of parts are there in this space that need to fit together?” That question brings us, roundly roundly (henry-iv-part-1/Act1/Scene2/), back to the Howth or Alnwick Castle and Environs of Object Relationship Spaces, which is something of a wrapper for System and Definition Behavior Studies, which is something of a wrapper for better understanding the question of what STEM areas are and how they relate to each other and the rest of the world. What is science specifically, and how do we use it? Let’s put the tanks on and dive back into the details. Here are some links. Hope to see you on another riverrun.

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
