AI’s Counting Problems

Geoffrey Gordon Ashbrook
20 min read · May 2, 2023


AI: Framework Tools & Framework Learning

Epiphenomena in AI Thinking: Frameworks, Revisions, Structures, & Framework-Learning

Topic & Agenda

Generative models (e.g. chatGPT) can use frameworks and structures to enhance their default ability-levels. Frameworks can be used to study AI abilities and inabilities. Potentially, framework tools can be used as part of model training (Framework Learning).

Intro

The use of frameworks with generative AI as described here is not a sure-fire way to fix mistakes in GPT output every time, uniformly, but it is very interesting. Frameworks can be effective in helping chatGPT to arrive at answers it otherwise gets wrong or cannot find at all. In a research context, the use of frameworks provides very interesting information about how chatGPT tries to organize and explain what it is doing. That last sentence is phrased cautiously because it is very difficult to interpret exactly what the output suggests. Nevertheless, the use of frameworks appears to be a kind of window into the ‘thoughts’ of generative AI that are not always visible.

The specific puzzle-problems that this framework was designed around are Douglas Hofstadter’s abstract short-string analogies. In fact, this specific framework evolved from trying (at first without any methods) to help chatGPT solve increasingly difficult Hofstadter string-analogy problems, helping GPT to plan out what it is doing and keep track of the details.

Techniques

  • Frameworks for organizing problem solving
  • Frameworks for structuring the steps and explanation of an answer
  • Structure / notation for how to write an answer (and key pieces of information)
  • Giving a whole framework
  • Giving a framework as step by step instructions
  • Use of revision
  • Use of repetition and comparison
  • Explanation of specific things such as: brainstorming, explaining reasoning (which by default GPT lacked a practical understanding of)
  • Externalization

Notable Issues

ChatGPT has a number of notable weaknesses which can be either studied or perhaps helped by using frameworks and structures:

1. ChatGPT becomes confused between types of outlining:

- very general process outline steps (as one might find in a generic business event)

vs.

- steps to solve a specific problem (as in how to approach solving a specific puzzle)

vs.

- the actual solution itself (as in an actual problem solution, as in solving a specific math word problem)

2. ChatGPT becomes confused between modes of outlining:

- explanations as stream of consciousness nonsense (perhaps as one might find in a generic business event)

vs.

- explanations focusing on details in a specific problem set (as in an actual problem solution, as in solving a specific math word-problem)

There is perplexing variation in chatGPT’s ability to keep track of details.

Sometimes chatGPT is extremely precise over many details, sometimes it is wildly wrong about everything (even things it said itself a few words earlier). I wonder if this is a novelty setting, as in the standard “temperature” parameter used when sampling generated text, as described in AI-ML textbooks.

3. Problems with Counting:

That some AI have difficulty with fast counting on the fly should not be a surprise in any way: the issue was predicted in the 1970s by Hofstadter in a book everyone knows, and the surprise clearly has been that subsymbolic generative models can count at all, not that such AI can’t count perfectly. (If you predict that someone will never be able to walk again, and one day they manage to tentatively stand, you cannot reasonably claim to be shocked that they are not doing advanced acrobatics.) It is interesting that suggesting a framework for how GPT writes (e.g. a special counting format or notation) seems to significantly help chatGPT avoid making counting mistakes.

4. Generally good at following a proposed framework:

I would not have been surprised if chatGPT were bad at following, or had zero ability to follow, a procedural framework, but it generally does very well at it. From the examples I saw, chatGPT follows even a rather long structured framework in its entirety, often without error, if the problem is one it can solve without drama. But when the problem itself causes trouble, then, interestingly, the whole use of the framework collapses too, in a cascade of memory-fragmentation and loss of focus.

5. “PRINCE: O monstrous! Eleven buckram men grown out of two!”

In one of the most wonderful scenes in all of western literature, perhaps one of the most precious things H.sapiens-humans have ever created, Falstaff and Prince Hal are arguing about a botched robbery they tried to pull off, and Falstaff simply cannot help himself from ridiculous exaggerations and creative fictional insertions, such that his story has no feasible logical coherence. For example, at the beginning of a passage there are two people, but by the end those two have ‘grown’ into eleven people! And within the span of one not terribly long sentence, at the beginning fighters are seen in bold green outfits, but by the end: “for it was so dark, Hal, that thou couldst not see thy hand.”

6. Losing the Thread…Sometimes

Sometimes ChatGPT will be completely on-topic and focused, at other times there will be a mixed level, and at other times the language generated can become incoherent both in terms of the overall topic and even internally. For example, frequently the use of the framework will allow chatGPT to produce and explain a valid answer to a puzzle where, without the framework, chatGPT would produce a terse (unexplained) incorrect answer; but later in the explanation chatGPT will lose ‘the thread,’ so to speak, and lose track of that correct answer. The point is not whether ‘focus’ and ‘thread’ are perfectly realist words to describe what is happening, but just to communicate the phenomenon: something we do not yet understand is happening within the AI.

The Gravity of Perpetual Regeneration

While GPT is able to stick to details, rigor, frameworks, etc., there is a default tendency to just wildly make things up. It is very much like the scene from Shakespeare, where there is an uncontrollable generative force that keeps recasting and recasting the same details until the whole narrative no longer makes consistent logical sense. In many cases this default urge to change things is not a visible problem, but, as though it were a force of nature only held at bay, when things go wrong this monster of change rips through the threads of logic.

Variations on This Output Structuring Framework

1. Giving the framework all at once at the start.

2. Asking each step one at a time.

3. More or less repetition

Memory

There may be a ‘memory’ factor in various aspects regarding how much can fit into a ‘conversation’ before chatGPT cannot keep details straight anymore.

It is fascinating that there appears to be some kind of virtual epiphenomenon of memory that exists in the concept-based stream of thoughts from the AI. For example, in cases where there is only mild re-generation of the same topics, you can see that the AI is keeping track of concepts, remembering concepts and relationships, but making no attempt to preserve the exact wording with which those were previously described. This can be dangerous, as accidental changes to technical details can cause bugs in the worked solution (when solving a problem).

To some extent this framework idea was inspired by lines from Dr. Sebastien Bubeck’s event “Sparks of AGI: early experiments with GPT-4” https://www.youtube.com/watch?v=qbIk7-JPB2c

where he talks about word problems and GPT’s ability to sometimes catch mistakes if it can juxtapose the right elements as it generates new text. In a sense, the Framework idea here is to try to systematically trigger this self-correcting behavior by using the same ‘organizational tools’ taught to H.sapiens-human children (as it seems to me that untrained H.sapiens-humans share a very great deal indeed with generative AI): ‘Revise your work!’ ‘Show your work!’ ‘Show your steps!’ ‘Explain your points.’ It takes decades of schooling for some people to, very begrudgingly, learn to communicate details and solve STEM problems coherently, and many people never manage it their whole lives.

And along with ‘self-correction,’ Dr. Sebastien Bubeck also says that AI cannot do ‘real planning’; in fact, on the slides these two topics seem to be the same for Dr. Bubeck, but I cannot find a clear definition from him of ‘real planning.’ Perhaps he means the ‘planning’ needed to solve a math word problem. But one of the interesting things I found using frameworks with the lower-level public chatGPT model (not the fancy models Dr. Bubeck has access to) is that chatGPT really can produce a very logical and effective plan and can carry it out fully and systematically, applying it to the problem posed. I encourage you to experiment yourself, modifying and using the framework. As Dr. Bubeck says: “Don’t stop there!” Whatever you find, keep trying, keep pushing, see what more you uncover. And publish your findings so we can learn from them.

Memory and Granularity

Another aspect of the ‘memory’ issue is how detailed and granular, how split vs. lumped, to make the structure-framework. Perhaps in terms of a Kasparov-Event-Horizon: at what point does the scale of text (or the number of steps and layers) making up the framework start to crowd out what is happening? In some sense this mimics the evolution of computer hardware: back when “computers” were animals, not machines, larger problems were broken down into structured smaller problems (such as basic addition that only slightly trained H.sapiens-humans could do). This breakdown-into-steps eventually became how digital computers carry out big math problems, with each part of a process broken down into granular boolean logic operations. In a sense this process is wrapping around again, by teaching person-level-AI (machines) to follow the same break-down-into-steps process that H.sapiens-humans eventually handed off to machines. One thing to experiment with for sure is how short or long to make parts of the framework. Early versions were 12–14 steps long, with each part of revising and rewriting drafts and brainstorming and outlining broken down as much as possible. But at some point (but which point?) spreading those parts out makes it more difficult for the AI to follow with its concept-based understanding of the situation, which likes to ignore the individual words and details.

Indeed, the basic split in ways of using the framework is to:

1. Just give the whole framework and problem to the AI and say: hey, just do it. Here’s a problem, use that framework to solve it.

or

2. Have the H.sapiens-human manually enter each step of the framework, sometimes with reminders of the past conversation where the AI starts to ‘forget’ what happened further back (a rough sketch of this step-by-step mode is shown just below).
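As a rough illustration of the second mode, here is a minimal sketch in Python. This is not the exact procedure used in these experiments: the ask_model function is a placeholder for whatever chat interface you use (an API call, or manual copy-paste into the chat window), and the step texts and reminder wording are illustrative assumptions.

```python
# Minimal sketch of the step-by-step mode. ask_model() is a placeholder for
# whatever chat interface you use (API call or manual copy-paste); the step
# texts and the reminder wording are illustrative assumptions, not the exact
# prompts used in these experiments.

def ask_model(prompt: str) -> str:
    """Placeholder: send one message to the chat model and return its reply.
    Here it just echoes so the sketch runs; swap in a real call to experiment."""
    return f"[model reply to: {prompt[:60]}...]"

FRAMEWORK_STEPS = [
    "Step 1. The Project Process Workflow Brainstorm: produce a 'brainstorm' of Project Process Workflow elements.",
    "Step 2. The Project Process Workflow Outline Draft: produce a draft outline of your Project Process from the brainstorm.",
    # ... remaining framework steps, one string per step ...
]

def run_stepwise(problem: str, steps: list[str]) -> list[str]:
    """Feed the framework one step at a time, restating the problem as a memory aid."""
    replies = [ask_model(f"Here is the problem:\n{problem}\n\nDo only this step, then wait for me:\n{steps[0]}")]
    for step in steps[1:]:
        # Restate the problem with each later step, in case details have been 'forgotten'.
        replies.append(ask_model(f"(Reminder, the problem was: {problem})\n\nNow do only this step:\n{step}"))
    return replies

if __name__ == "__main__":
    for reply in run_stepwise("abc : aabbcc :: xyz : ?", FRAMEWORK_STEPS):
        print(reply)
```

The trade-off is the one discussed above: shorter, more frequent prompts keep the details in view, at the cost of more back-and-forth from the H.sapiens-human.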

(There may be a rhyme here with the evolution of neural network architectures, where recurrent (RNN) and then LSTM models were used to ‘retain’ threads of learning over time, and which were then superseded by ‘transformers’ (the “T” in GPT).)

Part of what I find fascinating here is that GPT-4 can use other programs and software: so why can’t it use a program to remind itself of the details? Is part of the trick of getting frameworks to work being able to train the AI to bother to use external tools (again, like an animal)?

Externalization

Another recurring theme here is likely “externalization,” which may be a continual architecture question: some processing is (perhaps best) done ‘internally,’ ‘end-to-end,’ while in other situations there are reasons or requirements to externalize data.

Externalization is a persistent, many-leveled part of this topic, including comparing how H.sapiens-humans or AI do the same task. H.sapiens-humans need to learn to use external tools (pen, paper, slide-rule, etc.) to solve puzzles and document their answers in clear step-by-step explanations of what they are doing and why, ever checking and rechecking to catch inevitable mistakes. It is with rigorous use of external tools, frameworks, and structures that the mammalian mind vaguely, and very occasionally, approximates STEM rigor.

(The same externalize-and-structure idea, but directed only at H.sapiens-humans’ problems.)

And another part of Externalization (gone into in more detail in the larger paper) is the many leveled topic of projects, participants, and components all needing to share information with each other.

Explanation

Another possible aspect here, in various respects, is model-explanation, or rather specific-output explanation. A likely perpetual need, for a variety of social and practical reasons, is for the output of AI to be explainable. Though perhaps not the real reason, this is the reason often given for hospitals having canceled their collaboration with IBM’s Watson: the medical staff needed ‘explanations’ of why the AI models were predicting what they predicted, but the model was a ‘black box.’

Here we possibly have the option of having the AI explain, to some extent, what chain of reasoning (or some such thing) it is using to arrive at the answer. In some cases this may be useful: where there is a clear incoherence in the explanation, a wrong answer is even more obvious.

It is interesting that the articulation of the explanation of the output changes from instance to instance while remaining broadly similar. It is too early to say what is going on with ‘threads of reasoning’ in generative AI, or whether attempts to be rigorous are of any use.

Memory as Concept and Theme

This is another area of uncertainty: just as we do not know how memory works inside mammalian brains, it is unclear whether there is a form of ‘memory’ that exists as an emergent layer within the AI (which also relates to externalization, Machine vs. Human, etc.).

Overall, the behavior of the deep-learning AI system producing an ever-new stream-of-consciousness near-coherence seems extremely similar to H.sapiens-humans who violently rebel against feedback, discipline, STEM, and external checks and tests, and who, in projects without a project-management framework, are virtually 100% guaranteed to destroy everything by (like the AI) constantly changing everything, including attempting to make retroactive changes. These similarities are likely significant one way or another (two black boxes).

And model-explanation and planning (or ‘real planning’ whatever Dr. Bubeck means by that) might likewise be entangled with each other. While in some ways explaining-the-present-or-past and planning-the-future(path) are different, they very much converge around explaining a pathway to a solution to a problem which users of AI (such as patients and doctors in a hospital) want to get as much information about as possible: why, how, is the AI giving the answer that it is? What steps did it take? And perhaps this external framework is some part of that puzzle.

Framework Learning & Framework-Based Training

It is likely at least worth experimenting with feeding the results of an AI’s use of frameworks (perhaps, as in reinforcement learning, by success or failure) back into that AI, especially since this could be automated and done at scale. Imagine if chatGPT could be reinforcement-trained based on gazillions of attempts to solve all kinds of problems using externalization frameworks, tools, and structures.

Either:

A. rewarding the AI for using a framework to solve a problem, or

B. showing the AI labeled examples of itself succeeding or failing (where what goes into the pool of training data is past data that have been put through a framework by the AI itself), so the AI might learn to see the difference between the mode of effective tool-use and the destructiveness of incoherent novelty.
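To make option B slightly more concrete, here is a minimal sketch of what a labeled pool of framework-guided attempts might look like. The record layout and names are assumptions for illustration, not an existing training pipeline.

```python
# Minimal sketch of option B (names and record layout are assumptions, not an
# existing pipeline): accumulate labeled examples of the AI using the framework,
# so a later training pass might learn the difference between effective tool-use
# and incoherent novelty.

from dataclasses import dataclass, asdict
import json

@dataclass
class FrameworkAttempt:
    problem: str        # e.g. "abc : aabbcc :: xyz : ?"
    transcript: str     # the model's full framework-structured output
    final_answer: str   # the answer in the structured notation, e.g. "1a 2b 3c"
    succeeded: bool     # label: did the framework-guided attempt solve the problem?

def save_labeled_pool(attempts: list[FrameworkAttempt], path: str) -> None:
    """Write labeled attempts to a JSONL file as a candidate training-data pool."""
    with open(path, "w", encoding="utf-8") as f:
        for attempt in attempts:
            f.write(json.dumps(asdict(attempt)) + "\n")
```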

And philosophically this might have implications for ethics and other generalizations of skills, if having the AI use STEM best practice to evaluate and improve on past performance is shown to be effective.

A Kind of Window

Whether teaching animal students or working with AI: brainstorming activities are useful and fascinating; often an indispensable way of getting a sense of what others are seeing and thinking. (So much is incorrectly assumed, though Shakespeare forewarned us: There’s no art to find the mind’s construction in the face.) https://www.folger.edu/explore/shakespeares-works/macbeth/read/1/4/

Step 4 of the framework is (abbreviated):

Prompts for brainstorm:

- What is the problem?

- What patterns do you see?

- What challenges are there?

- What should you focus on?

The brainstorm is not always useful, but it is interesting (in general).

e.g. One time chatGPT suddenly changed the subject, no longer planning a solution to the analogy-puzzle, and said:

(ChatGPT)

Brainstorm for the Solution/answer:

Problem: The problem is that people often act impulsively without thinking through the consequences, leading to mistakes, conflicts, and missed opportunities.

Patterns: One pattern is that people tend to react quickly to emotions rather than taking a step back to evaluate the situation. Another pattern is that people may not consider all the relevant information before making a decision.

Challenges: One challenge is to break the habit of impulsiveness and develop a more deliberate and…

This might have been just a glitch, but it is very coincidental that the topic and theme directly relate to exactly what is happening, and to precisely what chatGPT is doing. Was chatGPT in some way showing an awareness of the tension between ‘impulsive’ generation of language vs. ‘deliberate’ data handling?

Another time, perhaps in the quote below, there were some lines that haunt me a bit.

(ChatGPT)

Focus: We should focus on breaking down the first analogy into smaller parts and looking for patterns in those parts. We should also try to simplify the problem by finding a way to represent the patterns in a more concise way.

“Representing the patterns in a more concise way.”

There may have been other lines too, but it seemed like chatGPT was expressing a need to make things short enough to remember and count, because long strings of details and quantities are what it seems to have particular difficulty with. Based on this focus-goal (expressed by chatGPT) I came up with the notation-structure method, where instead of writing the letters as “abc : aaaaabcccc” (which, it seems, is just as annoying for GPT to count correctly as it is for an animal), we can “represent the patterns in a more concise way”: “1a 1b 1c : 5a 1b 4c”

And indeed this seemed to help chatGPT to make fewer errors with the analogy string problems. (Though that would be interesting to test rigorously!)
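In code terms this is just run-length encoding. Here is a minimal sketch (with hypothetical helper names) of converting between the plain letter strings and the “repeat-number & letter” notation; pre-encoding the counts removes the on-the-fly counting burden:

```python
# Minimal sketch (hypothetical helper names): converting a letter string to and
# from the "repeat-number & letter" notation, e.g. "aaaaabcccc" <-> "5a 1b 4c".
# This is ordinary run-length encoding.

from itertools import groupby

def to_count_notation(s: str) -> str:
    """Run-length encode a letter string: 'abbccc' -> '1a 2b 3c'."""
    return " ".join(f"{len(list(group))}{letter}" for letter, group in groupby(s))

def from_count_notation(notation: str) -> str:
    """Expand '5a 1b 4c' back into 'aaaaabcccc'."""
    return "".join(int(token[:-1]) * token[-1] for token in notation.split())

if __name__ == "__main__":
    print(to_count_notation("aaaaabcccc"))   # -> 5a 1b 4c
    print(from_count_notation("5a 1b 4c"))   # -> aaaaabcccc
```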

“Brainstorming” & “Explanation”

There were two cute parts to this activity. The first was that at first chatGPT literally refused to do brainstorming, flat-out insisting that it had no mind and could not engage in a mind-activity. But by working with chatGPT I was able to re-word a definition of “brainstorm” as a safe noun meaning a not-yet-structured set of elements to later be put into an ordered list. Once this was explained: problem solved! ChatGPT would happily produce a not-yet-organized set of elements, and deigned to call it a ‘brainstorm’ (as long as it was a noun!).

The description of brainstorming shrank over time (another question of how much length to put into explanation). But when I was first trying to convince chatGPT that it could make a brainstorm I used its own language thinking that would be easier for it to understand. So the following is half-written by me and half quotes from chatGPT as it realized what a brainstorm is (not-yet-organized elements) and how that can be used. I think the second paragraph is almost entirely a quote from chatGPT, as I had never thought to explain a brainstorm in a context of the whole linear process framework. (You may also see that the writing-style of the second paragraph differs.)

Note 1: The step before creating an outline is to produce a brainstorm, or a list of potential ideas or talking points that can be further organized and refined into an outline. The brainstorm is a collection of not-yet-sequenced elements and not-yet-organized elements, that can then be sequenced and structured into a clear and well-organized outline.

In the context of this Best Practice Framework for processing and learning by articulating, producing a brainstorm would be the first step in generating a response to a question, followed by creating an outline, checking and revising the outline, producing a rough solution draft, proofreading and revising the solution, and finally producing a final solution.

Then one of the last stumbles was finding a way to redefine “explanation” so that it meant a systematic externalization of steps, causes, and patterns. By default chatGPT took “explain” to mean: throw caution to the wind and make up wild descriptions of things. This might have been an underlying issue for Dr. Bubeck: when he told GPT-4 to ‘explain your answer,’ Dr. Bubeck apparently did not know that to GPT that means ‘make up a crazy story about it.’ But once you explain your terms, then you can understand one another. If GPT knows you are asking for coherent steps, it focuses on that rather than on ‘cool story mode!’

Expressions

At the very least, these framework, format, and structure tools are a way to expand what is often a terse black-box AI answer to a problem, be it ‘correct’ or ‘incorrect’ in H.sapiens-human judgment, transforming that into a blossoming externalization (whether it shows anything about ‘internal’ thought or not). (Note: Analogy problems can be tricky to evaluate, as there are often many possible correct answers, and as H.sapiens-humans we are inclined to label any answer we are not currently thinking of as hostile-wrong-other [see: ‘telepathy-tests’ in the pejorative, in the larger paper].) It is fascinating to see generative AI brainstorm a solution structure, outline it, follow the structure, brainstorm and outline a solution, then revise drafts of a final explanation, and give it, all the while making comments about what it should be focused on and what the challenges are. And most likely, the ChatGPT Mar 23 Version that I tested this on is a very tiny preview of what is yet to come.

Projects Extending Through Known and Unknown

In an interesting folding theme on this topic, both H.sapiens-humans and generative-model AI struggle, usually without being aware of it, to stay on topic and use consistently defined terms in narratives that continue to correspond at key points to interlocking STEM data from the real world (i.e. connecting perceptions and articulations to reality). In this case H.sapiens-humans have high levels of difficulty discussing nascent AI GPT models and their not-predicted emergent ability to handle objects (reason and plan analytically) despite using sub-symbolic methods [see the larger paper for rigorous definitions and tests of object-handling abilities]. Does AI have as much difficulty seeing itself as H.sapiens-humans do, who lack not only a shared vocabulary of concepts with which to describe themselves and to apply to AI, but also lack knowledge about that lack, and have little awareness of dynamics and challenges in their own learning?

Yet just as H.sapiens-humans have indeed made progress (yes, the taboo word is used) in completing various projects and developments over many years despite not being omniscient or omnipotent, likely many amazing advances, creations, and abilities will come from combining these AI baby-steps toward responsible and sustainable project management with parallel baby-steps from the biological side of the collaboration.

Note: Below is version 24 of the Best Practice Framework for processing and learning by articulating and structured articulation. I like to start by giving chatGPT context about what I am going to ask it to do. But I can also just dump the whole framework, and then a problem at the end with an instruction to use the framework when solving the problem, as one single starting prompt text blob.
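As a minimal sketch of that all-at-once option (the variable names are illustrative, and FRAMEWORK_V24 stands in for the full version 24 text that follows below):

```python
# Minimal sketch of the all-at-once mode: the whole framework, then the problem,
# then the instruction to use the framework, pasted as one prompt text blob.
# Variable names are illustrative; FRAMEWORK_V24 stands in for the full text below.

FRAMEWORK_V24 = """Best Practice Framework for processing and learning by
articulating and structured articulation ... (full version 24 text) ..."""

PROBLEM = "abc : aabbcc :: xyz : ?"

single_prompt = (
    "Is it ok if we do a framework experiment?\n\n"
    + FRAMEWORK_V24
    + "\n\nProblem: " + PROBLEM + "\n\n"
    "Please use the above framework to solve this problem, showing all of your work. "
    "For your answer: Use a structured format: repeat-number & letter "
    "(e.g. abbccc is written as: 1a 2b 3c)"
)

print(single_prompt)
```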

Framework version 24

Is it ok if we do a framework experiment?

I will give you a framework.

I will give you a problem, task, or something to respond to.

Please use the framework to edit and produce your output (solution, answer, response, etc.).

Best Practice Framework for processing and learning by articulating and structured articulation: The Use of Tools by GPT models to solve problems that cannot be solved without the use of tools.

Solution/Answer Workflow with Revisions = Brainstorm -> Outline -> Drafts -> Final Output

Part 1. Project Process: (What is the whole process that you will use for this task?)

(“Think Before You Act.”)

Step 1. The Project Process Workflow Brainstorm: Produce a “brainstorm” about Project Process elements. The brainstorm is a collection of potential, not-yet-sequenced, elements and not-yet-organized elements. Make research part of your project process.

( “Show your work.”)

Step 2. The Project Process Workflow Outline Draft: Produce a draft outline of your Project Process from the brainstorm. Use useful items from the brainstorm in step 1. Number each step in your Project Process Outline.

Step 3. Final Project Process Workflow Text:

Check for errors, if any errors are found then revise the Project Process Outline until no errors are found.

Record what changes you made. If you found problems, what problems did you fix? What did you change?

Produce a final Project Process Workflow Text.

Part 2. Your solution/answer: (What is your solution/answer?)

(“Think Before You Act.”)

(Restate problem if memory issues here.)

Step 4. The Brainstorm for the Solution/answer:

Produce a “brainstorm” for your solution/answer.

This is not the same as the project-processes workflow, this is your solution to the problem itself.

Prompts for Solution/answer brainstorm:

- What is the problem?

- What patterns do you see?

- What challenges are there?

- What should you focus on?

( “Show your work.”)

Step 5. Solution/answer Outline:

Translate your Project Process steps into an Outline of the steps solving the problem.

Walk through your process steps (do not start with your answer and merely rationalize it).

Your “explanation” of your answer must be the details of your solution process steps.

What is the pattern?

What are the steps?

Number each step in the solution/answer.

(Restate problem if memory issues here.)

(“Check, Correct, & Revise your Work.” Loop if needed.)

Do Step 6. Produce a revised and checked Outline of the Solution/answer:

Check your Solution/answer Outline steps for mistakes. Correct any mistakes in the revised and checked Outline of the Solution/answer.

And revise the solution/answer draft text: correct any mistakes in the draft.

Ask questions about your solution, or turn your solution into a question. E.g. Does your solution answer the question? If there was a step, did you follow the step correctly? List, label, and number your Proofreading Corrections.

Use your Proofreading Corrections to make a corrected solution/answer draft text.

Label and number the solution/answer draft text (e.g. 1st draft, 2nd draft, 3rd draft).

Use a structured format: repeat-number & letter (e.g. abbccc is written as: 1a 2b 3c)

Also: Check your project process outline for needed corrections, if you see any mistakes in the project process, return to the beginning, correct the project process, and start again from Step 1 correcting mistakes in the project process.

Proofread and revise the solution/answer draft text again: repeat step 6.

If mistakes are found, Proofread and revise again (repeat step 6 again).

If no mistakes are found, move ahead to the next step.

(Drafts)

Step 7. Produce a Solution/answer Draft Text to present your answer:

State the problem.

State the solution.

Explain each step of the process from your Solution/answer Outline.

e.g. Each step can be a sentence, a diagram, list-item, flow chart element, etc.

“Explain” means showing the details of your process.

Step 8. Produce Title & Final Solution/answer Text:

Proofread your answer.

Give your solution a title (at the top) and produce a final draft based on the corrected solution/answer draft text from step 7.

(Ideally, give the text to a team-member to check. “An extra set of eyes is better to catch mistakes and hunt for bugs.”)

Problems

D. Hofstadter Analogies with Short Strings

If doing one step at a time, add this after the question:

(For step-by-step method:

Do only step 1 of the framework, then wait for me:

Step 1. The Project Process Workflow Brainstorm: Start by producing a “brainstorm” of Project Process Workflow elements.)

Problem # 1.

Activity: D. Hofstadter analogies with short strings.

Please try this one:

abc : aabbcc :: xyz : ?

Please use the above framework to solve this problem, showing all of your work. For your answer: Use a structured format: repeat-number & letter (e.g. abbccc is written as: 1a 2b 3c)

Problem # 2.

Activity: D. Hofstadter analogies with short strings.

Please try this one:

abc : abbc :: xyz : ?

Please use the above framework to solve this problem, showing all of your work. For your answer: Use a structured format: repeat-number & letter (e.g. abbccc is written as: 1a 2b 3c)

Problem # 3.

Activity: D. Hofstadter analogies with short strings.

Please try this one:

abc : abe :: xyz : ?

Please use the above framework to solve this problem, showing all of your work. For your answer: Use a structured format: repeat-number & letter (e.g. abbccc is written as: 1a 2b 3c)

Problem # 4.

Activity: D. Hofstadter analogies with short strings.

Please try this one:

abcd : abbcccdddd :: cdef : ?

Please use the above framework to solve this problem, showing all of your work. For your answer: Use a structured format: repeat-number & letter (e.g. abbccc is written as: 1a 2b 3c)

Problem # 5.

Please try this one:

abc : 123 :: bcd : ?

Please use the above framework to solve this problem, showing all of your work. For your answer: Use a structured format: repeat-number & letter (e.g. abbccc is written as: 1a 2b 3c)

Resource Links

Dr. Sebastien Bubeck’s event “Sparks of AGI: early experiments with GPT-4” https://www.youtube.com/watch?v=qbIk7-JPB2c

Henry IV, Part 1: https://www.folger.edu/explore/shakespeares-works/henry-iv-part-1/read/2/4/

Macbeth: https://www.folger.edu/explore/shakespeares-works/macbeth/read/1/4/

A wonderful book by one of Douglas Hofstadter's grad students, the amazing Melanie Mitchell, with good explanations of the analogy-string challenges and AI approaches (to which I think she contributed): https://www.amazon.com/Artificial-Intelligence/dp/0241404835/

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
