Published in Geek Culture

Data Structures for Artificial General Intelligence (AGI)

Yeah, I know… symbolic AI isn’t cool… yet!


Symbolic AI (or “hand-coded” AI) has gotten a bad reputation in the age of neural networks and machine learning, where it’s assumed that all knowledge can be extracted (bottom-up) from big labeled datasets. Why bother to hand-code symbols, rules and relationships when these already exist in the data?

However, artificial general intelligence (AGI) is different. If the goal of AGI is to build a computer that can think on its own and solve complex problems, symbolic AI is the way to go. Sure, neural networks still have a role to play in pattern recognition, prediction, reward-seeking, and optimization. But humans have complex motivations and behaviors — ambition, greed, caring, empathy, fear of being judged, and need for social acceptance — that neural networks simply can’t deal with. In our minds, we need ways to represent, process, and simulate/model the world (with all its objects, actors, space, time and causality), independent of raw data and sensory inputs.

Here, then, is a potential set of data structures the mind could use to represent our internal thoughts, perceptions, plans, predictions, and motivations. Operating on these mental representations is a set of mental rules, both learned and innate:

Objects (things, stuff)

  • Ability to represent physical things that we sense in the environment, along with their properties and classification (e.g., IS-A car). When I close my eyes, the car in front of me continues to exist in my mental simulation.
  • Objects can be hierarchical (e.g., PART-OF a car is an engine, PART-OF an engine is a spark plug, etc.)
  • My mental rules generate these “object” data structures from raw sense perception; the “objects” persist in memory and later serve to recognize, identify, associate, and predict instances of themselves. I assume each mental representation gets a GUID or some sort of unique encoding, to allow it to be referenced by mental rules.
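As a rough sketch in Python (class and field names are my own illustrative choices, not the author's), an object representation with a unique id and IS-A / PART-OF links might look like:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class MentalObject:
    """A hypothetical 'object' representation with a unique id."""
    name: str
    is_a: list = field(default_factory=list)     # classification links, e.g. IS-A vehicle
    part_of: list = field(default_factory=list)  # hierarchy links, e.g. PART-OF car
    # Each representation gets a GUID so mental rules can reference it later
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))

# PART-OF a car is an engine, PART-OF an engine is a spark plug
car = MentalObject("car", is_a=["vehicle"])
engine = MentalObject("engine", part_of=[car.uid])
spark_plug = MentalObject("spark plug", part_of=[engine.uid])
```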

Actors (living objects)

  • A data structure that represents myself and other people (and animals) and their agendas, motives, and behaviors. An actor is a special type of object that has its own internal motivations and agency (free will)
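One way to capture "a special type of object with its own motivations" is plain subclassing; this is a minimal sketch with invented field names:

```python
from dataclasses import dataclass, field

@dataclass
class MentalObject:
    name: str

@dataclass
class Actor(MentalObject):
    """An actor is a special type of object with its own motivations and agency."""
    motivations: list = field(default_factory=list)

joe = Actor("Joe", motivations=["ambition", "social acceptance"])
```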

Properties (attributes)

  • An encoding (or array of numbers) that represents an attribute (HAS-PROPERTY) of an object, e.g., color, shape, texture, sound, smell
  • Can be hierarchical, e.g., pixels form lines, lines form shapes, etc.
  • Each property comes with a set of mental rules to recognize / identify / associate / predict the property
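A property as "an encoding or array of numbers" plus a recognition rule could be sketched like this (the encoding values are made up for illustration):

```python
# A property as a numeric encoding plus a fuzzy recognizer rule.
RED = {"name": "color:red", "encoding": [0.9, 0.1, 0.1]}

def recognizes(prop, observed, tolerance=0.2):
    """Fuzzy match: does an observed encoding fall within tolerance of the property?"""
    return all(abs(a - b) <= tolerance
               for a, b in zip(prop["encoding"], observed))

recognizes(RED, [0.85, 0.05, 0.15])  # close enough to count as red
```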

Relationships

  • A named relation between mental representations, e.g. IS-A, PART-OF, etc. (a property is a relationship as well)
  • Verbs — e.g., owns, hates, sells, runs — are also relationships.
  • A relationship is usually associated with an event/belief — when and why you believe the relationship is true, with a confidence level (probability) in the relationship
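A named relation with a confidence level might be as simple as a triple plus a probability (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    subject: str          # uid or name of a mental representation
    relation: str         # IS-A, PART-OF, owns, hates, ...
    obj: str
    confidence: float = 1.0  # how strongly the relationship is believed

owns = Relationship("Joe", "owns", "ball", confidence=0.9)
```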

Events and beliefs

Metadata around an event or belief, including:

  • What is believed: object/actor + relationship + object/actor, e.g., “Joe owns a ball” [optional: negation, “Joe doesn’t own a ball.”]
  • Probability of the belief
  • Who (actors/objects) is involved in the belief
  • When — time that the event/belief happened
  • Where — location of the event/belief
  • How many — quantity of objects in the event/belief
  • Attribution — how (from whom) you learned about the event/belief
  • Why — cause of the event/belief
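Collecting the metadata above into one record, a belief could look like the following sketch (field names mirror the bullets; the example values are invented):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Belief:
    """Metadata around an event or belief."""
    fact: tuple                 # (subject, relation, object), e.g. ("Joe", "owns", "ball")
    negated: bool = False       # "Joe doesn't own a ball"
    probability: float = 1.0    # probability of the belief
    who: list = field(default_factory=list)    # actors/objects involved
    when: Optional[str] = None                 # time of the event/belief
    where: Optional[str] = None                # location
    how_many: Optional[int] = None             # quantity of objects
    attribution: Optional[str] = None          # from whom you learned it
    why: Optional[str] = None                  # cause

b = Belief(("Joe", "owns", "ball"), probability=0.8,
           who=["Joe"], when="yesterday", attribution="Sue told me")
```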

Causes (causality)

  • A relationship between two events, where the first one “causes” the second — e.g., “Bill pushed Sue, causing Sue to fall down” [optional: negation]
  • Indication whether the cause was accidental or on purpose — very important in human affairs!
  • Causes can be hierarchical
  • Maintain a Bayesian probability of the “causing event”
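Maintaining a Bayesian probability for a causal link can be approximated with simple counting; here is a sketch using a Laplace-style prior (a simplifying assumption on my part, not the author's specification):

```python
class CausalLink:
    """A causal link between two events, tracking a running probability estimate."""
    def __init__(self, cause, effect, deliberate=True):
        self.cause, self.effect = cause, effect
        self.deliberate = deliberate   # accidental vs. on purpose
        self.hits, self.trials = 1, 2  # Laplace-style prior: start at P = 0.5

    def observe(self, effect_followed):
        """Update the estimate each time the cause is observed."""
        self.trials += 1
        self.hits += 1 if effect_followed else 0

    @property
    def probability(self):
        return self.hits / self.trials

push = CausalLink("Bill pushed Sue", "Sue fell down", deliberate=True)
push.observe(True)
push.observe(True)
push.observe(False)
```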

Time

  • “When” an event happens. Can be absolute time (milliseconds since 1900), relative, or qualitative (today, yesterday, a few minutes ago, since last full moon, time it takes to walk to the neighbor’s, etc.)
  • Times can be relative and combined, e.g., a long time ago, but just a few seconds after that other event that also happened a long time ago
  • Time can represent specific timepoints or durations, e.g., time between full moons
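Absolute, qualitative, and relative-combined times could all live in one tagged representation; this sketch (tags and values invented) shows a relative time chained onto a qualitative one:

```python
# Times as tagged tuples: absolute, qualitative, or relative to another time.
long_ago = ("qualitative", "a long time ago")
other_event = ("relative", "a few seconds after", long_ago)  # relative + combined
noon = ("absolute_ms", 3_786_825_600_000)  # milliseconds since 1900 (made-up value)

def describe(t):
    """Render a time representation as text, following relative links."""
    kind = t[0]
    if kind == "qualitative":
        return t[1]
    if kind == "relative":
        return f"{t[1]} {describe(t[2])}"
    return f"t={t[1]}ms"
```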

Space (location)

  • Location of an event, can be absolute (GPS coordinates), relative, or qualitative (within my grasp, within a short walk)
  • Locations can be relative and combined, e.g., far away, a few feet from that other location
  • Locations can represent points in space, areas, 2D/3D approximations (x is behind y but in front of z), and trajectories in space
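The "x is behind y but in front of z" case can be sketched as a qualitative depth ordering (a deliberately minimal 1-D approximation of my own):

```python
# Qualitative depth ordering for "x is behind y but in front of z":
depth = ["y", "x", "z"]  # ordered from front (viewer side) to back

def is_behind(a, b):
    """True if a sits further back than b in the depth ordering."""
    return depth.index(a) > depth.index(b)

is_behind("x", "y")  # x is behind y
is_behind("x", "z")  # False: x is in front of z
```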

Quantities

  • Quantities of objects, can be absolute (0, 1, 2, 3) or qualitative (none, one, two, three, a few, many)
  • Quantities can be combined, e.g., many, but fewer than the other quantity
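Qualitative quantities compare naturally if they sit on an ordered scale; a minimal sketch (the scale itself is taken from the bullet above):

```python
# Qualitative quantities on an ordered scale, comparable with each other.
SCALE = ["none", "one", "two", "three", "a few", "many"]

def fewer_than(q1, q2):
    """Compare two qualitative quantities by position on the scale."""
    return SCALE.index(q1) < SCALE.index(q2)

fewer_than("a few", "many")  # True: "many, but fewer than the other quantity"
```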

Negation

  • Represent the absence of something (usually an event), or counterfactual.
  • I don’t have… It didn’t happen… It’s not the case that… What if it happened?
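Absence and counterfactuals can be flags wrapped around an ordinary fact, as in this sketch (keys are illustrative):

```python
# Negation / counterfactuals as wrappers around a fact.
fact = ("Joe", "owns", "ball")
denial = {"fact": fact, "negated": True}          # "Joe doesn't own a ball"
what_if = {"fact": fact, "counterfactual": True}  # "What if Joe owned a ball?"
```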

Plans (actions)

  • A series of steps, or an encoding or array of numbers representing low-level actions — e.g., flex a muscle.
  • A list of possible actions to perform toward a goal, ranked by potential reward or utility. May be hierarchical (plans within plans)
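A hierarchical plan plus a utility-ranked action list might be sketched like this (goal names and utility values are invented for illustration):

```python
# A plan as nested steps; steps may themselves be plans (plans within plans).
plan = {
    "goal": "get the ball",
    "steps": [
        {"action": "walk to ball", "steps": [{"action": "flex leg muscles"}]},
        {"action": "pick up ball"},
    ],
}

# Candidate actions toward a goal, ranked by estimated reward/utility.
candidates = [("ask for ball", 0.3), ("buy ball", 0.5), ("get the ball", 0.9)]
best = max(candidates, key=lambda c: c[1])
```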

Goals (motivations)

  • A set of mental rules/functions that determine progress of a plan toward a reward or utility, including the ability to recognize (albeit fuzzily) the desired end state and assess errors in accomplishing it.
  • Human motivations include: ambition, greed, lust, life/freedom, benevolence, caring, social conformity, fear of being judged, eating, evading predators, seeking shelter, reproducing, etc.
  • An instinct or innate behavior is simply a prior goal. The familiar “trolley car problem” illustrates that humans are innately terrified at choosing to cause harm to others, even for good reason — which could also explain deference to authority (let someone else decide the matter!).
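The idea of fuzzily recognizing a desired end state and assessing the remaining error could be sketched as a pair of functions over a toy one-dimensional state (the 0..1 "hydration" scale is my own stand-in):

```python
# A goal as a fuzzy end-state recognizer plus an error measure toward it.
def make_goal(target, tolerance=0.1):
    def error(state):
        return abs(state - target)        # how far from the desired end state
    def achieved(state):
        return error(state) <= tolerance  # fuzzy recognition of "done"
    return error, achieved

# e.g. a thirst-quenching goal on a 0..1 "hydration" scale (illustrative)
error, achieved = make_goal(target=1.0)
achieved(0.95)  # close enough to count as satisfied
```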

Attention

  • The current set of mental representations in our “conscious” focus (working memory)
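A bounded buffer is one simple model of a working-memory focus; the capacity of four items here is an illustrative assumption, not a claim from the article:

```python
from collections import deque

# Working memory as a small bounded buffer of representations in focus.
focus = deque(maxlen=4)
for thought in ["car", "engine", "Joe", "ball", "full moon"]:
    focus.append(thought)  # when full, the oldest item is pushed out of focus
```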

The mental representations above don’t just link together directly like pointers in a C program. They’re more like spherical magnets floating in the mental soup, sometimes bumping into each other and getting attached, sometimes falling apart and even getting lost.

Mental rules act on mental representations, but they also float around in the mental soup, constantly on the lookout for relevant mental representations on which to act (like CSS selectors). Mental rules can generate new mental representations, or locate and re-write existing ones.
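The selector-like behavior can be sketched as (pattern, action) pairs that scan the soup for matching representations rather than holding direct pointers (all structures here are invented for illustration):

```python
# Mental rules as (pattern, action) pairs that scan the "soup" for matches.
soup = [
    {"kind": "object", "name": "car"},
    {"kind": "actor", "name": "Joe"},
]

def run_rules(rules, soup):
    """Apply each rule to every matching representation; rules may add new ones."""
    for pattern, action in rules:
        for rep in list(soup):       # snapshot, so new items aren't re-scanned
            if pattern(rep):
                soup.append(action(rep))

rules = [(lambda r: r["kind"] == "actor",
          lambda r: {"kind": "belief", "fact": (r["name"], "IS-A", "person")})]
run_rules(rules, soup)
```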

Mental rules are lazy. They only generate new mental representations on demand. For example, if Socrates is a man, and all men are mortal, we can infer that Socrates is mortal. But we don’t need a “truth maintenance” system in our mind to generate and maintain consistency of all possible implications of Socrates’ mortality, all the time. We can simply ponder the implications later — on demand — when needed. For now, we just record the fact verbatim, unconsidered.
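This lazy, on-demand style of inference can be sketched in a few lines: facts are recorded verbatim, and the implication chain is only walked when a query arrives:

```python
# Record facts verbatim; derive implications only when queried, not eagerly.
facts = {("Socrates", "IS-A", "man"), ("man", "IS-A", "mortal")}

def holds(subject, obj):
    """Is subject IS-A obj, directly or via a chain of IS-A facts? (computed on demand)"""
    if (subject, "IS-A", obj) in facts:
        return True
    return any(holds(mid, obj) for s, _, mid in facts if s == subject)

holds("Socrates", "mortal")  # the implication is derived only now, when asked
```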

One final thought. For an innate mental rule to be possible, that rule must be able to reference mental representations using a pre-determined, ancient naming convention, before those representations and skills are learned (i.e., “late binding”). For example, the rules to CRAVE-SOCIAL-ACCEPTANCE and FEAR-BEING-JUDGED and FIND-POTENTIAL-MATE, if they exist, must rely on concepts (society, culture, human relationships) that we barely know when we’re born, yet fully “expect” to learn later. Somehow, as we learn these concepts, our mental representations are tagged with pre-defined, invariant names (known to our DNA) to allow innate mental rules to find them and act on them later in life.



Rob Vermiller

A computer scientist with a passion for AI and Cognitive Science, and author of the Programmer's Guide to the Brain.