Metacognition — a key to AGI

Cyril Sadovsky
AI monks.io
Nov 25, 2023

Steps towards AGI (artificial general intelligence) are perhaps defined by the same factors that differentiate high IQ individuals from the general population — meta-thinking or metacognition.

In meta-thinking, one can think about one’s own thinking, one’s ways of learning, knowing, remembering and understanding, and can apply one’s thoughts to “big picture” or “non-linear” vision and insight.

Jennifer Harvey Sallin (High, Exceptional & Profound Giftedness — InterGifted)

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

Artificial general intelligence — Wikipedia

The simulation problem

One of the biggest hurdles current AI tech faces is not having enough data to train on. So the question arises: why are human experts still better and more capable in their respective fields than GenAI, which was trained on the whole internet?

We as humans have a special ability. We are able to think about the future. To be more exact, we are able to simulate future states. By running these simulations we are not only able to predict what may happen, but we also learn from this experience. In other words, we generate our own synthetic data on which we train our brains.
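To make this concrete, below is a minimal, purely illustrative Python sketch of that idea: an agent that never acts in the real world, but instead runs imagined rollouts through a toy world model and learns from the synthetic experience it generates. The world model, the reward, and the learning rule are all simplifying assumptions chosen for brevity, not a description of how any real system works.

```python
import random
from collections import defaultdict

# Toy world model: on a 1-D track, an action moves the agent left or
# right; the (imagined) goal is to reach position 5.
GOAL = 5

def world_model(state: int, action: int) -> tuple[int, float]:
    """Imagined (not real) transition: returns (next_state, reward)."""
    next_state = state + action
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def simulate_futures(start: int, n_rollouts: int = 200, horizon: int = 10):
    """Run imagined rollouts and collect them as synthetic experience."""
    synthetic_data = []
    for _ in range(n_rollouts):
        state = start
        for _ in range(horizon):
            action = random.choice([-1, +1])            # imagine an action
            next_state, reward = world_model(state, action)
            synthetic_data.append((state, action, reward))
            state = next_state
            if reward > 0:
                break
    return synthetic_data

def learn_from(synthetic_data):
    """Estimate how good each action is in each state, purely from the
    imagined experience; no real-world trials were used."""
    value, counts = defaultdict(float), defaultdict(int)
    for state, action, reward in synthetic_data:
        counts[(state, action)] += 1
        # incremental average of the simulated reward
        value[(state, action)] += (reward - value[(state, action)]) / counts[(state, action)]
    return value

if __name__ == "__main__":
    values = learn_from(simulate_futures(start=0))
    # The agent now prefers moving right when it is next to the goal,
    # even though it has never acted in a "real" environment.
    print(values[(4, +1)], values[(4, -1)])
```

This is, in spirit, what self-play and model-based reinforcement learning systems do at a much larger scale: the simulated trajectories become the training data.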

It’s in a way like playing computer games. Before I got my driving license, I got a racing wheel for my computer and played car racing games. Afterwards, when I sat in a real car, I was able to drive the car without thinking about it.

This ability hasn't been accessible to AIs, which is why they need so much real-world data to be trained on. Until now.

Metacognition

The way highly intelligent people approach difficult problems is by "spawning" separate agents or entities inside their own overarching cognition. Each of these agents tries a different path towards a solution, and those paths eventually converge on one solution within the overarching cognitive state.

It seems that the current potential breakthrough in AI tech lies in this exact realm of metacognition.

I really recommend watching these three videos, in which the authors explain why metacognition may be the key to AGI:

What is Q*? Speculation on how OpenAI’s Q* works and why this is a critical step towards AGI — YouTube

OpenAI’s Q* is the BIGGEST thing since Word2Vec… and possibly MUCH bigger — AGI is definitely near — YouTube

Q* Q Star Hypothesis | Is this hybrid of GPT and AlphaGO? AI self-play and synthetic data 🔥 — YouTube

With GPT-4, there were already prompting techniques that hinted at the potential of metacognition, such as Chain-of-Thought prompting for complex multi-step reasoning. But the driver in the seat was always the prompter: a human.
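For illustration, here is a rough sketch of what Chain-of-Thought prompting looks like in practice. The `complete()` function is a hypothetical placeholder for whichever LLM client you use; the point is only the difference between the two prompts, and that a human still writes them.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (plug in your own model client)."""
    raise NotImplementedError

QUESTION = "A store had 23 apples, sold 9, then received 12 more. How many are there now?"

# Direct prompt: the model is asked only for the final answer.
direct_prompt = f"{QUESTION}\nAnswer:"

# Chain-of-Thought prompt: the model is nudged to write out its
# intermediate reasoning steps before committing to an answer.
cot_prompt = (
    f"{QUESTION}\n"
    "Let's think step by step, then give the final answer on its own line."
)

# answer = complete(cot_prompt)  # the human prompter still steers the process
```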

In the context of an AI, it makes perfect sense to give the AI the ability to spawn separate AI agents that each try to find the best path towards a common goal, for example curing cancer. Furthermore, these agents could even communicate with each other and learn from one another.

This would give the AI the ultimate human abilities:

  1. to simulate a potential future and learn from it
  2. to spawn smaller AI agents (as separate sub-processes) working toward one common goal (a toy sketch of this idea follows below)
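Here is that toy sketch: a few worker agents search the same problem from different starting points in parallel sub-processes, share their best result, and the overarching process converges on one answer. The objective function and the agents' random local search are illustrative assumptions made for brevity; this is not how OpenAI or anyone else actually implements metacognition.

```python
import random
from concurrent.futures import ProcessPoolExecutor
from typing import Optional

def objective(x: float) -> float:
    """The common goal all agents work on (a toy stand-in for a hard task)."""
    return -(x - 3.0) ** 2 + 10.0   # best possible value is 10.0, at x = 3.0

def agent(seed: int, hint: Optional[float] = None) -> tuple[float, float]:
    """One sub-agent: random local search, optionally starting from a hint
    received from the other agents (their current best guess)."""
    rng = random.Random(seed)
    x = hint if hint is not None else rng.uniform(-10.0, 10.0)
    best_x, best_val = x, objective(x)
    for _ in range(500):
        candidate = best_x + rng.gauss(0.0, 0.5)
        val = objective(candidate)
        if val > best_val:
            best_x, best_val = candidate, val
    return best_x, best_val

if __name__ == "__main__":
    best_hint = None
    # Two rounds: agents explore independently, then re-explore around
    # the best guess shared by the whole group ("communication").
    for _ in range(2):
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(agent, range(4), [best_hint] * 4))
        best_hint = max(results, key=lambda r: r[1])[0]
    print(f"Converged on x = {best_hint:.3f}")
```

One could imagine replacing the toy objective with a real task and the random search with full reasoning models; the spawn, explore, communicate, converge loop is the shape the idea describes.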

If OpenAI were able to unlock the metacognition ability for an AI, it seems perfectly plausible that a machine which doesn't need to sleep or eat and doesn't have health hindrances would be able to come up with new kinds of mathematics or architectures that make the AI even more capable.

Personal note

The important concept to remember here is that if we create an AGI, it may be humanity's last achievement.

Although the gap between AGI and superintelligence may seem huge to us, for a machine it may be just a small step. A machine is not slowed down by any of our biological drawbacks. During an iterative process of self-improvement, it may spiral out of our mental grasp very quickly.
