Part 1 — AGI ≠ AHI: Mathematical Explanation of What AGI is Not

Freedom Preetham · Published in Autonomous Agents · Nov 21, 2023

The biggest source of confusion and debate about AGI (Artificial General Intelligence) is that people automatically assume it is AHI (Artificial Human Intelligence).

I have tried to make the following statement clear in this blog:

AGI ≠ AHI

Human Intelligence is more capable than we can currently conceive. We are still exploring what Human Intelligence means in all its dimensions, including those that gave rise to sentience and consciousness. These are uncomputable or intractable problems and cannot be replicated on the current “hardware”. AHI is not achievable.

I have tried to provide a narrative that seeks to unravel these concepts, firmly anchoring the discussion in a mathematical framework to highlight the distinct characteristics and limitations of AGI, as opposed to AHI. I have kept the Math extremely light to make this generally readable.

I will take a stab at what AGI is in a separate blog; this one is about what it is NOT.

Why AGI ≠ AHI?

The Incomprehensible Complexity: Human intelligence, a weave of cognitive, emotional, and experiential threads, stands as a paradigm far beyond the current computational grasp. Its dimensions extend into the realms of consciousness and sentience, which elude algorithmic replication. This complexity can be loosely quantified by a multidimensional function H mapping a range of cognitive and emotional factors to levels of human intelligence.
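As a loose, illustrative way of writing such a mapping (the factor spaces and the intelligence scale below are placeholders of my own, not quantities defined in this post):

$$H : \mathcal{C} \times \mathcal{E} \times \mathcal{X} \longrightarrow \mathcal{I}$$

where $\mathcal{C}$, $\mathcal{E}$, and $\mathcal{X}$ stand for spaces of cognitive, emotional, and experiential factors, and $\mathcal{I}$ for levels of intelligence. Nothing about this sketch claims these spaces are finite-dimensional or even formally definable; that is precisely the point of the intractability argument.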

In computational science, certain problems are inherently complex and resist exact solutions within a feasible time frame, leading to the use of algorithms that yield approximate results. Among these, a distinct category is the undecidable problems, for which no algorithm can provide a definitive “yes” or “no” answer across all possible inputs, even with theoretically unlimited computational power and time.

In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus. Turing’s proof was published later, in January 1937. Since then, many other undecidable problems have been described, including the halting problem itself, which was only formulated under that name in the 1950s.

Halting Problem and Computational Limits: The quest for an all-encompassing AI system mirrors the computational complexities of the “halting problem.” This problem, a cornerstone in computational theory, is expressed through Turing’s theorem:

There exists no algorithm A such that A(M,w) decides whether machine M halts on input w.
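To see why no such algorithm A can exist, here is a minimal Python sketch of Turing’s diagonal argument. The decider `would_halt` is hypothetical; the code only shows the contradiction that follows from assuming it exists.

```python
# Sketch of the diagonal argument behind the Halting Problem.
# 'would_halt' is a HYPOTHETICAL decider: assume, for contradiction,
# that it always returns True when program(arg) halts and False otherwise.

def would_halt(program, arg) -> bool:
    raise NotImplementedError("No such total decider can exist.")

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running 'program' on its own source.
    if would_halt(program, program):
        while True:   # decider said "halts", so loop forever
            pass
    return            # decider said "runs forever", so halt immediately

# Feeding 'paradox' to itself forces a contradiction:
# - If would_halt(paradox, paradox) is True, then paradox(paradox) loops forever.
# - If it is False, then paradox(paradox) halts.
# Either way the decider is wrong, so no algorithm A(M, w) can exist.
```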

Entscheidungsproblem: The greatest minds, including Kurt Gödel, Alonzo Church, Barkley Rosser, Emil Post, Alan Turing, and later Marvin Minsky, grappled with these questions and concluded that the Entscheidungsproblem is an unsolvable ‘decision’ problem; Turing showed this by relating it to what we now call the ‘Halting Problem’.

In the fields of mathematics and computer science, the Entscheidungsproblem, which translates to ‘decision problem’ from German, was introduced by David Hilbert and Wilhelm Ackermann in 1928. This challenge involves finding an algorithm that can take any given statement as its input and determine, with a “yes” or “no” answer, if the statement is universally valid — meaning it holds true in every structure that adheres to the specified axioms.

Their studies, particularly Turing’s analysis of the Entscheidungsproblem and Gödel’s Incompleteness Theorems, highlight the inherent computational and logical limitations:

Gödel’s First Incompleteness Theorem:

For any consistent formal system F capable of expressing basic arithmetic, there exist true statements in the language of F that are unprovable within F.

Turing proved three problems undecidable: the “satisfactoriness problem” (whether a machine is circle-free), the “printing problem” (whether a machine ever prints a given symbol), and the “Entscheidungsproblem”.

Now, there are ways to circumvent (approximate) these challenges in current AGI systems, but that is outside the scope of this discussion.

Is AGI a Turing Machine?

AGI (attempted) Definition: For the purposes of this discussion, let’s consider a working definition of AGI, though it may not capture all nuances. AGI, or Artificial General Intelligence, is a theoretical concept of a machine designed to understand, learn, and intelligently respond to a variety of challenges, ‘almost’ paralleling human cognitive abilities. Unlike systems confined to specialized tasks, AGI is envisaged to adapt its intelligence broadly across diverse domains.

Turing Machine Definition: A Turing Machine is a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a set of rules. It’s a fundamental concept in computer science used to understand the limits of what can be computed.
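For concreteness, here is a minimal sketch of that abstract machine in Python. The transition-table encoding is my own choice of representation, not a standard library API, and the example rules simply flip bits on the tape.

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1_000):
    """Simulate a simple one-tape Turing Machine.

    tape:  dict position -> symbol (blank cells default to '_')
    rules: dict (state, symbol) -> (new_symbol, move, new_state),
           where move is +1 (right) or -1 (left)
    """
    tape = dict(tape)
    for _ in range(max_steps):
        if state == "halt":
            return tape
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    raise RuntimeError("Machine did not halt within the step budget.")

# Example rules: walk right, flipping 0 <-> 1, and halt on the first blank cell.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine({0: "1", 1: "0", 2: "1"}, rules))
# {0: '0', 1: '1', 2: '0', 3: '_'}
```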

AGI as a Turing Machine: In theory, AGI, like other computational systems, could be considered a form of Turing Machine since it processes information algorithmically. However, whether an AGI would be fully encompassed by the theoretical limitations of a Turing Machine (such as being bound to discrete, symbolic computation) is a matter of debate. AGI might incorporate aspects of learning, understanding, and problem-solving that go beyond traditional algorithmic computation.

Will AGI have the “Halting Problem” constraints?

Halting Problem: As we saw, the Halting Problem is a decision problem in computability theory. It states that there is no general algorithm that can determine for every Turing Machine and input whether the machine stops running (halts) or continues to run indefinitely.

AGI and the Halting Problem: If AGI is considered under the framework of Turing Machines, it would inherit the constraints of the Halting Problem. This means that there would be certain problems or decisions that the AGI would not be able to conclusively resolve. However, the practical implications of this on AGI’s functionality are speculative and depend on the nature of AGI’s design and operation.

AGI might employ heuristics, approximations, or probabilistic reasoning to navigate problems akin to the Halting Problem, which are undecidable in a strict algorithmic sense.
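As a toy illustration of that style of workaround, the sketch below (my own example, not a method from any specific AGI system) runs a computation under a step budget and returns a three-valued answer instead of attempting an exact, impossible halting decision:

```python
from enum import Enum

class Verdict(Enum):
    HALTED = "halted"
    BUDGET_EXHAUSTED = "unknown (did not halt within budget)"

def run_with_budget(step_fn, state, max_steps=10_000):
    """Approximate 'does this halt?' by bounded simulation.

    step_fn(state) returns the next state, or None when the computation halts.
    Instead of deciding halting exactly (impossible in general), we give up
    after max_steps and report uncertainty.
    """
    for _ in range(max_steps):
        state = step_fn(state)
        if state is None:
            return Verdict.HALTED
    return Verdict.BUDGET_EXHAUSTED

# A counter that halts once it reaches zero: detected as halting.
print(run_with_budget(lambda n: None if n == 0 else n - 1, 50))   # Verdict.HALTED
# A computation that never halts: we only ever learn "unknown".
print(run_with_budget(lambda n: n + 1, 0))                        # Verdict.BUDGET_EXHAUSTED
```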

(I promise to revisit all this with more scientific, formal and mathematical rigor in a different blog. Again, I am trying to keep the math out of this blog for general reading)

AGI as a Composite Framework

Since, mathematically, a monolithic AI model that tries to do “everything” may run into “halting problem”-style limits, we must think of AGI as a stack of AI models governed by allocators.

Each AI model is designed for a specific aspect of intelligence and can achieve some percentile of human capability on clearly stated features of learning, reasoning, logic, etc.

This design aligns with a mathematical formulation that approximates human capabilities through a composite of specialized functions:

$$H(x) \approx f\big(M_1(x), M_2(x), \dots, M_n(x)\big)$$

where M_i is a model specializing in an aspect of intelligence and f combines their outputs (the combining form is made precise by the allocator formulation below).

Specialization and Allocators in AGI

As AGI’s design diverges from the idea of a singular AI entity, adopting a modular “Mixture of Experts” (MoE) framework is important. This involves a set of specialized models, each adept at specific tasks, coordinated by an allocator function A:

$$A(x) = \sum_{i=1}^{n} w_i(x)\, M_i(x)$$

Here, w_i(x) represents the weight assigned to the i-th model M_i for a given input x.
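A minimal sketch of such an allocator in code, assuming a softmax-style gate over expert scores (the expert functions and gating rule here are illustrative placeholders, not any specific AGI architecture):

```python
import math

def softmax(scores):
    """Convert raw gate scores into weights w_i(x) that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def allocator(x, experts, gate):
    """Mixture-of-Experts style allocation: A(x) = sum_i w_i(x) * M_i(x).

    experts: list of functions M_i, each specializing in one aspect of intelligence
    gate:    function returning one raw relevance score per expert for input x
    """
    weights = softmax(gate(x))                      # w_i(x)
    outputs = [expert(x) for expert in experts]     # M_i(x)
    return sum(w * y for w, y in zip(weights, outputs))

# Toy, scalar-valued placeholder experts and gate:
experts = [
    lambda x: x * 2.0,   # stand-in for a "reasoning" expert
    lambda x: x ** 2,    # stand-in for a "planning" expert
]
gate = lambda x: [1.0, 0.5] if x < 10 else [0.2, 2.0]

print(allocator(3.0, experts, gate))   # weighted blend of the expert outputs
```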

DeepMind’s AGI Taxonomy: Mathematical Structuring

DeepMind’s AGI taxonomy, presented in their publication, offers a structured approach to categorizing AGI’s capabilities. It is a foundational step, but further mathematical scrutiny and refinement are required for a more accurate representation.

Hierarchical Taxonomy and Its Mathematical Implications

The need for a hierarchical, level-based taxonomy in AGI becomes apparent, given the complexity of mimicking human-like features. This taxonomy can be mathematically represented as a function mapping AGI levels to their respective capabilities and complexities:

$$\tau(L) = \{\, C(L),\ \Omega(L) \,\}$$

In this expression, τ(L) represents the AGI Taxonomy Function for a given level L, C(L) denotes the capabilities at level L, and Ω(L) symbolizes the complexity at that level. The braces {} denote that the function returns a set of values, in this case the capability and complexity associated with each level of AGI.
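As a throwaway illustration of τ(L) in code, the sketch below models the taxonomy function as a lookup returning a (capability, complexity) pair per level; the level labels and descriptions are placeholders of my own, not entries from DeepMind’s paper:

```python
from typing import NamedTuple

class LevelProfile(NamedTuple):
    capability: str   # C(L): what systems at this level can do
    complexity: str   # Omega(L): how hard that level is to realize

# Placeholder taxonomy: labels and descriptions are illustrative only.
TAXONOMY = {
    1: LevelProfile("narrow, task-specific competence", "low"),
    2: LevelProfile("competent across several domains", "moderate"),
    3: LevelProfile("expert-level breadth across most domains", "high"),
}

def tau(level: int) -> LevelProfile:
    """AGI Taxonomy Function: maps a level L to {C(L), Omega(L)}."""
    return TAXONOMY[level]

print(tau(2))
# LevelProfile(capability='competent across several domains', complexity='moderate')
```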

The Crucial Role of Expert Discourse

The dialogues surrounding AGI vs AHI, imbued with complex mathematical concepts and computational theories, necessitate a focused approach within the scientific community (far away from irrational public discussions that lead to armchair criticism). The intricacies of these discussions are best addressed by experts in mathematics and advanced AI research, ensuring clarity and scientific rigor.

The delineation of AGI from AHI is a topic steeped in computational and mathematical complexity. AGI, as a synergy of specialized models each contributing to an aspect of intelligence, ‘may’ not replicate the complete spectrum of human cognition that gives rise to consciousness.

As AI continues to evolve, it is imperative that the discourse and advancements in AGI remain grounded in a mathematically comprehensive and scientifically robust framework. This approach ensures a clear understanding of AGI’s capabilities and limitations, distinguishing it from the often misunderstood concept of AHI.

In the next part, I am covering “What is AGI? Can it achieve human cognition?”
