A Short History of Foundational AGI Theories

SingularityNET · Aug 1, 2024

The dream of Artificial General Intelligence (AGI), a machine with human-like intelligence, is something that we can trace back to early computational theories in the 1950s, when pioneers like John von Neumann explored the possibilities of replicating the human brain’s functions.

Today, AGI represents a paradigm shift from the wide variety of narrow AI tools and algorithms that excel at specific tasks toward a form of intelligence that can learn, understand, and apply its knowledge across a wide range of tasks at or beyond the human level.

While the precise definition or characterization of AGI is not broadly agreed upon, the term “Artificial General Intelligence” has multiple closely related meanings, referring to the capacity of an engineered system to:

· Display the same rough sort of general intelligence as human beings;

· Display intelligence that is not tied to a highly specific set of tasks;

· Generalize what it has learned, including generalization to contexts qualitatively very different from those it has seen before;

· Take a broad view, and flexibly interpret the tasks at hand in the context of the world at large and its relation thereto.

The journey to AGI has been marked by numerous theories and conceptual frameworks, each contributing to our understanding of, and aspirations for, this seemingly imminent revolution in technology.

Let’s take a look back and explore some of the core theories and conceptualizations that have, over the long haul, given birth to the concept we know today as AGI.

Earliest Conceptualizations of AGI

Turing and the Turing Test (1950)

Alan Turing’s seminal paper, “Computing Machinery and Intelligence,” introduced the idea that machines could potentially exhibit intelligent behavior indistinguishable from humans.

The Turing Test, which evaluates a machine’s ability to exhibit human-like responses, became a foundational concept, emphasizing the importance of behavior in defining intelligence.

Soon after, in 1958, John von Neumann’s book, “The Computer and the Brain,” explored parallels between neural processes and computational systems, sparking early interest in neurocomputational models.

These initial conceptualizations gave birth to the era of Symbolic AI.

In the 1950s through 60s, Allen Newell and Herbert A. Simon proposed the Physical Symbol System Hypothesis, asserting that a physical symbol system has the necessary and sufficient means for general intelligent action.

This theory underpinned much of early AI research, leading to the development of symbolic AI, which focuses on high-level symbolic (human-readable) representations of problems and logic.

By the end of the 1960s, Marvin Minsky and Seymour Papert’s book, “Perceptrons,” critically examined early neural network models, highlighting their limitations. This work, while initially seen as a setback for connectionist models, eventually spurred deeper research into neural networks and their capabilities, influencing later developments in machine learning.

In 1956, Newell and Simon developed the Logic Theorist, considered by many to be the first real AI program. It was able to prove theorems in symbolic logic, marking a significant milestone in AI research and development. Two years later, in 1958, John McCarthy developed LISP, a programming language that became fundamental for AI research at the time.

In the 1970s, the early promises of AI faced significant setbacks. Expectations were high, but the technology could not deliver the grandiose benefits that had been promised.

Systems struggled with complex problems, and the limitations of early neural networks and symbolic AI became apparent. Due to the lack of progress and overhyped expectations, funding for AI research was reduced significantly. This period of reduced funding and interest is referred to as the first AI winter.

Neural Networks & Connectionism

In the 1980s, a resurgence in neural network research occurred.

The development and commercialization of expert systems brought AI back into the spotlight. These systems, which used knowledge bases and inference rules to mimic human expertise in specific domains, proved to be practically useful in industries like medicine, finance, and manufacturing.

Not to mention, advances in computer hardware at the time provided the necessary computational power to run more complex AI algorithms. This led to new techniques and algorithms, increased commercial interest, and increased investment in AI products.

The resurgence was driven by David Rumelhart, Geoffrey Hinton, and Ronald Williams’ development of the backpropagation algorithm in 1986.

This breakthrough enabled multi-layered neural networks to learn from data, effectively training complex models and rekindling interest in connectionist approaches to AI.
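A minimal sketch of backpropagation in a two-layer network, trained here on XOR; the task, layer sizes, and learning rate are illustrative choices, not part of the original formulation:

```python
import numpy as np

# Minimal two-layer network trained with backpropagation on XOR.
# The task, layer sizes, and learning rate are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the squared-error gradient layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print((out > 0.5).astype(int).ravel())  # typically recovers XOR: [0 1 1 0]
```

The key step is the chain rule in the backward pass: the output-layer error `d_out` is pushed back through `W2` to obtain the hidden-layer error `d_h`, which is exactly what single-layer models could not do.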

John Hopfield introduced Hopfield networks in 1982, demonstrating how neural networks could solve optimization problems. Between 1983 and 1985, Geoffrey Hinton and Terry Sejnowski developed Boltzmann machines, further advancing neural network theory by demonstrating the potential of neural networks to solve complex problems through distributed representations and probabilistic reasoning.
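A toy Hopfield network fits in a few lines; the patterns below are illustrative, with storage via the Hebbian outer-product rule and recall via asynchronous sign updates:

```python
import numpy as np

# Toy Hopfield network as associative memory (illustrative example).
patterns = np.array([
    [1, 1, 1, -1, -1, -1],
    [-1, -1, 1, 1, 1, -1],
])
n = patterns.shape[1]

# Outer-product (Hebbian) storage rule; zero the diagonal (no self-connections)
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    state = state.copy()
    for _ in range(steps):
        for i in range(n):  # asynchronous sign update for each neuron
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt the first stored pattern in one position, then recover it
noisy = patterns[0].copy()
noisy[0] = -1
print(recall(noisy))  # recovers patterns[0]: [ 1  1  1 -1 -1 -1]
```

Each update moves the network downhill in an energy function, which is why recall settles into the nearest stored pattern; the same energy-minimization view is what connects Hopfield networks to optimization problems.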

The Advent of Machine Learning and Deep Learning

Hebbian Learning and Self-Organizing Maps (1949, 1982)

Donald Hebb’s principle, often summarized as “cells that fire together, wire together,” laid the foundation for unsupervised learning algorithms. Finnish Professor Teuvo Kohonen’s self-organizing maps in 1982 built on these principles, showing how systems could self-organize to form meaningful patterns without explicit supervision.
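Hebb’s rule itself can be sketched in a few lines; the data, learning rate, and co-firing setup below are illustrative assumptions:

```python
import numpy as np

# Hebb's rule: delta_w_i = eta * y * x_i -- a weight grows when pre- and
# post-synaptic units fire together. Data and learning rate are illustrative.
rng = np.random.default_rng(1)
pre = rng.integers(0, 2, size=(100, 4)).astype(float)  # presynaptic activity
post = pre[:, 0]             # postsynaptic unit that co-fires with input 0

w = np.zeros(4)
eta = 0.1
for x, y in zip(pre, post):
    w += eta * y * x         # strengthen weights for co-active pairs

print(w)  # the weight from the co-firing input (index 0) dominates
```

No labels or error signal are used; the correlation structure of the activity alone shapes the weights, which is the sense in which Hebbian learning is unsupervised.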

Deep Learning and the ImageNet Breakthrough (2012)

The ImageNet breakthrough in 2012, marked by the success of AlexNet, revolutionized the field of AI and deep learning. Developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, AlexNet’s deep convolutional neural network architecture, featuring innovations like ReLU activation, dropout, and GPU utilization, achieved a top-5 error rate of 15.3%, vastly outperforming previous models.

This success demonstrated the power of deep learning for image classification and ignited widespread interest and advancements in computer vision and natural language processing.

Cognitive Architectures and Integrated AI

SOAR and ACT-R (1980s)

Cognitive architectures like SOAR (State, Operator, And Result) and ACT-R (Adaptive Control of Thought-Rational) emerged as comprehensive models of human cognition. Developed by John Laird, Allen Newell, and Paul Rosenbloom, SOAR aimed to replicate general intelligent behavior through problem-solving and learning. ACT-R, developed by John Anderson, focused on simulating human cognitive processes, providing insights into memory, attention, and learning.

Theories of Embodied Cognition (1990s)

Theories of embodied cognition emphasized the role of the body and environment in shaping intelligent behavior. Researchers like Rodney Brooks argued that true intelligence arises from the interaction between an agent and its environment, leading to the development of embodied AI systems that learn and adapt through physical experiences.

Modern AGI Research and Theories

Universal Artificial Intelligence and AIXI (2005)

Marcus Hutter’s Universal Artificial Intelligence theory and the AIXI model provided a mathematical framework for AGI. AIXI, an idealized agent, is designed to achieve optimal behavior by maximizing expected rewards based on algorithmic probability. While AIXI is computationally infeasible, it offers a theoretical benchmark for AGI research.
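For reference, AIXI selects actions by an expectimax over all computable environments weighted by algorithmic probability; one standard statement of the formula (with $U$ a universal Turing machine, $q$ ranging over programs of length $\ell(q)$, $a_i$, $o_i$, $r_i$ the actions, observations, and rewards, and $m$ the horizon) is:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[ r_k + \cdots + r_m \bigr]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs $q$ is what makes AIXI incomputable in practice: it weights every environment consistent with the history by $2^{-\ell(q)}$, a Solomonoff-style prior favoring simpler explanations.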

OpenCog Classic (2008)

One of the significant developments in AGI theory is the creation of OpenCog, an open-source software framework for artificial general intelligence research. Founded by Ben Goertzel, who popularized the term AGI, OpenCog Classic focuses on integrating various AI methodologies, including symbolic AI, neural networks, and evolutionary programming. The aim is to create a unified architecture capable of achieving human-like intelligence.

Neural-Symbolic Integration (2010s)

Efforts to integrate neural and symbolic approaches aimed to combine the strengths of both paradigms. Neural-symbolic systems leverage the learning capabilities of neural networks with the interpretability and reasoning of symbolic AI, offering a promising pathway toward AGI.
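As a hypothetical sketch of the idea (not any specific system), a neural component can output soft truth values for predicates, which a symbolic rule then combines, here using fuzzy-logic `min` for conjunction:

```python
# Toy neural-symbolic pipeline (illustrative; names are hypothetical).

def neural_scores(image_features):
    # Stand-in for a trained network: maps input features to
    # confidence values for symbolic predicates.
    return {"has_wings": 0.92, "lays_eggs": 0.88, "has_fur": 0.05}

def rule_bird(p):
    # Symbolic rule: bird(x) <- has_wings(x) AND lays_eggs(x),
    # with AND interpreted as fuzzy-logic min over confidences.
    return min(p["has_wings"], p["lays_eggs"])

p = neural_scores(None)
print(rule_bird(p))  # 0.88
```

The neural side supplies learned, graded perception; the symbolic side supplies an inspectable rule, which is the interpretability benefit the paragraph describes.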

Current Frontiers in AI & AGI

2000s–2010s: Engineering Specialized AI Capabilities

Algorithmic architectures displayed superhuman proficiency in specialized domains such as game-playing, image classification, and statistical prediction, but remained constrained in generalizability and lacked multi-domain adaptability.

The 2020s: Large Language Models

Foundation models like GPT-3 showed initial promise in text generation, displaying some cross-contextual transfer learning. However, they are still limited in full-spectrum reasoning, emotional intelligence, and transparency, highlighting the challenge of integrating them safely and responsibly.

The 2020s: OpenCog Hyperon

Building on the foundations of OpenCog Classic, OpenCog Hyperon represents the next generation of AGI architecture. This open-source software framework synergizes multiple AI paradigms within a unified cognitive architecture, propelling us toward the realization of human-level AGI and beyond. With the recent release of OpenCog Hyperon Alpha, SingularityNET has created a robust framework for collaborative innovation within the AGI community.

For Dr. Ben Goertzel, everything has been clear since the beginning.

He believes most of the key ideas in today’s commercial AI field already existed in the 1960s and 1970s, when the first practical AI systems were rolled out. That said, AI has come a long way since its inception in the mid-20th century.

For example, in the 1960s, there were already neural networks, including deep neural networks with multiple layers of simulated neurons attempting to simulate brain cells. There were also automatic logical reasoning systems using formal logic to draw conclusions based on evidence.

He also discussed the current state of AI, highlighting how AI systems are capable of doing incredible things, even if they are not yet at the human level: “What’s happening now is a lot of processing power and a lot of data are being brought to bear to make these old AI approaches achieve new levels of success.”

Large language models (LLMs) are a good example of this. They can generate text, translate languages, write different kinds of creative content, and answer questions in an informative way, but they can only “be very smart in the context of carrying out one of these narrow functions.” So what is next?

Dr. Goertzel has stated, “It’s intuitively clear AGI is now within reach, and it is likely to be achieved within the next few years.” This is because we have a number of different approaches to AGI that seem plausible right now. One approach being pursued by research teams and companies like OpenAI is to upgrade LLMs that have already been shown to be capable of some impressive things. Another approach is to connect different kinds of deep neural networks together. A third approach is to connect neural nets with other sorts of AI tools together in a distributed metagraph-based architecture like OpenCog Hyperon.

He reminds everyone that achieving AGI poses some interesting social, economic, and ethical issues, but that he’s “not as worried about those as some people are,” because if we can keep the deployment of AGI decentralized, the governance participatory and democratic, we can have a lot of faith that AGI will grow up to be beneficial to humanity and help us lead more fulfilling lives.

At SingularityNET, we are working hard to push the boundaries of what’s currently thought possible in AGI.

But one thing is clear — we are standing on the shoulders of giants. From the early days of Turing and von Neumann to the pioneering work in symbolic AI, neural networks, and deep learning, each milestone has brought us closer to realizing the dream of AGI.

As we continue to push these boundaries with large language models and integrated cognitive architectures like OpenCog Hyperon, the horizon of AGI draws nearer. The path is fraught with challenges, yet the collective effort of researchers, visionaries, and practitioners continues to propel us forward.

Together, we are creating the future of intelligence, transforming the abstract into the tangible, and inching ever closer to machines that can think, learn, and understand as profoundly as humans do.

About SingularityNET

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI). An AGI is not dependent on any central entity, is open to anyone, and is not restricted to the narrow goals of a single corporation or even a single country. The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. Our core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts, and entertainment.

Decentralized AI Platform | OpenCog Hyperon | Ecosystem | ASI Alliance

Stay Up to Date With the Latest SingularityNET News and Updates:
