The Concept of Superintelligent AI: A Journey from Science Fiction to Reality

Published in ReadyAI.org · Jul 22, 2023

By Rooz Aliabadi, Ph.D.

The notion of superintelligent artificial intelligence (AI) has been deeply embedded in the science fiction genre for decades. Countless narratives in books and films have portrayed a future filled with advanced androids, insubordinate robots, and a world under the control of machines. Even though these concepts often seemed far from reality, they tapped into our genuine fascination, curiosity, and fear regarding the potential of creating machines with human-like intelligence.

Today, AI is no longer just a figment of the imagination found in dystopian science fiction plots but an integral part of our lives. Public interest and engagement in AI technology are soaring at an unprecedented rate. The influx of headlines over recent months, particularly about generative AI systems like ChatGPT, has introduced a new term into the broader discourse: artificial general intelligence, or AGI. What exactly is AGI, though, and how close are we to developing technologies capable of this level of intelligence?

Artificial General Intelligence versus Generative AI: Understanding the Distinctions

While the terms ‘generative AI’ and ‘artificial general intelligence’ might sound similar, they refer to different aspects of AI. As IBM explains in a blog post, generative AI refers to deep-learning models that can generate high-quality text, images, and other forms of content based on the data they were trained on. However, the capability to create a diverse range of content does not imply that the intelligence of these AI systems is general or broad-based.

To gain a better grasp of artificial general intelligence, it is crucial to understand how it differs from current AI technologies, which are highly specialized and narrowly focused. An AI chess program, for example, excels at playing chess, but if tasked with writing an essay on the causes of World War I, it will be of no assistance. Its intelligence is confined to one specific domain: chess. Similarly, other examples of specialized AI include the algorithms that suggest content on TikTok, dictate navigation decisions in autonomous vehicles, and recommend products on Amazon.

Diverse Interpretations of AGI

In contrast to these specialized AI systems, AGI represents a broader form of machine intelligence. There isn’t a single, universally accepted definition of AGI. Rather, a variety of definitions have been proposed, including:

- OpenAI’s charter describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

- Hal Hodson, writing for The Economist, defines AGI as a “hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.”

- Gary Marcus interprets AGI as “any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence.”

- Sébastien Bubeck and colleagues describe AGI as “systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.”

Although the OpenAI definition ties AGI to the ability to “outperform humans at most economically valuable work,” today’s systems are far from that capability. Consider Indeed’s list of the most common jobs in the US as of March 2023: cashier, food preparation worker, stocking associate, laborer, janitor, construction worker, bookkeeper, server, medical assistant, and bartender. These occupations require intellectual capacity and a high degree of manual dexterity that today’s advanced AI robotics systems cannot attain.

Interestingly, none of the other AGI definitions specifically mention economic value. Another point of divergence is that while OpenAI’s definition demands outperforming humans, the others only require AGI to perform at levels comparable to humans. A concept shared by all of these definitions is that an AGI system can perform tasks across various domains, adapt to changes in its environment, and solve new problems, not just the ones in its training data.

GPT-4: Indications of AGI in Action?

Recently, a group of industry AI researchers stirred the academic world when they published a preprint of a paper titled “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” GPT-4 is a large language model that has been publicly accessible to ChatGPT Plus (paid upgrade) users since March 2023. The researchers pointed out that “GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” displaying performance strikingly close to the human level. They concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version” of AGI.

However, not everyone agrees with this assertion. As quoted in a May New York Times article, Carnegie Mellon professor Maarten Sap said, “The ‘Sparks of AGI’ is an example of some of these big companies co-opting the research paper format into PR pitches.” In an interview with IEEE Spectrum, researcher and robotics entrepreneur Rodney Brooks highlighted that when evaluating the capabilities of systems like ChatGPT, we often “mistake performance for competence.”

The Road Ahead: GPT-4 and Beyond

While the currently available version of GPT-4 is impressive, it represents just one step in the ongoing journey toward AGI. Multiple research groups are working on goal-driven enhancements to GPT-4. This means that one could instruct the system, for instance, to “Design and build a website on (topic),” and the system would then autonomously determine the necessary subtasks and the order in which to carry them out to achieve the specified goal, as in the sketch below. These goal-driven systems are not yet reliable, frequently failing to accomplish the stated objective. However, they are expected to improve in the future.
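To make the idea concrete, here is a minimal Python sketch of how such a goal-driven wrapper around a language model might work. It is an illustration only, not any group’s actual system: the `complete()` function is a hypothetical placeholder for a call to any text-generation API, and a real agent would add error handling, checks on each result, and the ability to use external tools.

```python
# Minimal sketch of a goal-driven wrapper around a language model.
# `complete(prompt)` is a hypothetical stand-in for a text-generation API call.

def complete(prompt: str) -> str:
    """Placeholder for a language-model call; returns generated text."""
    raise NotImplementedError("Wire this to an actual model API.")

def run_goal(goal: str) -> list[str]:
    # 1. Ask the model to break the goal into an ordered list of subtasks.
    plan = complete(
        f"List the subtasks, one per line, needed to: {goal}"
    ).splitlines()

    # 2. Carry out each subtask in sequence, feeding earlier results forward
    #    so later steps can build on what has already been produced.
    results: list[str] = []
    for subtask in plan:
        context = "\n".join(results)
        results.append(complete(
            f"Goal: {goal}\nCompleted so far:\n{context}\nNow do: {subtask}"
        ))
    return results

# Example (hypothetical): run_goal("Design and build a website on renewable energy")
```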

In a 2020 paper, Yoshihiro Maruyama of the Australian National University identified eight attributes a system must possess to be considered AGI: logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness. The final two attributes, embodiment and embeddedness, refer to possessing a physical form that enhances learning and comprehension of the world and human behavior, and to deep integration with the social, cultural, and environmental systems that facilitate adaptation to human needs and values.

Systems like ChatGPT display some of these attributes, such as logic. For instance, GPT-4, with no additional features, reportedly scored 163 on the LSAT and 1410 on the SAT. For other attributes, the interpretation is as much a philosophical question as a technological one. For instance, does an AI system that mimics moral behavior possess morality? If GPT-4 is asked “Is murder wrong?” it will answer “Yes.” While this is the morally correct response, it does not necessarily mean that GPT-4 has a sense of morality; it may simply have derived the ethically correct answer from its training data.

A critical nuance often overlooked in the discourse around “How close is AGI?” is that intelligence exists on a continuum. Therefore, assessing whether a system demonstrates AGI will require considering a continuum. Animal intelligence research provides a useful parallel in this context. We understand that animal intelligence is complex and multifaceted, preventing us from simply classifying each species as “intelligent” or “not intelligent.” Animal intelligence exists on a spectrum spanning numerous dimensions, and its assessment is context-dependent. As we advance in the quest for AGI, applying a similarly nuanced approach to evaluating artificial intelligence will be essential.

Impact and Regulation of AGI

The advent of AGI, whenever and however it materializes, will be transformative. It holds the potential to significantly impact everything from the global job market to our understanding of intelligence and creativity. While there is a legitimate concern that AGI could be misused — such as in the creation of deepfakes or the amplification of biases present in today’s AI systems — AGI also carries immense potential to catalyze human innovation and creativity, with promising applications in fields like medicine, climate science, and education.

As the potential of AGI gradually shifts from fiction to reality, the dialogue surrounding its regulation will undoubtedly become more complex and urgent. Crafting preemptive rules for a concept as fluid and unpredictable as AGI is inherently challenging. Rather than imposing an outright ban on AGI, it would be more constructive to understand the potential misuses of specific AGI technologies, evaluate whether existing laws and regulations can handle those misuses, and then consider the role of informal and formal rules in filling any gaps. Attempting to regulate AGI based purely on its level of capability is unlikely to be effective.

Looking into the future, the development of AGI will likely elicit mixed reactions: anticipation of the benefits that advanced AI could bring and trepidation about its challenges. The concept of AGI often sparks more questions than answers, and it’s up to researchers, policymakers, and society to navigate this uncharted territory and shape a future where AGI works alongside humanity for the betterment of all.

ReadyAI’s Generative AI and ChatGPT Lesson Plan, among others, is available FREE to all educators at edu.readyai.org

This article was written by Rooz Aliabadi (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.


ReadyAI.org

ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.