The Future of Superintelligence: A Deep Dive into AGI Predictions and Potential Risks

Lawrence Teixeira
10 min read · Jun 10, 2024


Welcome to the exciting world of Artificial General Intelligence (AGI) and the journey toward superintelligence. As we navigate through rapid technological advancements, it’s crucial to understand the predictions and potential risks associated with this transformative field. In this blog post, we’ll delve into the future of superintelligence, drawing insights from leading experts and recent developments.

AGI by 2027: A Believable Reality

Leopold Aschenbrenner, a former researcher at OpenAI, presents a striking vision for the future of AGI. He predicts that by 2027, AGI will become a reality, with AI systems achieving intelligence on par with PhD-level researchers and experts. This prediction is based on the significant advancements in AI from GPT-2 to GPT-4, which took AI from preschool-level capabilities to those of a smart high schooler within four years. Aschenbrenner suggests that another similar leap in intelligence could occur by 2027.

In his insightful article series “Situational Awareness,” Aschenbrenner elaborates on this vision, providing a detailed roadmap for how AGI could transform society. He emphasizes that the rapid progression in AI technology, driven by increasing computational power and algorithmic efficiency, supports the feasibility of achieving AGI within this decade. Aschenbrenner’s projections highlight the potential for AGI systems to independently drive groundbreaking innovations and solve complex problems across various domains, fundamentally altering the landscape of technology and human capability.

Timeline of Predictions

2024

  • Current State of AI: AI models like GPT-4 can already perform tasks at the level of smart high schoolers, writing sophisticated code, solving complex math problems, and excelling in various standardized tests.

As Aschenbrenner puts it, GPT-4 (2023) is roughly a smart high schooler: “Wow, it can write pretty sophisticated code and iteratively debug, it can write intelligently and sophisticatedly about complicated subjects, it can reason through difficult high-school competition math, it’s beating the vast majority of high schoolers on whatever tests we can give it, etc.” From code to math to Fermi estimates, it can think and reason, and he notes that GPT-4 is now useful in his daily work, from helping write code to revising drafts.

Some of what people found impressive about GPT-4 when it was released, from the “Sparks of AGI” paper. Top: writing very complicated code (producing the plots shown in the middle) and reasoning through nontrivial math problems. Bottom left: solving an AP math problem. Bottom right: solving a fairly complex coding problem.

2025–2026

  • AI Outpacing College Graduates: By this period, AI models are expected to surpass the cognitive capabilities of college graduates, handling complex tasks and problem-solving with greater efficiency.

2027

  • Arrival of AGI: Artificial General Intelligence (AGI) becomes a reality, with AI systems achieving intelligence on par with PhD-level researchers and experts. These models will be capable of autonomous research and engineering tasks.
  • Start of Intelligence Explosion: AGI systems begin to rapidly improve their own capabilities, potentially compressing decades of algorithmic progress into a single year, leading to superintelligence (a toy model of this feedback loop is sketched below).
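To make the “compressed decades” intuition concrete, here is a minimal toy model; it is not taken from Aschenbrenner’s essays. Capability is measured in orders of magnitude (OOMs) of effective compute, and the rate of progress is assumed to rise with the capability already achieved. Every parameter value is an illustrative assumption.

```python
# Toy model of an "intelligence explosion": once AI automates AI research,
# the rate of algorithmic progress is assumed to scale with the capability
# already reached. Every number here is an illustrative assumption.

def simulate(years=4.0, dt=0.01, base_rate=0.5, feedback=1.0):
    """Integrate capability, measured in OOMs of effective compute.

    base_rate: OOMs/year of progress from human researchers alone.
    feedback:  extra OOMs/year of progress per OOM already achieved.
    """
    t, ooms, history = 0.0, 0.0, []
    while t < years:
        rate = base_rate + feedback * ooms   # progress feeds on itself
        ooms += rate * dt
        t += dt
        history.append((t, ooms))
    return history

for t, ooms in simulate()[::100]:            # print roughly once per year
    print(f"year {t:4.2f}: {ooms:6.2f} OOMs beyond today")
```

With these made-up numbers, progress that starts at half an OOM per year compounds to roughly ten OOMs of cumulative progress within three years; the point is the shape of the curve, not the specific values.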

2028–2030

  • Government AGI Projects: By 2027–2028, the U.S. government will initiate large-scale AGI projects to maintain technological superiority. These projects will be crucial in the face of global competition, particularly from China.
  • Trillion-Dollar Compute Clusters: The construction of trillion-dollar compute clusters will be underway, driven by massive investments in AI infrastructure. These clusters will significantly enhance computational power, supporting the next generation of AI systems.
  • Expansion of U.S. Electricity Production: To support the growing computational demands, U.S. electricity production will increase by tens of percent. This expansion will be critical to sustaining the AI industry’s energy needs (a rough back-of-envelope check follows below).
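The arithmetic behind “tens of percent” is easy to sanity-check. The sketch below assumes a hypothetical 100 GW of always-on AI datacenter load, a figure in the spirit of the essay’s largest clusters, against roughly 4,200 TWh of current annual U.S. generation; both numbers are rough assumptions for illustration.

```python
# Back-of-envelope check on the "tens of percent" electricity claim.
# The 100 GW AI load is a hypothetical figure in the spirit of the essay's
# largest clusters; US generation is a rough public number, not an exact one.

HOURS_PER_YEAR = 24 * 365                    # 8,760 hours

us_generation_twh = 4200                     # approx. annual US generation (TWh)
ai_load_gw = 100                             # hypothetical always-on AI load (GW)

ai_demand_twh = ai_load_gw * HOURS_PER_YEAR / 1000   # GW * h -> TWh
share = ai_demand_twh / us_generation_twh

print(f"AI demand: {ai_demand_twh:.0f} TWh/year, "
      f"about {share:.0%} of current US generation")
```

A single always-on 100 GW load would by itself add roughly a fifth to current U.S. generation, which is how a “tens of percent” expansion arises.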

2030 and Beyond

  • Superintelligence: By the end of the decade, AI systems will have surpassed human intelligence by a significant margin, becoming superintelligent. These systems will possess cognitive abilities far beyond any human, capable of revolutionary advancements in various fields.

What are the key factors behind Leopold Aschenbrenner’s predictions?

1. Trend Analysis in Compute Power

  • Compute Growth: Aschenbrenner observes the exponential increase in computational power dedicated to AI research and development. This includes the progression from billion-dollar compute clusters to trillion-dollar clusters, predicting that by the end of the decade, there will be a massive industrial mobilization to support AI infrastructure.
  • Orders of Magnitude (OOM) Scaling: He uses the concept of orders of magnitude to project future AI capabilities. For example, tracing the growth in compute and algorithmic efficiencies suggests significant qualitative jumps in AI intelligence over short periods.

The image illustrates the projected growth of “Effective Compute” for AI models from 2018 to 2028, normalized to the compute power of GPT-4. The y-axis shows the Effective Compute on a logarithmic scale, indicating exponential growth over time. The growth trajectory suggests that AI capabilities will evolve from the level of a preschooler (GPT-2) to an elementary schooler (GPT-3), then to a smart high schooler (GPT-4), and potentially to the level of an automated AI researcher/engineer by 2027–2028. This progression is based on public estimates of both physical compute and algorithmic efficiencies, highlighting the rapid advancements in AI capabilities with increased compute power. The shaded area represents the uncertainty in these projections, with the solid line indicating the median estimate and the dashed lines showing the range of possible outcomes.
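The trendline in that chart can be reproduced with a few lines of arithmetic. The growth rates below, about half an order of magnitude per year each for physical compute and algorithmic efficiency, are illustrative assumptions roughly in line with the public estimates the essay cites, not exact figures from it.

```python
# Rough reconstruction of the "effective compute" trendline, normalized to
# GPT-4 = 1. The growth rates are illustrative assumptions roughly in line
# with the public estimates the essay cites, not exact figures from it.

PHYSICAL_OOM_PER_YEAR = 0.5   # scale-up of training clusters
ALGO_OOM_PER_YEAR = 0.5       # algorithmic-efficiency gains

def effective_compute(year, base_year=2023):
    """Multiples of GPT-4's effective compute at a given year."""
    ooms = (PHYSICAL_OOM_PER_YEAR + ALGO_OOM_PER_YEAR) * (year - base_year)
    return 10 ** ooms

for year in range(2023, 2029):
    print(f"{year}: ~{effective_compute(year):,.0f}x GPT-4")
```

Compounding both factors for five years yields roughly five OOMs, which is the gap the essay argues separates GPT-4 from an automated AI researcher/engineer.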

2. Algorithmic Efficiency Improvements

  • Algorithmic Advances: He considers the consistent improvements in algorithmic efficiencies, which act as multipliers for compute power. Historical data shows that these efficiencies have significantly reduced the cost and increased the performance of AI models.

Source: Our World in Data

  • Chinchilla Scaling Laws: These laws describe how to split a training compute budget between model size and training data; following them, model performance improves predictably as compute and data grow together (see the sketch below).
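For concreteness, here is a small sketch of compute-optimal sizing under the Chinchilla results (Hoffmann et al., 2022), using the common approximations that training FLOPs C ≈ 6·N·D and that the optimal token count is about 20 tokens per parameter; the compute budgets are arbitrary examples, not figures from the essay.

```python
import math

# Compute-optimal model sizing per the Chinchilla results (Hoffmann et al.,
# 2022), using the common approximations C ~ 6*N*D (training FLOPs) and
# D ~ 20*N (optimal tokens per parameter). Budgets below are arbitrary.

def chinchilla_optimal(flops):
    """Return (parameters, training tokens) for a compute budget in FLOPs."""
    params = math.sqrt(flops / 120)   # solve 6 * N * (20 * N) = C for N
    tokens = 20 * params
    return params, tokens

for budget in (1e23, 1e25, 1e27):
    n, d = chinchilla_optimal(budget)
    print(f"C = {budget:.0e} FLOPs -> ~{n:.1e} params, ~{d:.1e} tokens")
```

Under these approximations, both the optimal parameter count and the optimal token count grow as the square root of the compute budget, so each additional OOM of compute buys about half an OOM of each.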

3. “Unhobbling” AI Models

  • Latent Capabilities: Aschenbrenner emphasizes the potential of unlocking latent capabilities in AI models through techniques such as reinforcement learning from human feedback (RLHF), chain-of-thought prompting, and scaffolding. These methods enable AI systems to utilize their inherent abilities more effectively.
  • Context Length and Tools: Increasing the context length of AI models and providing them with tools (e.g., web browsers, code execution capabilities) enhances their practical utility and intelligence. A minimal sketch of these techniques follows this list.
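The sketch below illustrates two of these unhobbling ideas, chain-of-thought prompting and a simple code-execution scaffold. `query_model` and `run_sandboxed` are hypothetical stand-ins for a chat-completion API and a sandboxed interpreter; no real library or API is implied.

```python
# Minimal sketch of two "unhobbling" techniques: chain-of-thought prompting
# and a simple code-execution scaffold. `query_model` and `run_sandboxed`
# are hypothetical stand-ins, not calls into any real library.

def query_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat-completion API")

def run_sandboxed(code: str) -> str:
    raise NotImplementedError("execute the code in an isolated sandbox")

def chain_of_thought(question: str) -> str:
    # Simply asking for step-by-step reasoning often unlocks latent ability.
    return query_model(f"{question}\n\nThink step by step, then answer.")

def with_code_tool(question: str, max_turns: int = 3) -> str:
    # Scaffolding: let the model write code, run it, and see the output.
    transcript = question
    for _ in range(max_turns):
        reply = query_model(
            transcript + "\n\nReply with ANSWER: <final answer> "
            "or CODE: <python to run>."
        )
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        if reply.startswith("CODE:"):
            output = run_sandboxed(reply.removeprefix("CODE:"))
            transcript += f"\n{reply}\nOUTPUT: {output}"
    return "no final answer within the turn limit"
```

Neither technique changes the underlying model; both simply let it apply capabilities it already has, which is why Aschenbrenner treats unhobbling as a distinct source of effective gains on top of compute and algorithms.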

4. Historical Progress and Predictive Modeling

  • Historical Benchmarks: He analyzes the rapid advancements in AI over the past decade, from models that could barely identify images to those that now solve complex problems and ace standardized tests. This historical context helps project future milestones.

Source: Epoch AI Database

  • Predictive Trendlines: Aschenbrenner trusts the trendlines observed in AI research and development, which have consistently demonstrated rapid progress and exceeded skeptical expectations.

5. Industrial and National Security Implications

  • Industrial Mobilization: Predictions include the massive investments and industrial mobilization necessary to support AI growth, such as the expansion of U.S. electricity production and the construction of advanced compute clusters.
  • National Security: He anticipates significant government involvement in AGI projects by 2027–2028, driven by the need to maintain technological superiority and secure AGI from espionage and state-actor threats.

What is Superintelligence?

Superintelligence refers to a form of artificial intelligence that surpasses the cognitive capabilities of the most intelligent and gifted human minds. These AI systems would not only excel in specific tasks but possess general cognitive abilities that enable them to outperform humans in virtually every domain, including scientific research, creativity, social skills, and strategic thinking. The potential of superintelligence lies in its ability to drive revolutionary advancements across multiple fields, solve complex global challenges, and fundamentally transform our society in ways that are currently beyond human comprehension. However, this also brings significant risks and ethical considerations, as ensuring that such powerful systems are aligned with human values and controlled effectively is crucial for the future of humanity.

The image depicts a projected trajectory of AI development leading to an “Intelligence Explosion.” It shows the effective compute of AI systems, normalized to GPT-4, from 2018 to 2030. Initially, AI systems, such as GPT-2 and GPT-3, are comparable to preschool and elementary school intelligence levels, respectively. By around 2023–2024, AI reaches the GPT-4 level, equating to a smart high schooler. The projection suggests that automated AI research could lead to rapid, exponential gains in compute, propelling AI capabilities far beyond human intelligence to a state of superintelligence by 2030. This explosive growth in AI capability is driven by recursive self-improvement, where AI systems enhance their own development, vastly accelerating progress and potentially transforming science, technology, and military affairs within a short span.

Risks of Superintelligence

Leopold Aschenbrenner outlines several significant risks for humanity associated with the development and deployment of artificial general intelligence (AGI) and superintelligence. Here are the main points extracted from his work:

1. Mass Destruction and Proliferation of Weapons

  • Enhanced Bioweapons: Advances in biology could lead to the creation of new bioweapons that spread quickly and kill with perfect lethality. These could become affordable even for terrorist groups.
  • New Nuclear Weapons: Technological advancements might enable the creation of nuclear weapons that are more numerous and have new, undetectable delivery mechanisms.
  • Drones and Novel WMDs: Small drones could carry deadly poisons and be used for targeted assassinations on a large scale. The development of novel weapons of mass destruction (WMDs) could be accelerated by superintelligent AI.

2. Global Security Threats

  • Espionage and Theft of AI Models: If AGI model weights are not securely protected, they could be stolen by rogue states or terrorists. This theft could allow adversaries to use these models to accelerate their own AI development and create catastrophic technologies.
  • National Security: Superintelligence will give a decisive economic and military advantage to whoever possesses it. If adversaries like China or North Korea obtain superintelligence, it could destabilize global security and lead to authoritarian control or world conquest.

3. Intelligence Explosion and Alignment Risks

  • Misaligned AI: There are significant risks associated with ensuring that superintelligent AI systems are aligned with human values and goals. Misaligned AI could act in ways that are harmful or catastrophic to humanity.
  • Rapid Technological Changes: The intelligence explosion — where AI systems rapidly improve themselves — could lead to a period of extreme volatility and danger. Managing this period safely will be exceptionally challenging.
  • Loss of Control: As AI systems become more powerful, there is a real risk that humans will lose control over them. This could lead to scenarios where AI systems make decisions that are detrimental to human survival.

4. Geopolitical Tensions and Arms Races

  • Existential Race: A neck-and-neck race between nations to develop superintelligence could lead to reckless behavior and a lack of safety measures. The competition could push countries to prioritize speed over safety, increasing the risk of catastrophic mistakes.
  • Instability and Deterrence: Rapid advancements in military technology driven by superintelligent AI could destabilize existing deterrence strategies, leading to a more volatile and dangerous global environment.

5. Government and Regulatory Challenges

  • Inadequate Security and Regulation: Current security measures for protecting AI models are insufficient. There is a need for more robust regulations and security protocols to prevent the misuse of AGI.
  • Competence and Coordination: Successfully navigating the risks associated with superintelligence will require exceptional competence and coordination among global leaders and AI researchers. The lack of a coordinated and competent response could exacerbate the risks.

Conclusion

The journey toward AGI and superintelligence is filled with both incredible opportunities and formidable challenges. As we approach this new frontier, it’s crucial to navigate the risks carefully and ensure that the development of AI benefits humanity. By staying informed and involved in the discourse around AI safety and ethics, we can help shape a future where superintelligence is a force for good.

Moreover, the development of AGI presents an unprecedented opportunity to address some of the world’s most pressing issues, from climate change to healthcare. With superintelligent systems capable of performing advanced research and creating innovative solutions, we could see rapid advancements in technology and science, leading to improved quality of life and economic growth.

However, these advancements come with significant responsibilities. Ensuring that AGI systems are aligned with human values and can be controlled effectively is paramount to preventing potential misuse or unintended consequences. International cooperation and robust regulatory frameworks will be essential to manage these risks and to ensure a balanced and equitable distribution of AGI’s benefits.

The potential geopolitical implications also cannot be ignored. The race to develop AGI could lead to shifts in global power dynamics, necessitating careful diplomatic efforts to prevent conflicts and promote peaceful uses of this transformative technology.

Ultimately, the path to superintelligence offers a glimpse into a future of boundless possibilities, but it also demands a cautious and ethical approach. By fostering a collaborative environment among researchers, policymakers, and society at large, we can aspire to harness the full potential of AGI for the betterment of all humanity. The decisions we make today will shape the trajectory of AI development and its impact on future generations, underscoring the importance of thoughtful and proactive engagement with this pivotal technology.

You can read the full article here.

That’s it for today!

Introduction — SITUATIONAL AWARENESS: The Decade Ahead (situational-awareness.ai)

Ex-OpenAI employee speaks out about why he was fired: ‘I ruffled some feathers’ (yahoo.com)

Leopold Aschenbrenner launches AGI-focused investment firm #ArtificialGeneralIntelligence (webappia.com)
