Superintelligence and Beyond: Aschenbrenner’s Predictions vs. Hossenfelder’s Pragmatism

Rohan Roberts · Published in AIDEN Global
Jul 21, 2024 · 5 min read

Leopold Aschenbrenner is a San Francisco-based AI researcher who previously worked at OpenAI. He has an academic background that includes economics research at Oxford University’s Global Priorities Institute. Aschenbrenner is known for his predictions and writings on artificial intelligence, particularly concerning the rapid development and future impact of Artificial General Intelligence (AGI). He recently founded an investment firm focused on AGI, leveraging his insights and experience in the field. His prominent work, Situational Awareness: The Decade Ahead, outlines his views on the imminent advancements in AI and their potential societal implications.

“Situational Awareness” is more than an analysis of the future of AI; it is a wake-up call about the immense changes that lie ahead. As we approach the middle of the decade, Aschenbrenner’s insights offer a clear-eyed view of how AI advancements are set to reshape our world.

Over the past year, the conversation in tech hubs like San Francisco has shifted dramatically. The talk is no longer of billion-dollar compute clusters but of trillion-dollar investments, and the scale of ambition grows every six months, driven by the relentless push to secure power contracts and transform the industrial landscape of America. By the end of the decade, Aschenbrenner expects a dramatic increase in American electricity production, fuelled by a GPU build-out stretching from Pennsylvania’s shale fields to Nevada’s solar farms.

This all-out race towards Artificial General Intelligence (AGI) is, in Aschenbrenner’s telling, not just a technological marvel but a strategic necessity. He predicts that by 2025/26 machines will outperform many college graduates, and that by 2027 they will surpass human intelligence altogether. This rapid progression from GPT-4 to AGI is powered by consistent growth in computational power, algorithmic efficiencies, and “unhobbling” gains that unlock latent capabilities in existing models.
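Aschenbrenner frames this progression as a count of “orders of magnitude” (OOMs) of effective compute. As a minimal sketch of how such gains compound, assuming for illustration half an OOM per year each from hardware scaling and from algorithmic efficiency (rates in the spirit of the essay, not quoted from it), the arithmetic looks like this:

```python
# Minimal sketch: compounding "orders of magnitude" (OOMs) of effective compute.
# The annual rates below are illustrative assumptions, not figures from the essay.

BASE_YEAR = 2023               # treat GPT-4-era compute as the baseline
COMPUTE_OOM_PER_YEAR = 0.5     # assumed hardware/cluster scaling (~3x per year)
ALGO_OOM_PER_YEAR = 0.5        # assumed algorithmic-efficiency gains (~3x per year)

def effective_compute_ooms(year: int) -> float:
    """Total OOMs of effective compute gained since BASE_YEAR."""
    return (year - BASE_YEAR) * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)

for year in (2025, 2027):
    ooms = effective_compute_ooms(year)
    print(f"{year}: +{ooms:.1f} OOMs ≈ {10 ** ooms:,.0f}x the GPT-4-era baseline")
```

Under these assumptions, four years of compounding yields roughly 10,000x the baseline, which is the kind of generational jump that drives Aschenbrenner’s GPT-4-to-AGI argument.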

The prospect of AGI brings unprecedented opportunities and significant challenges alike. Aschenbrenner’s extended essay discusses how superintelligent systems could revolutionise industries by automating AI research itself, compressing decades of progress into mere years. It also warns of national security risks, with AI secrets potentially falling into the hands of state actors like the Chinese Communist Party (CCP).

One of the most critical challenges is superalignment: ensuring that AI systems far smarter than humans remain under human control. Failure to solve this problem could lead to catastrophic outcomes. The geopolitical stakes are just as high; in Aschenbrenner’s view, maintaining technological preeminence over authoritarian regimes is crucial for the survival of the free world.

Aschenbrenner’s analysis doesn’t stop at the technological and strategic implications; it also considers the human element, noting that only a few hundred people, mostly in San Francisco, truly grasp the seismic shifts underway. These individuals, who once faced scepticism, now lead the charge in AI development, and Aschenbrenner draws parallels between them and historical figures like Leo Szilard and Robert Oppenheimer.

“Situational Awareness” underscores that the future of AI is not a distant prospect but an imminent reality. The race to superintelligence is on, and its outcome will shape the fabric of our society. As we stand on the brink of this transformative era, one must ponder: Are we prepared for a world where machines not only assist but surpass human intelligence? The choices we make now will determine whether we harness this power for collective good or fall victim to the perils of unchecked advancement.

Leopold Aschenbrenner’s predictions about AI’s rapid ascent to superintelligence have sparked significant discussion, not least from renowned physicist and science communicator Sabine Hossenfelder. In her Nautilus article “A Reality Check on Superhuman AI”, she offers a detailed critique of Aschenbrenner’s essay and a measured counterpoint to his optimistic projections.

Hossenfelder diverges significantly on the notion of an imminent “intelligence explosion”, highlighting two major limiting factors: energy and data. Aschenbrenner’s vision has AI models running on massive new energy supplies and infrastructure, which she argues is unrealistic and environmentally unsustainable. Data is the other hurdle: much of the readily available training data has already been consumed, and collecting new, high-quality data is a daunting task.
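A rough back-of-envelope calculation shows why energy looms so large in this debate. The figures below (per-GPU power draw and average US generation) are order-of-magnitude assumptions for illustration, not numbers taken from either author:

```python
# Rough illustration of the energy constraint Hossenfelder raises.
# Per-GPU draw and US generation figures are order-of-magnitude assumptions.

GPU_POWER_KW = 1.0            # assumed draw per accelerator, incl. cooling overhead
US_AVG_GENERATION_GW = 500.0  # assumed average US electric generation

def cluster_power_gw(num_gpus: float) -> float:
    """Continuous power draw of a GPU cluster, converted from kW to GW."""
    return num_gpus * GPU_POWER_KW / 1e6

for gpus in (1e6, 1e7, 1e8):
    gw = cluster_power_gw(gpus)
    print(f"{gpus:>13,.0f} GPUs -> {gw:6.1f} GW "
          f"({gw / US_AVG_GENERATION_GW:.1%} of assumed US generation)")
```

At these assumptions, a hundred-million-GPU cluster would draw around a fifth of current US electricity generation, which is the scale of build-out Hossenfelder considers unrealistic on Aschenbrenner’s timeline.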

Hossenfelder also critiques the notion of self-replicating robotic factories as overly simplistic and disconnected from real-world constraints. Building such infrastructure, she points out, would necessitate a complete overhaul of the global economy and its logistics, a process that would take decades, not mere years.

She further highlights the difficulty of training AI on genuinely new data and of instilling common-sense knowledge. Beyond the exhaustion of readily available online data, AI systems lack access to “tacit knowledge”: the intuitive, experiential understanding that humans possess and that cannot easily be extracted from existing datasets. Without these inputs, she argues, even advanced algorithms may struggle to achieve meaningful improvements, underscoring the gap between theoretical AI capabilities and practical, real-world performance.

Despite her scepticism, Hossenfelder concedes that AGI could unlock significant progress in science and technology by leveraging its ability to process and understand vast amounts of scientific literature, potentially leading to breakthroughs in medicine, physics, and other fields. However, she stresses that the societal and security implications of AGI are gravely underestimated. She foresees governments inevitably stepping in to regulate and control AI development, which could lead to nationalisation of AI resources and severe limitations on their use.

Reflecting on historical predictions of AI advancement, Hossenfelder notes a recurring trend of overestimation among frontier researchers. Herbert Simon predicted in 1965 that machines would within twenty years be capable of any work a man can do, and Marvin Minsky made similarly confident forecasts; both proved badly premature. This track record suggests that while the potential for superintelligence is exciting, its realisation might be further away than Aschenbrenner anticipates.

Hossenfelder leaves us with a thought-provoking question: Are we truly prepared for the societal upheaval and ethical challenges that superintelligent AI will bring, even if it arrives later than predicted? The urgency, she implies, lies not just in achieving these technological feats, but in ensuring that we manage them responsibly and sustainably.

Rohan Roberts

Director, SciFest Dubai | Director of Innovation and Future Learning, GEMS Education | www.rohanroberts.com