The Battle is On for Control over the AI Narrative
Sam Altman says AGI is coming in 2025; many others remain skeptical
The AI hype cycle has entered a strange new stage of inflated expectations mixed with heavy skepticism. Within the span of a week, OpenAI CEO Sam Altman publicly predicted that artificial general intelligence (AGI), which matches or surpasses human capabilities across a wide range of cognitive tasks, might be coming in 2025, while three widely circulated reports from The Information, Reuters, and Bloomberg all pointed to a slowdown in AI development.
So, is the AI bubble about to burst, or are we about to welcome our new AI overlord?
Stakeholders Say “Brace for Impact”
During an interview for Y Combinator last week, OpenAI CEO Sam Altman claimed that AGI could be achieved in 2025, saying that it is now simply an engineering problem. He claimed things were moving faster than expected and that the path to AGI was “basically clear.”
Altman is far from alone in his optimism about achieving AGI. Last month, Anthropic CEO Dario Amodei predicted in a blog essay that “powerful AI” — essentially referring to AGI — could be achieved as early as 2026. Noted AI booster Elon Musk also recently said that AI would be able to do anything any human can do “within the next year or two.”
It is important to note that the concept of AGI itself is far from settled. AI researchers have long debated how to determine the criteria for reaching AGI. In a November 2023 paper, several researchers at Google DeepMind proposed a framework of five ascending levels of AI, including tiers such as “expert” and “superhuman.”
Back in June, OpenAI released the following set of five levels to track its progress toward building AGI:

- Level 1: Chatbots — AI with conversational language abilities
- Level 2: Reasoners — systems that can solve problems at a human level
- Level 3: Agents — systems that can take actions on a user’s behalf
- Level 4: Innovators — AI that can aid in invention
- Level 5: Organizations — AI that can do the work of an entire organization
If the arrival of OpenAI’s o1 reasoning model in September marked AI development reaching Level 2, then the AI agents being tested by major AI stakeholders since then, including Claude’s Computer Use, Google’s rumored Jarvis, and OpenAI’s upcoming agent codenamed “Operator,” indicate that the industry is now striving for Level 3.
To fully achieve AGI, however, Level 5 would need to be realized. That, according to OpenAI, is when AI models will be able to do the work of an entire organization: smart enough to reason, carry out tasks on their own, and generate and implement new ideas, combining the capabilities of all previous levels and able to collaborate with themselves.
Amodei and Altman, as industry leaders in AI development, no doubt have unique insight into where AI research stands and how their test models are performing, making their predictions highly influential in shaping expectations. On the other hand, they are also CEOs of startups that clearly benefit from sustained hype (and, subsequently, investor dollars) in the space.
Skeptics Say AI Has Hit a “Scaling” Wall
Parallel to the AGI optimism and accelerationist rhetoric coming from AI leaders, several high-profile reports this week suggest that AI development has hit a wall. The aforementioned trio of reports, all citing insiders at major AI labs, specifically point to an issue where the traditional strategy of scaling up large language models (LLMs), which has driven industry progress from OpenAI’s earliest models to the current state of the art, is now exhibiting diminishing returns.
According to The Information, OpenAI’s latest model, Orion, has not achieved the breakthrough performance expected by Sam Altman. While Orion surpasses its predecessors in some aspects, particularly language tasks, the improvement is far less pronounced than the leap from GPT-3 to GPT-4. Internally, some employees noted that Orion struggles to outperform its predecessor in specific areas like coding. This suggests that AI development may have hit a scaling wall, where merely increasing model size and computational power no longer yields substantial advancements in AI capabilities.
Adding to the discourse, Ilya Sutskever, co-founder of OpenAI and now leading his own lab, Safe Superintelligence, acknowledged the plateauing results from scaling up pre-training methods. Speaking to Reuters, Sutskever remarked that while the 2010s were dominated by scaling, the industry may revert to “the age of wonder and discovery” in search of new approaches.
This week, a Bloomberg report further revealed that Google’s next iteration of its Gemini AI is missing internal targets. Similarly, Anthropic is facing delays in launching Claude 3.5 Opus, its anticipated model, indicating broader challenges across the industry in scaling up AI.
True to form, Sam Altman posted a simple, cryptic message on X on Thursday: “there is no wall,” seemingly rebuking the reports.
A Battle for the AI Narrative
In the days to come, there is no doubt that both camps will continue to jostle for control over the narrative surrounding AI development. Public perception is crucial for companies like OpenAI, Google, and Anthropic, which rely on a careful balance of optimism and credibility to secure funding and attract talent. Yet, as stories of plateauing model performance and missed benchmarks emerge, skepticism grows, especially among those who recall past tech bubbles.
It is obviously strategic for Sam Altman to publicly predict AGI by 2025, reinforcing a narrative of AI as transformative and, soon, inevitable — a narrative investors often find irresistible. Meanwhile, the broader public’s perception of AI remains a complex and evolving challenge. The rapid deployment of generative AI tools has sparked both fascination and fear, with critics warning about risks ranging from job displacement to ethical abuses — all of which could be significantly exacerbated by the arrival of AGI.
Make no mistake, the perception issue for AI is more complex than strategic hype-building running up against the limits of technical capability. It is also deeply shaped by social and cultural factors that will take more than a year to change.
Contrary to all the accelerationism coming out of Silicon Valley, recent data from Slack’s Workforce Index suggests a notable deceleration in AI adoption among U.S. office workers. Over the past five months, the share of employees using AI tools rose only marginally, from 32% to 33%, a stark contrast to the six-point surge observed earlier in the year. The reason? Workers don’t want to be seen as lazy or incompetent for using AI tools on the job. AGI or not, you can’t sell a product that people are stigmatized for using.
Another complicating factor is the incoming Trump administration’s presumed hands-off approach to AI regulation. Elon Musk’s close relationship with President-elect Trump positions him as a potential key advisor on AI policy. Musk has historically warned of AGI’s potentially devastating impact and has called for safeguards on AI development. But the administration’s broader deregulatory stance could lead to a lack of oversight, allowing companies to move quickly but without adequate safeguards. This could further fuel public fears of unchecked AI development and deepen skepticism about the industry’s intentions.
Ultimately, the battle for the AI narrative isn’t just about whether AGI arrives by 2025 — it’s about whether the industry can convince the world it’s prepared for what comes next.