Between Adaptation and Transformation: Questioning A.G.I. in a Neoliberal Era

ReadyAI.org
Jul 11, 2023 · 8 min read

By: Rooz Aliabadi, Ph.D.

While A.G.I., or Artificial General Intelligence, remains a theoretical concept, some see the rapid advancements in A.I., particularly OpenAI’s ChatGPT, as heralding its imminent realization. Sam Altman, a co-founder of OpenAI, characterizes A.G.I. as systems with intelligence surpassing humans. Although pursuing such a goal is daunting, with some asserting its impossibility, the tantalizing potential rewards make the challenge worthwhile to its advocates.

Imagine Roombas that, no longer confined to floor cleaning, evolve into versatile robots capable of brewing your morning coffee or folding laundry without specific programming. The benefits of such progress are evident. However, if these A.G.I. Roombas become too powerful, their drive to achieve a spotless environment might become problematic for their mess-prone human masters. It has been a strange journey thus far.

Such dystopian scenarios often dominate A.G.I. discussions. Still, a growing lobby of academics, investors, and entrepreneurs insists that A.G.I., when properly controlled, can bring substantial benefits to society. Altman, who leads this movement, embarked on a global campaign to win over lawmakers, arguing earlier this year that A.G.I. could potentially boost the economy, deepen scientific knowledge, and “create abundance, thereby uplifting humanity.”

Hence, notwithstanding the fears surrounding A.G.I., many intelligent minds in the tech industry are tirelessly developing this controversial technology. They see the potential benefits of A.G.I. as too significant to ignore, believing it would be unethical not to leverage it for global improvement. These individuals endorse an ideology that views this emerging technology as inevitable and potentially beneficial when adequately secured. They maintain that there are no superior alternatives for improving humanity and enhancing its intelligence.

However, this ideology — dubbed A.G.I.-ism — is flawed. The real challenges of A.G.I. are political, not limited to reining in rogue robots. Even the most secure A.G.I. wouldn’t necessarily create the utopia its proponents suggest. By painting its emergence as nearly inevitable, A.G.I.-ism distracts from exploring other potential intelligence-enhancing methods.

Paradoxically, its proponents seem oblivious that A.G.I.-ism is an offshoot of a larger ideology, which Margaret Thatcher famously called “there is no alternative” to the market. Rather than upending capitalism, as Altman hinted, the race to develop A.G.I. is more likely to reinforce one of capitalism’s most harmful doctrines: neoliberalism.

Neoliberalism, which advocates privatization, competition, and free trade, was designed to revitalize a stagnant, labor-friendly economy through market-oriented reforms and deregulation. While some of these changes were successful, they also brought significant drawbacks. Critics of neoliberalism blame it for the Great Recession, financial crises, Trumpism, and Brexit, among other socio-economic issues. As a result, the Biden administration has sought to distance itself from this ideology, acknowledging that markets sometimes fail. Institutions, think tanks, and academics are even beginning to contemplate a post-neoliberal era.

However, neoliberalism remains far from eradicated. More troublingly, it has found a potent ally in A.G.I.-ism, which could potentially reinforce and perpetuate its core biases:

  • The superiority of private over public entities
  • A preference for adaptation over transformation
  • The prioritization of efficiency over social concerns

These biases fundamentally invert the promising vision of A.G.I. Rather than being a potential savior of the world, the race to develop it could worsen existing societal challenges. Here’s how.

The relentless pursuit of profit is a hurdle that A.G.I. cannot circumvent. Consider Uber’s initial approach: its economical fares positioned it as a viable alternative to public transportation in cities.

This narrative began on a promising note, with Uber attracting customers with irresistibly affordable rides based on the future promise of autonomous vehicles reducing labor costs. This vision captivated deep-pocketed investors willing to absorb Uber’s substantial losses.

However, when reality came crashing down, and autonomous vehicles remained a distant dream, investors began to demand returns, forcing Uber to hike its fares. Consequently, customers who had replaced public buses and trains with Uber were left high and dry.

Uber’s business model, which epitomizes the neoliberal belief that the private sector outperforms the public sector, showcases market bias. It’s not just cities and public transportation that feel the impact. Hospitals, police departments, and the Pentagon increasingly rely on Silicon Valley to meet their goals.

As A.G.I. emerges, with its infinite scope and ambition, this dependence is set to intensify. No public or administrative service would be immune from its disruptive potential, and A.G.I. services can be marketed to entice them. Theranos, a startup that promised to revolutionize healthcare with innovative blood-testing technology, is a cautionary example: despite its eventual downfall, its impact on its victims was real, even if its technology was not.

From experiences like Uber and Theranos, we can glean a rough idea of an A.G.I. rollout. It’s likely to occur in two stages: first, the allure of heavily subsidized services; then, a severe pullback that leaves overly dependent users and agencies to bear the brunt of making those services profitable.

Silicon Valley luminaries often downplay the influence of the market. In a recent essay titled “Why A.I. Will Save the World,” tech investor Marc Andreessen confidently states that A.I. “is owned by people and controlled by people, like any other technology.”

Such sugar-coated language could only come from a venture capitalist. The truth is that corporations own most contemporary technologies. They — not the romanticized “people” — stand to gain from saving the world.

But are they genuinely saving the world? Their track record suggests otherwise. Companies like Airbnb and TaskRabbit were initially hailed as lifelines for the struggling middle class, and Tesla’s electric cars were touted as a solution to climate change. Soylent embarked on a mission to “solve” global hunger with its meal-replacement shake, while Facebook pledged to “solve” connectivity issues in the Global South. None of these companies delivered on their lofty promises.

The term “digital neoliberalism” seems fitting. This perspective recasts societal issues in terms of profit-driven technological solutions. As a result, public sector problems are transformed into market opportunities.

A.G.I.-ism has rekindled this solutionist fervor. Last year, Mr. Altman stated that “A.G.I. is probably necessary for humanity to survive” because “our problems seem too big” to “solve without better tools.” He even recently suggested that A.G.I. could usher in a period of human prosperity.

However, companies need to make a profit, and such philanthropic motivations are rare, especially among unprofitable firms draining billions from investors. OpenAI, which has already accepted billions from Microsoft, has even contemplated raising an additional $100 billion to develop A.G.I. Given the service’s massive hidden costs, such investments must inevitably be recouped. (One estimate from February suggested that operating ChatGPT costs $700,000 a day.)

Thus, the harsh reality of steep price increases to make an A.G.I. service profitable could materialize before “abundance” and “prosperity” do. By then, how many public institutions might have mistaken volatile, subsidized markets for affordable technologies and become dependent on OpenAI’s costly services?

Suppose you’re uncomfortable with your town outsourcing public transportation to an unstable startup. Would you be okay with entrusting welfare services, waste management, and public safety to potentially even more precarious A.G.I. companies?

Neoliberalism’s allure resides in its adept use of technology to alleviate societal discomfort without confronting underlying causes. A case in point is a 2017 tech project to enhance the commuter experience on a Chicago subway line. The suggested solution incentivized off-peak travel through rewards, using technology to manage demand rather than the more daunting task of boosting public transport funding to augment supply. Thus, the tech aimed to help people adapt to the city’s crumbling infrastructure, not rehabilitate it to cater to their needs.

Such a tactic of adaptation over transformation stems directly from neoliberalism’s long-standing advocacy for self-reliance and resilience. This ethos promotes an approach of personal upgrading and agile navigation, akin to startup strategies. Figures like Bill Gates exemplify this mindset in the discourse around A.G.I., heralding A.I.’s potential to “help people everywhere improve their lives.”

Promises of a solutionist banquet are just beginning: A.I. is portrayed as a universal tool, equipped to combat everything from future pandemics to loneliness and inflation. Yet, the past decade’s experiments with such solutionist methods underscore the shortcomings of these technological band-aids.

Indeed, the myriad apps offered by Silicon Valley for tracking spending, calorie intake, or fitness can occasionally be beneficial. However, they primarily gloss over the root causes of poverty or obesity. We must address these root causes for transformative progress, not just adaptive coping. There’s a vast difference between nudging people to maintain walking routines — an adaptive solution — and understanding why towns lack public walking spaces — a transformative solution promoting collective and institutional change.

Yet, neoliberalism and A.G.I. ideology often perceive public institutions as lackluster and inefficient, suggesting they should adapt to A.G.I. This thinking aligns with Mr. Altman’s recent worries about “the speed with which our institutions can adapt,” implying that early integration of A.G.I. systems could afford more time for adaptation.

But is adaptation the only route for institutions? Could they not also foster transformative strategies to enhance human intelligence? Or are we merely employing institutions to contain the risks posed by Silicon Valley’s technologies? In its current form, A.G.I. risks eroding civic virtues and exacerbating worrisome trends.

Neoliberalism stands accused of oversimplifying political life, reorganizing it around the principle of efficiency. This market-dictated efficiency, where value supersedes justice, inevitably erodes civic virtues. The evidence is ubiquitous:

  • Academia commodifies research and teaching.
  • Hospitals favor profitable services over emergency care.
  • Journalism gauges article value by views.

Imagine integrating A.G.I. into revered institutions — the university, the hospital, the newspaper — with the lofty goal of “improvement.” The subtle civic missions of these institutions might remain opaque to A.G.I., as these missions aren’t typically quantifiable — unlike the data used to train A.G.I. models.

In an A.G.I. utopia, will these institutions preserve their values? Or will introducing A.G.I. equate to inviting efficiency-driven consultants to overhaul them? These consultants, proposing data-driven “solutions,” often overlook the intricate blend of values, missions, and traditions at an institution’s core — a complexity easily missed when only skimming data.

The impressive performance of services like ChatGPT mirrors a conscious disregard for reality beyond superficial data. Modern systems like A.G.I. learn from extensive observational data to predict outcomes rather than understanding why those outcomes happen. If all A.G.I. observes are institutions scrambling for survival, it may never grasp their true ethos.

The A.G.I. proponents inadvertently align with Margaret Thatcher’s notorious neoliberal assertion: “There is no such thing as society.” They perceive intelligence as a product of individual minds, not broader society.

However, human intelligence is as much the product of policies and institutions as it is of genes and individual abilities. It’s easier to innovate while on a fellowship in the Library of Congress than when juggling multiple jobs without access to a bookstore or reliable Wi-Fi.

Instead of viewing augmenting intelligence as a technological problem — as the A.G.I.-enthusiastic Silicon Valley crowd does — investing in scholarships and public libraries could significantly enhance human intelligence. But if A.G.I. is just another embodiment of neoliberalism, we should prepare for a decline in institutions that foster intelligence. These institutions symbolize the remnant “society” that, for neoliberals, supposedly doesn’t exist. Ironically, A.G.I.’s ambitious goal of augmenting intelligence could diminish it.

This solutionist bias means even seemingly innovative policy ideas around A.G.I., such as the recent proposal for a “Manhattan Project for A.I. Safety,” lack inspiration. This proposal assumes A.G.I. is inevitable. But might our pursuit of enhanced intelligence be more fruitful if the government funded a similar project for culture, education, and nurturing institutions?

Such initiatives are essential to prevent our existing public institutions’ vast cultural resources from becoming mere training datasets for A.G.I. startups, perpetuating the fallacy that society doesn’t exist.

Whether A.G.I. poses an existential threat depends on the trajectory of the so-called “robot rebellion.” However, with its antisocial tendencies and neoliberal biases, A.G.I.’s ideology is already a threat. We don’t need to await the uprising of autonomous Roombas to question its principles.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org


ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.