The Stop Button Society

Longing for the end of capitalism?

Wim Naudé
March 8, 2023
Source: Image by Gerd Altmann from Pixabay

The West has become a stop button society. In such a society, the knee-jerk response to a threat, or even a perceived threat, is to push the stop button. Facing the climate crisis, the degrowth movement wants to press the stop button on economic growth in the West. Facing a possible future artificial superintelligence, many people want to push the stop button on Artificial Intelligence (AI). In the view of many, economic growth and AI have become runaway trains heading towards the precipice, and the only sane response is to pull the emergency brake. Books, movies, and Netflix series dealing with the apocalypse and a post-apocalyptic world are hugely popular.

Of course, pushing the stop button is the correct thing to do when steering towards an abyss. But do economic growth in the West, and AI, really need to be stopped? And if not, could stopping them cause even greater harm? Are we mistaking the accelerator for the brake? And why has the West become so risk-averse?

The growth stop-button

First, stopping anthropogenic climate change is necessary. It poses an immense risk. But trying to do so by pressing the stop button on economic growth in the West will not work. Reducing the West’s GDP will not cut carbon emissions enough to limit global warming to 1.5 degrees Celsius above pre-industrial levels. Most emissions now come from developing countries, where they will only grow, and we would still have to decarbonize whatever remains of Western economies after they have followed the degrowth cult’s prescriptions.

But degrowth can even be tantamount to flooring the accelerator as far as a climate catastrophe is concerned. Most likely, “degrowth might turn out to be dirty.” Under degrowth, we will have fewer resources to invest in renewable energy and the decarbonization of the economy. Businesses may substitute cheaper but more polluting technologies for more expensive, cleaner production techniques.

A recent summary of degrowth’s shortcomings concludes that

“global warming is a serious issue and claiming to respond to it with ‘solutions’ of uncertain carbon efficiency, which we do not know how to implement in practice, is irresponsible. It is all the more irresponsible because the scientific literature on degrowth seems to have stagnated for at least ten years.”

The AI stop-button

Second, stopping AI research out of fear that it poses an existential risk may likewise seem reasonable on the face of it. Yet it, too, may be equivalent to flooring the accelerator as far as existential threats from technology are concerned. This is because we may need AI to revitalize the stagnating, ossifying economies of the West. AI may offer innovation in the method of innovation, helping to overcome the decline in new ideas as populations shrink and the burden of knowledge that hinders scientific progress. It is a General-Purpose Technology (GPT) that will alter the “playbook” of innovation and may be, as I. J. Good put it, “the last invention that man need ever make.”

Theoretically, AI may pose an existential risk. But how real is this risk, and how risk-averse should we be? Those who want to shackle AI research tend to fall for what is known as Pascal’s Mugging: the erroneous conclusion that if the probability of a future catastrophe is minuscule but the catastrophe would be existential, then any action or cost now is justified to avert it. If we do not succumb to Pascal’s Mugging, the most reasonable course of action is not to press the stop button but to proceed incrementally, with sufficient caution and oversight. This is indeed the position that most AI scientists (around two-thirds) seem to support.
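To make the mugging concrete, here is a stylized expected-value illustration of my own (not a calculation from the article or its sources). Suppose the probability of an AI-driven catastrophe is some tiny $p$ and the loss if it occurs is $L$. The expected loss from doing nothing is

$$\mathbb{E}[\text{loss}] = p \cdot L.$$

If $L$ is treated as effectively unbounded (the extinction of humanity and all future value), then for any positive $p$, however small and however poorly estimated, $p \cdot L$ exceeds any finite cost $C$ we could pay today, so paying $C$ always appears justified. The flaw lies in letting an arbitrarily large $L$ swamp an arbitrarily small and speculative $p$, which is precisely why incremental caution, rather than the stop button, is the more defensible response.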

The real risk

The real, largely neglected risk is not excessive growth and consumption or runaway technological (AI) innovation, but stagnation. Even without the intervention of the degrowth movement, economic growth, innovation, and science are slowing down in the West. At current rates, declining population growth will cause world GDP growth to fall to zero somewhere between 85 and 250 years from now. And the returns to science and innovation will continue to decline as “ideas get harder to find.”

The dangers of this scenario are that it would leave the world much more exposed and vulnerable to shocks, make the adjustment to a zero-carbon-emitting economy more costly, and raise the risk of conflict by turning the economy into a zero-sum game. As Nate Hagens envisages, it would be a “great simplification” but one where life would be “nasty, brutish and short.”

Thus, economic growth in the West, and AI research, do not need to stop, at least not soon (growth will of course end eventually; there is no such thing as infinite economic growth). This is not to say that we should not try to ensure that economic growth is green, decoupled, sustainable, and inclusive, driven by quality investments rather than conspicuous consumption. Or that we should not try to prevent “awful” AI applications and outcomes through appropriate legislation and oversight, in other words good AI governance. The train does not need an emergency stop; it must continue towards the right destination.

Boogeymen

The final question is: why has the West become a risk-averse, stop-button society? In 1995, the economists Andrea Baranzini and François Bourguignon published a model in which they asked what a country should do when it faces the decision of “whether to adopt or not a new technology that will raise the rate of GDP growth by some variable amount.” They considered the possibility that such a new technology and its consequence (growth) would increase the likelihood of humanity’s extinction. To minimize this risk, the rate of innovation and economic growth would have to be reduced.
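One stylized way to write down this kind of trade-off, offered here as my own schematic rather than the authors’ exact specification, is a planner choosing a rate of technological growth $g$ that raises consumption $c$ but also raises an extinction hazard $\lambda(g)$:

$$\max_{g} \int_{0}^{\infty} e^{-\rho t}\, S(t)\, u\big(c(t; g)\big)\, dt, \qquad \dot{S}(t) = -\lambda(g)\, S(t), \qquad \lambda'(g) > 0,$$

where $S(t)$ is the probability that humanity survives to time $t$, $u(\cdot)$ is the utility of consumption, and $\rho$ is the discount rate; all symbols are illustrative. Choosing a lower $g$ buys survival at the cost of forgone consumption, a trade that is attractive only once consumption is already high.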

They found that low technological growth with an extinction probability of zero is optimal only if the utility of survival and the risk posed by technological innovation are relatively large. As such, “sustainable growth is consistent with optimal growth only for affluent societies.” This means that degrowth and concerns about the existential risks of AI will primarily be rich-country concerns. And indeed, they are. It has been said that degrowth is not really about the climate but about a “crisis of meaning for the affluent”, which, in their view, requires the end of capitalism.

Interestingly, the degrowthers have this in common with those who fear AI as an existential risk: they ultimately long for the end of capitalism. As the writer Ted Chiang remarked, affluent Westerners envision the world’s end through

“a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.”

In the meantime, while the West wallows in its crisis of meaning and struggles with its economic growth and AI boogeymen, it should not expect much sympathy from the rest of the world.

