AI maximalists and the danger of social Darwinism

Joan Westenberg
@westenberg
Jun 10, 2024

“Embrace AI or be left behind.”

It’s a slogan we’re hearing more and more, said with the conviction of a religious proclamation.

It has become the mantra of the artificial intelligence revolution.

Futurists, tech moguls, and AI acolytes preach the gospel of AI supremacy with evangelical zeal; dissenters are dismissed as Luddites, clinging desperately to an obsolete past as the tides of progress threaten to sweep them away.

“Embrace AI or be left behind” may turn out to be an accurate assessment of our technological reality. I’m not here to deny that.

But it’s a condescending and heartless idea rooted in the long-debunked notions of social Darwinism. It threatens to subjugate humanity to the whims of an amoral algorithm while waving away the immense perils and dislocations that await us in an AI-dominated future.

Jaron Lanier, the polymathic virtual reality pioneer, anticipated the false dichotomy at the heart of the AI ultimatum. “The most important thing about a technology is how it changes people,” Lanier observes in his 2010 manifesto You Are Not a Gadget. Lanier’s critical insight is that there is nothing predetermined or inevitable about the path of technological progress. Technology is a human construct, and its trajectory is shaped by its creators’ philosophical values and economic incentives. “Software,” he reminds us, “is not neutral.”

AI supremacists would have us believe that the rise of artificial intelligence is a kind of autonomous, almost supernatural force – one which humanity has no choice but to prostrate itself before. “Adapt to AI or become obsolete.” But who exactly will become obsolete in this brave new world? Not the tech billionaires who are pouring billions into AI research and development. Not the computer scientists and engineers building ever-more sophisticated neural networks. Not the hedge funds and corporations salivating at the chance to automate away millions of jobs.

No, as usual, the most vulnerable members of society are being set up to become “obsolete.” The low-wage workers whose jobs are at the greatest risk of AI displacement. The Uber drivers and warehouse pickers who may soon be replaced by self-driving cars and robotic arms. The radiologists and paralegals whose hard-earned expertise is already being encroached upon by deep learning algorithms. In industries from fast food to finance, AI is being deployed to boost corporate profits at the expense of human livelihoods.

At its core, the AI ultimatum is steeped in social Darwinist ideology – the pseudo-scientific notion, popularized in the 19th century by Herbert Spencer, that human societies are governed by a “survival of the fittest” evolutionary logic. Beguiled by the elegant simplicity of Darwinian theory, Spencer and his acolytes sought to apply the concept of “natural selection” to human affairs. They argued that Victorian England’s economic and social hierarchies were not arbitrary constructs but rather the products of an inevitable evolutionary process that ensured the “unfit” were culled from the human gene pool. According to social Darwinists, the untrammelled competition between individuals drives progress, and any attempt to protect the weak from the depredations of the strong is a dangerous and misguided interference with the natural order.

This thinking, long discredited by mainstream science, has made an insidious comeback among certain tech elites. Dressed up in the fashionable jargon of “disruption” and “creative destruction,” Social Darwinism 2.0 is being peddled as an unassailable truth by tech’s thought leaders. The old mantra of “move fast and break things” effortlessly segues into the new imperative to “embrace AI or be left behind.” In this worldview, the weak – whether businesses, workers, or entire societies – deserve to fail, and the strong will inevitably prevail.

The “embrace AI or else” ethos conveniently ignores one crucial fact: artificial intelligence is a product of human choices and values, not some divine force of nature. We are not passive bystanders in the AI revolution, helplessly swept along by the currents of technological change. We have the power to shape the development and deployment of AI systems in line with our deepest-held principles and aspirations. We can create an AI future that empowers rather than subjugates humanity, one that creates shared prosperity rather than entrenched inequality.

Computer scientist and AI ethics pioneer Joanna Bryson has argued that the key to building beneficial AI systems is to imbue them with human values from the ground up. “We are building machines that have power in our society. We can build them differently,” she says. By baking principles like fairness, accountability and transparency into the very architecture of AI, we can create systems that augment rather than replace human capabilities: an AI-powered future that works for everyone, not just the privileged few.

This kind of “human-compatible AI,” to borrow a term from UC Berkeley computer scientist Stuart Russell, will not happen on its own. It requires a massive mobilization of political will and civic engagement to counteract the laissez-faire social Darwinism that animates so much of current AI rhetoric and policy. It requires a steadfast commitment to democratic oversight and control over powerful AI systems and the unaccountable tech corporations developing them.

It requires us to declare with moral clarity that we will not meekly submit to technological determinism or the diktats of self-anointed AI overlords. That we will not reshape our societies and surrender our humanity to better conform to the needs of artificially intelligent machines. That we will muster the wisdom and the courage to put technology in service of human flourishing rather than vice versa.

Sixty years ago, at the dawn of the computer revolution, the mathematician Norbert Wiener issued a prophetic warning about the perils of ceding too much power to our machine creations. “We can be humble and live a good life with the aid of machines,” he wrote, “or we can be arrogant and die.” The sloganeering of AI supremacy – the false imperative to embrace a predetermined technological future or perish – is the voice of that arrogance.

Join thousands of other readers, creators and thinkers who subscribe to @Westenberg — where I write about tech, humans and philosophy.
