We are scared of the question ChatGPT cannot answer. Because the answer is too obvious.

Jascha Bareis
Mar 31, 2023


Large language models embody the current AI hype. To the problems we face as society, Silicon Valley answers with ever more technological complexity. And we all love it, because the reality is just too unpleasant.

Getting too close to the sky. The Fall of Icarus, Carlo Saraceni 1579–1620.

“Things will never be the same”, “Future Shock”, “Deeply Unsettled”, “A New Era: The Age of AI has begun”, “The new World power”, “One of the most powerful tools in human history”, “Should we risk loss of control of our civilization?”.

Little surprise. These are just some of the (social) media headlines praising and panicking about the current achievements of so-called large language models (LLMs). How often have we heard these outcries about AI in recent decades? Every previous wave of AI enthusiasm was shaken by a subsequent AI winter, yet now tech apologists proclaim the next mesmerizing AI wave: the emergence of LLMs like OpenAI’s GPT-3 (now 4, tomorrow 5) or Google’s Bard has once more electrified public and social media across the globe.

And what hasn’t been written in the last weeks? Forget about high-school language teachers, university writing assignments, or math exercises. Students will find ways to cheat anyway, so tutors and teachers had better hurry up and restructure their courses drastically. LLMs deliberately lie and can manipulate emotions. You have a humanities degree and now fear LLMs will take your white-collar job? Just rescue your career as a professional prompt writer. And of course, proclaimed apocalyptic threats are, as usual with AI, the cherry on top of the hype. A New York Times columnist was creeped out while testing Bing’s chatbot as it fantasized about malign ways to achieve world domination, only to (oddly!) declare its love to him and try to talk him into divorcing his wife. And beware: the menacing competition is on the rise as an AI race is declared between companies and countries, featuring the typical rhetoric of catching up or defending the pole position.

Big Tech depoliticizes political concepts

To be clear right from the start: this essay does not address technology. It addresses Silicon Valley’s vision of technology and innovation, a discourse cherished by media and politicians alike. Yes, the results of LLMs are impressive, and they will change the way we work. But the attention the topic receives and the discourse it creates (as usual with AI in the media) is also impressively frustrating. Yes, LLMs need drastic regulation, but focusing only on ex-post regulation falls short. It is also time to escape the well-known script Silicon Valley writes every time it pushes something onto the market and proclaims it will shake the social world “fundamentally”.

I work in the field of technology assessment, where I mediate between academia, politics, and the public in order to advise stakeholders on regulation and vision assessment. In my life as a researcher, colleagues and I have been emphasizing the dangers of overpromising AI, deconstructing its underlying rhetoric and the very problematic metaphors and imagery that are invoked to talk AI into being.

And I am getting tired of it. Moreover, I think it is rhetorically brazen to borrow metaphors that imply deep social change in order to hail the release of a chatbot. Revolution? New world power? The end of civilization? A pretty elitist and detached Silicon Valley perspective on the world, I would say. Or, as Tante puts it: “It’s wishful worries of a group of people who read way too much science fiction and way too little about the political economy and structures of reality!” Well, it works, because panicking creates attention: something Lee Vinsel has recently described as “criti-hype”, and which Alfred Nordmann had already critiqued in 2007 as “speculative ethics”.

Presumably, every person starving, being shelled by bombs on the way to school, worrying about right-wing populism, or migrating because a climate disaster struck her village would have preferred the tech billions to be invested another way.

LLMs, and AI in general, should be good for humans. So proclaim the en vogue publications of ethical AI guidelines by Big Tech and politics. Human-centered AI, for example, is a very popular approach, which the EU Commission embraces but which remains utterly ill-defined. Instead, the concept of “human-centered” is nurtured with more empty signifiers in the debate, like “trustworthy” or “responsible” AI, which are in turn filled up with ethical but undefined buzzwords like “accountability”, “fairness”, and “justice”. Colleagues and I have scrutinized this practice. It is a process that torments every trained political theorist (including myself), as this concept-washing simply negates the theoretical complexity and richness of these concepts. Worse, it is also an offense against the people who fought and even died for them (and still do), struggling for a life without racism, exploitation, and misogyny. And now Big Tech adorns itself with these nice-sounding ethical buzzwords, doing “ethics washing” without providing real solutions or taking responsibility for algorithmic bias, societal disinformation, or its ecological footprint.

I really do think we are missing the bigger picture with the new AI hype wave caused by LLMs. Actually, I am worried, to say the least, as once more we create even more problems through complex tech solutions without looking at the obvious answers. Or maybe we just do not have the courage to address the unspoken, discomforting truth: that tech will not save the planet, even if Silicon Valley, tech gurus like Elon Musk, would-be philanthropists like Bill Gates, and many tech and enhancement enthusiasts from the elitist Oxford and Stanford orbit want to make us think that way. They get too much media attention. The people who present alternative societal futures get too little. No wonder: Silicon Valley has created the machines and apps that perfected the attention economy.

Techies love the complexity-game

Interestingly, the current AI wave focused on LLMs suggests that the solution to many problems of our times is to give the correct answer to a Gordian task in no time, or to code a better model. As if solving humanity’s problems were a game in which we have to master complexity. Techies seem to love complexity challenges in the guise of a competitive game. Here, complexity is depicted as an essentially exclusive, elite realm only the special, gifted, white males know how to handle. No wonder AI breakthroughs have historically been proclaimed via the Turing test (which Turing actually called the “imitation game”), chess, Jeopardy!, Go, and now a chatbot passing all kinds of exams it was not trained for. If we humans lose in this competitive game, it is easy to then praise the new machine god, giving Silicon Valley the opportunity to legitimize its “disruptive” tech vision of society.

And there is this strange obsession with doing it better than the human. It is a rather odd comparison that dates back to the founding days of modern AI in the 1950s and is still taken for granted in Silicon Valley ideology today: if humans create a machine that does something better than humans, this will automatically help humanity! Outsource human labor to AI for the good of humanity! A hypothesis that still awaits confirmation, if we look at the big picture. Who really benefits from it, given current societal structures? Does the solution always lie in more productivity?

I think it is a pretty safe bet to say that LLMs will not solve anything fundamental enough to justify the term “revolutionary”. Why do I dare to say this? Because knowing the answers to some complex riddles or tasks is not what we need. It is actually a symptom of political failure to come up with ever more complex and costly solutions to longstanding societal problems. It is truly absurd how many more complex technological try-outs we are supposed to rely on to fight problems like poverty, geopolitical conflict, rapid urbanization or, yes, had we paid careful attention in the ’70s, climate change.

Let’s be frank. We know that our way of living and ordering society is harming many of our fellow humans and non-humans in the world, not to mention the environment. Take our way of consumption: how we produce fancy goods and externalize their hazardous effects (e.g. digging up minerals for the batteries crucial to our “e-revolution” and our AI infrastructure, which depletes the environment and exploits people in Africa, to whom we are then kind enough to ship our e-waste, too).

Worldwide electronic waste in millions of tons per year; only 17% of it is recycled. Source: ARTE, Mit offenen Karten. https://www.youtube.com/watch?v=0jty3HIR6HY

Our answer to societal problems: ever-more complexity!

The problem is that we are too comfortable to change it, because reality is at times utterly unpleasant. The issue with the hype around LLMs is that it suggests the answer to our problems is to make processes more efficient, quicker, and cheaper, ignoring the collateral societal and environmental damage. We are so steeped in tech-solutionist rhetoric that we implicitly take the Silicon Valley mindset for granted.

Energy problems? Smart grids! Mobility problem? Global warming? Just electrify and automate cars! Urbanization problems? Smart cities! (And if things don’t work out, let’s just go to Mars — if your paycheck allows it).

Self-proclaimed billionaire philanthropists like Bill Gates scare me when they proclaim that current LLMs are the key technology for reducing health inequality and global warming, and praise a future in which everybody will have a digital personal assistant. But no personal assistant can help us cope with the very harsh reality: what we need is “analog” political action.

Is it really “what is to be done?” we should care about, or rather “why don’t we do it?” Surely, it would get unpleasant for some of us who are in a position of privilege, precisely because it would really change something. Guess what: normativity cannot be tackled by LLMs. It is not a riddle. You cannot solve it by working through more data. It is a belief system you have to justify on good moral grounds. And ultimately you have to choose where you stand. Silicon Valley chose to blind us with tech solutions to big problems it defines and then calls fundamental.

Honestly, how many billions are invested in AI, how many smart brains work on it, instead of investing that money and brain power in functioning public transport with “unsexy” bikes and trams, a welfare state, anti-corruption policy, political education, or higher wages for nursery teachers? And by the way: how about paying taxes, dear billionaires and Big Tech companies? Instead, we count ever more on expensive and complex technology that runs on heavily centralized, resource-intensive infrastructure controlled by private tech giants.

“Efficiency”, “innovation”, “personalization”, “mental health” (?!)… ChatGPT’s modernization narrative.

Ultimately, it is societal decadence. It is a fetish game we do not have the money, resources, or time for, given the problems we face in the world. DER SPIEGEL recently interviewed the US anthropologist Joseph Tainter, who wrote the well-regarded book The Collapse of Complex Societies in 1988. The interview asked about parallels with the downfall of empires throughout history. Looking at current society, he answered:

“There are several parallels. Like the advanced civilizations of that time, we overexploit natural resources and try to solve challenging problems with technological advances. What all collapsed advanced civilizations have in common: They are developing ever more complex solutions to their problems. This entails high costs, while the benefit they derive from it is becoming smaller and smaller.”

This is exactly the modernist Silicon Valley ship of progress we have tacitly agreed to embark on. The marriage of tech optimism and the capitalist progress story has been going on for decades already. You do not have to be a technophobic Luddite or a catastrophizing pessimist to feel that this narrative is pretty outdated. I am worried that we think we can buy ourselves a better future by continuing to surf the techy progress wave, without having the guts to bite the bullet and confront “analog” reality. Because, yes, political reality bites.

Thanks to Nevena Nikolajević, Luca Hemmerich, Victoria Guijarro Santos and Max Roßmann, who improved the draft version substantially with their critical comments. And to Reinhard Heil for providing the DIgIT “IKT News”.


Jascha Bareis

Researcher and Political Analyst on Public & Military AI @ITAS Karlsruhe and @HIIG Berlin