“I’m Afraid I Can’t Do That, Dave”

The AI bubble: “Anyone wanna buy some tulips?”

septentrionarius
Published in The Cult of Stupid
7 min read · May 6, 2024


Source: getwallpapers.com

In a work meeting last week, a senior manager (who shall remain nameless) mused on whether he should provide ChatGPT-generated transcripts of the weekly MS Teams whole-department call we were just coming to the end of, so that those who’d missed it could catch up later (Yeah, right. As if). As he did so, I groaned inwardly. Again. And I have now vowed to myself that I will never read it if they do, because then we’d have that in common.

You, dear reader, being one of the very few hardy souls who have ever read the things I write, either here or in the few enclaves of social media I still bother to frequent, will know that I’m by no means a neo-Luddite¹, but I find myself wanting to yeet (as I believe ‘da yoot’ say now) a metaphorical sabot into the workings of any megacorporation² intent on intubating and force-feeding us the diet of reconstituted slurry of which consumer “generative AI” seems mostly to consist.

I don’t have a particular objection to targeted uses of specific technologies. Neural models that examine things like protein folding, or sift through tumour scans so that the results can later be examined and checked by other, human eyes, are sensible uses of the technology. If you have a specific problem, and an AI solution can be shown to bring real value, then I’m not one to come out against that. In fact, I’d say that’s all the more reason to concentrate on making that stuff better: more efficient, less energy intensive, and more accurate (in no particular order).

But the current wave of what is fancifully and almost entirely incorrectly called “AGI”³ is, if not boiling my piss, leaving it on a dangerous level of simmer. These things are not, of course, all the same. There are the graphical models, most popularised by platforms like DALL-E, Stable Diffusion, and Midjourney. Their job now mostly seems to be to barf out weirdly uncanny vaseline-lensed, supersaturated, poundshop Disney-lite pictures, sometimes with writing in indecipherable alien scripts, sometimes with unsettlingly spectral humans with missing (or extra) limbs, faces painted by Edvard Munch on a particularly bad day, or not quite the required number of fingers. You know, the small stuff. However cartoonish the milieu, there’s something quite dystopian or post-apocalyptic about the people pictured, like they’re trapped in some E-number-fuelled toddler’s nightmare.

Then there are the LLMs, of which ChatGPT is just one. A friend of mine, who holds a chair in a humanities subject at a university somewhere in the north of the UK, was quite rightly bemoaning a load of this stuff very recently. In particular he was taking a pop at a couple of products called scite.ai⁴ and jenni.ai, which seem pretty much designed to remove any of the actual work of, you know, writing, from producing essays. From an academic point of view, anyone submitting one of those to me when I was marking would have experienced one of two outcomes: labelling the work as having used these tools would get you a zero; not labelling it would get you hauled up as a plagiarism case. In neither case could you have claimed to have done the work yourself, though in the former you’d at least have been honest about it. Depressingly, institutions seem to be giving up the battle early, all but admitting that students will use these methods, without addressing the major issue: the outputs this stuff generates are shit. You might as well get AI readers to scan the submissions too, award random marks to cut out any of the inconvenient ‘people’ bit, and save everyone the bother all round.

The Large Language Model flavour of “AGI” is neither Generalised, nor is it Intelligent. It is an energy-hungry, brute-force statistical parlour game. A couple of recent stats I saw in passing speak to just how resource intensive this stuff is. At present ChatGPT alone burns through enough energy every day to power an average US household for decades. And a short run of ChatGPT prompts can get through around half a litre of water in the server farms the models run in⁵. The energy and resource cost is huge, and growing, but what is the actual benefit?

Well, if you’re sitting in the boardroom at one of the major tech companies, it’s a new thing to sell to people as indispensable. If you’re Mark Zuckerberg at Meta, having just seen the company pour a huge amount of money into a “Metaverse” that hardly anyone seems to want, you need to find a new LLAMA⁶-shaped way to scrape cash away from your customers and keep rapacious shareholders quiet. Over at Microsoft, Satya Nadella went hard on LLMs and has basically packed their products so densely with this stuff that you can’t fart without Copilot asking if you’d like help with it. The fucking paperclip was bad enough, but this is a whole new layer of annoying, and worse. Google are also playing catch-up, and have produced Gemini to drip-feed you terrible suggestions on their increasingly enshittified search service. Apple are keeping their powder dry for now. They do have what appears to be a more energy-efficient model called ReALM in development, designed to work more on-device, but what form this will take in the open will only start to become clear after the 2024 WWDC event next month. Everyone is selling LLMs hard, because the Silicon Valley tech industry is nothing if not an ongoing co-dependent death march once the newest silver bullet has been identified. And this is filtering into the other parts of the corporate world, with CIOs pushing the latest wonders onto the minions below them. But none of them seem to be asking why we should even bother.

The thing that strikes me at almost every level about this stuff is that it is the embodiment of an overarching and increasingly prevalent idea in the culture: “will this do?”. The infamous Glasgow Wonka debacle used a lot of this stuff in its promotion, with so little effort expended that none of the weird artefacts the models produced looked to have undergone any post-production human moderation or enhancement. Of course not: that would mean paying people to do it, people who’d use actual effort and cost money which would be better funnelled out as profit for someone who doesn’t need any more of it. The stories over the last couple of weeks or so about things like LLM-generated cookbooks suggesting actively lethal ingredients for recipes, with no one bothering to check before they get spewed onto digital publishing endpoints, don’t engender huge amounts of confidence in the quality of this output. And why would you use this stuff to write academic papers, or assessments, when what you get are generated texts with invented references that their human “authors” won’t bother to check for accuracy? If no human is checking that, how much of the rest of that generated text can be trusted at all?

It’s insulting on so many levels. It’s telling me (as a reader, viewer, etc.) that I don’t deserve to have any effort spent on me as the supposed audience for the thing. The principal motivation behind what you judge appropriate for your audience appears to be a sort of contempt, the idea that you can palm them off with crap. In which case, if you can’t be bothered to put the effort into writing or creating, why should you expect me to pay any attention at all to the thing that gets produced?

The best description of an LLM I ever saw was to call it a “stochastic parrot”. All it is is a combination of probability, linear algebra, and a body of content to work on that may (or may not) have been chosen with any care, or consideration for ethics. The end product of this process is just a bunch of characters thrown together by the laws of statistics. There is no understanding, no intelligence, no discernible human effort. And no quality control. It’s resource hungry. It isn’t at all reliable. But the fact is, for a small number of people, it’s currently profitable. It’s the thing no one really asked for, but we’re going to get anyway. Thanks, all you Silicon Valley tech bros. Thanks a fucking bunch.
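If you want to see just how mechanical that parlour game is, here’s a deliberately crude sketch: a toy bigram model in Python. It is nothing remotely like the scale or architecture of a real transformer-based LLM (the corpus string and the `parrot` function are purely illustrative inventions of mine), but the basic move is the same one: count what follows what in a body of text, then roll weighted dice to pick the next word.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "corpus". A real model trains on terabytes of scraped text.
corpus = (
    "the model has no idea what any of these words mean "
    "the model only knows which word tends to follow which other word"
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def parrot(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling the next word in proportion to how
    often it followed the current word in the corpus. There is no meaning,
    intent, or checking involved -- only counting and sampling."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:  # dead end: this word never appeared mid-corpus
            break
        candidates = list(options.keys())
        weights = list(options.values())
        word = random.choices(candidates, weights=weights, k=1)[0]
        output.append(word)
    return " ".join(output)

print(parrot("the"))
```

Scale that idea up by a few hundred billion parameters and a scraped internet’s worth of text and you get the full conjuring trick, still with no understanding anywhere inside it.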

¹ I think I’m much more of a sensible neophile. New stuff can be interesting, and fun. But, if we’re going to go there, Luddism was not a knee-jerk anti-technological reaction. It was a perfectly reasonable defence of working people against the introduction of technology that was designed to destroy their livelihoods, by people who didn’t particularly fancy paying them a fair amount for their labour. Luddism had a very particular purpose, and target. Look around you now. Some things never really change.

² No, Microsoft. I do not want to have Copilot barging into every single interaction I have on my work computer, thanks. So kindly fuck off for me, would you?

³ Yes, it’s ‘Artificial’ alright. But it’s not what you’d call ‘Generalised’, and it’s certainly not ‘Intelligent’. So a one-in-three hit rate isn’t what you’d call great, is it?

⁴ This one made me snigger, especially if you use the Old English pronunciation and understand the original root of where “sci-” words come from. Go and look at some Proto-Indo-European discussion for quite literal shits and giggles.

⁵ Estimated energy consumption stats came from the University of Washington in 2023, and from Nature in 2024, based on GPT-3. In the intervening time GPT-4 has arrived and the number of queries has increased significantly, so these numbers are not outlandish in any way, even though I don’t have a direct source for them currently.

⁶ That’s the Facebook LLM. Cute, huh? Google’s was called Bard, but it’s now called Gemini because when it was first launched the balls it sucked were positively elephantine; it needed a rebrand, fast. Musk has one too, called Grok, because of course he does, but it’s extensively trained on the oh-so-reliable and well-balanced corpus of Twitter data, so you can imagine what a Clownshoes von ShitTheBed disaster area its outputs are.
