Artificial Intelligence, ChatGPT, and Transformational Change: A Brief Wet-Towel Rundown Likely to Piss Off Nearly Everyone Currently on Fire

J. Martin
Feb 7, 2023

[Image: aerial photo of a large circus with tents and trucks]

Departure

Back in 2016, when AlphaGo beat Lee Sedol in a five-game match of Go, there were two moments of magic. One was AlphaGo’s creative and unique move 37 in the second game. The other was AlphaGo’s reaction to Sedol’s creative and unique move 78 in the fourth game. As for the first moment, AlphaGo’s move 37 solidly satisfied established definitions of creativity, including the extended definition I work with in my lectures and my Ludotronics methodology: that a creative idea is the product of persistence, constraints, and an unfamiliar combination of familiar ideas, and that the result should be original, interesting, and relevant. As for the second moment, AlphaGo’s reaction in human terms over moves 79–87 went from late realization to subsequent panic to eventually pulling itself together. Under the hood, of course, things were going awry in analyzable ways, particularly in AlphaGo’s value network. Thus, “late realization,” “panic,” and “pulling together” are purely metaphorical descriptors or perceptual illusions, depending on how you look at it. But there’s a caveat: as soon as we describe the activities of a human brain strictly at the level of its neural network and associated systems, “late realization,” “panic,” and “pulling together” turn into metaphorical descriptors or perceptual illusions as well.

So yes, with unfolding AI technologies since AlphaGo, we live in interesting times in many respects. Unfortunately, however, these times are not as interesting as some people either want to believe or profess to believe. More specifically, we’re living in transformational times, but these times are not as transformational as some people either want to believe or profess to believe. Climate change, certainly, is an exception. Climate change will be terminally transformational if not acted upon in moon-shot fashion very, very soon. That climate change isn’t acted upon in the way it needs to be acted upon, alas, stems from the same underlying reasons why the times we live in are not as transformational as they could be, or should be. What I mean by that will become clearer a few stops down the road.

Arthur C. Clarke famously wrote that any sufficiently advanced technology is indistinguishable from magic, an old adage bandied about more often than warranted, or useful. For us, in our context, a better fit is a book title by the Canadian science fiction writer (and blogging/social media friend of yore) Ira Nayman: What Were Once Miracles Are Now Children’s Toys. This part too will become clearer soon.

Now that tickets are checked and everybody’s comfortably seated, let’s travel to the land of large language models, ChatGPT, and their many siblings.

First Stop

Right off the bat, we need to answer a question that is mildly interesting at best but important to address: is ChatGPT intelligent in the popular sense of the word? No. ChatGPT is not intelligent in that sense. It doesn’t have the faintest clue what it’s doing. Every time you open the hood of a news item that credulously presents a miraculous feat in raging overdrive, you will find that what actually transpires runs on rather low RPM. ChatGPT writes your essays and papers? Well, the more you know about a topic, the more apparent it becomes that ChatGPT’s output for increasingly specific, non-trivial questions drifts toward what someone aptly called Kafkaesque garbage. And if you confront it with specific facts it got terribly wrong, it even begins to gaslight you. It also doesn’t know what citations are, or what they’re for; it only knows what citations look like. Then, did ChatGPT pass the U.S. Medical Licensing Examination? No, ChatGPT didn’t pass the USMLE; it was tested on publicly available USMLE questions from past exams and yielded “moderate accuracy approaching passing performance” (italics mine). Did ChatGPT pass the Google coding interview for an entry-level position? Apparently yes, according to an internal Google document, but the interview relies on technical questions readily retrievable from public sources. Or did AI, beyond ChatGPT, suggest 40,000 new possible chemical weapons in just six hours? It did, but its “suggestions” are utterly unconfirmed, absolutely riddled with false positives, hard to test, and even harder to synthesize (and virtually impossible to build by unauthorized actors under current regulations).

What about the things ChatGPT gets right beyond technical questions and assistance, like office and business communication, multiple-choice tests, off-the-shelf essay topics, and similar tasks, with only minor hazards attached? If you look at this kind of output very closely, you will realize that it’s one-hundred percent bland, non-specific, and serially predictable. (The latter, in particular, is not easily alleviated, as prediction accuracy based on previous contexts is an important part of its mode of operation, and primarily responsible for the illusion of expertise/understanding.) Yes, ChatGPT can execute vast swaths of business communication, fact-regurgitating exam questions, or college essays, abilities that will be both convenient and useful for humans, even liberating. But that doesn’t exactly speak of its “intelligence.” It only speaks of the suffocating blandness, non-specificity, and serial predictability of large parts of our business communication, exam questions, or college essays.
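
To make “serial predictability” concrete, here’s a deliberately tiny sketch in Python: a bigram lookup table with greedy decoding. This is nothing like ChatGPT’s architecture or scale (a neural network over vastly longer contexts, not a lookup table), and the corpus and names below are invented for the example. But the underlying pull is the same: if you always pick the statistically most likely continuation of the previous context, you get the statistically safest, which is to say blandest, text.

```python
# Toy illustration only: a bigram model with greedy decoding.
# The corpus is made up; real language models learn from billions of
# documents and condition on far longer contexts.
from collections import Counter, defaultdict

corpus = (
    "per my last email please find attached the report . "
    "per my last email please find attached the invoice . "
    "per my last email please find attached the report ."
).split()

# Count which word follows which: context -> next-word frequencies.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_greedily(word, steps=8):
    """Always append the most frequent continuation of the last word."""
    out = [word]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # The most probable (i.e., blandest) choice wins, every time.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_greedily("per"))
# -> per my last email please find attached the report
```

Real systems sample with a “temperature” parameter to inject controlled randomness precisely because pure likelihood-chasing reads like this; the gravitation toward the most probable, most average phrase is built into the objective itself.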

Beyond that, effectively, ChatGPT’s non-technical output can be so bad that it’s been called Mansplaining As A Service: “A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures in an instant with unflagging confidence in its own correctness on any topic, without concern, regard, or even awareness of the level of expertise of its audience.” For ChatGPT, as for humans, anything seems possible if you don’t know what you’re talking about, or what you’re doing.

Granted, in terms of not knowing what you’re doing, one can make a very good argument that humans don’t know what they’re doing either, and I don’t mean that facetiously. While we can’t yet rule out the hypothesis of “free will,” many of its spurious argumentative layers have been peeled off by neuroscientific and philosophical inquiries over time. Why do we visit the refrigerator at 3 a.m., even though we’ve been hungry for hours? We don’t even know that much with certainty, let alone why we make more complex decisions — beyond the colorful stories we tell ourselves to make our decisions make sense ex post facto. Indeed, there’s reason to assume that the question of free will is not only a red herring, often monotheistically motivated, but that the adjective “free” can no more modify the noun “will” than it can modify “sadness” or “blood pressure.” But we humans know what a fridge is; we know why we went there; we can reflect on why we didn’t go earlier despite our growling stomachs; and we can counterfactually imagine we went there for a snack three hours ago. And we know what a grocery list is, not just what grocery lists look like. That’s an incredible amount of mileage we get out of the neural networks of our brains, even if we generally don’t happen to know why we do what we do.

Second Stop

Then, let’s answer a second question that is already more interesting: is “intelligent” AI in the popular sense right around the corner? Appearances notwithstanding, that’s still a question for tea-leaf reading, and the reasonable answer to it should be no. “Intelligent” AI is not around the corner, provided you’re not in close pursuit of the ever-shifting goalposts of what “intelligent AI” supposedly means.

Admittedly, the definition of “intelligence” is still up for grabs even for humans; previous definitions have an abysmal track record that covers everything from racism to classism to cluelessness. So yeah: you can define and redefine “artificial intelligence” or “artificial general intelligence” or whatever label appears en vogue enough to your heart’s desire, and mean by that what you want it to mean. But if you buy the idea that intelligent AI as “strong” AI (which includes subjective experiences and understanding and some equivalent to thoughts) is right around the corner, I have a fertile plot of land on Mars to sell you.

Remember, six or seven years ago, when everybody who argued that fully autonomous self-driving cars, particularly in cities, were absolutely not right around the corner got screamed at (metaphorically)? Even a cursory glance at the underlying technologies told you that at the time, and it certainly should tell you now. You can map this, many times over, onto the current discourse on impending “intelligent” AI.

Third Stop

Finally, let’s answer a third and indeed very interesting question: will current AI technologies change our lives? Absolutely. They will change our lives as an extension of our toolkits in almost every field of human activity, and they will change our lives through an intensification of exploitation in almost every field of human activity. These two parameters open up a coordinate system where we can create prediction points through research and imaginative thinking — but it’s hard to read anything off it right now, as its conceptual space-time is being warped into a pretzel shape by the sheer mass of futurity consultants and capital investments banging into it with a vengeance.

However, at this moment in time, we can at least say this. Today, figuratively speaking, these technologies will be implemented in our design tools and graphics editors and search engines and word processors and translation/transformation apps and game engines and coding environments and digital audio workstations and video production editors and business communication platforms and diagnostic tools and statistical analysis software in everything-everywhere-all-at-once fashion, with the possible exception of our singularly immobile educational systems, and we will work with them without batting an eye once the novelty value’s worn off. And by tomorrow, what were once miracles will have become children’s toys.

But make no mistake. As one outstanding reason why these technologies won’t have the transformational impact one might wish for, the intensification of exploitation has been baked into these technologies from the get-go, be it DALL·E or ChatGPT or VALL-E or MusicLM or neighboring applications under different names and brands. Was image-generating AI trained on Mickey Mouses or Toms and Jerrys? Of course not. When imagery from such sources popped up, it originated from fan art that Disney or Warner habitually take down via copyright claims. (The only exception so far seems to be Stability AI, who—apparently with a death wish—scraped millions of images from extortionist and copyfraud perpetrator Getty Images, of all companies.) Primarily, image-generating AI was trained on “public images around the web,” which is a polite way of saying that they took the works of artists without their consent. A train, predictably, that Adobe couldn’t board fast enough: its cloud users’ artworks have been scraped since August last year, after an unannounced automatic opt-in. The same, only less obviously, happens to writers. And journalists: as it turned out, CNET’s AI-generated articles were not only riddled with errors but also substantially plagiarized. And open source coders: their work is used, but precisely what is used is kept secret to make it harder for them to sue. And scientists from all fields, humanities included: they find themselves caught between the rock and the hard place of Elseviers and ChatGPTs. They can decide now how they want to be exploited: either the old-fashioned way, by coughing up dearly and having their work locked behind paywalls, or in new and exciting ways, by seeing their work freely distributed and applied (and garbled) without attribution or recompense.

On larger scales, naturally, these dynamics will impact workplaces and entire industries, which has already started, and the economy as a whole. Which jobs will be impacted by these technologies? How many new jobs will be created, and what kind? Will more jobs be annihilated than created? And which? It’s something we cannot know yet; economies, micro to macro, are complex beasts. And with new technologies, the historical perspective carries you only so far. There are three basic ways to predict what will happen. You can build models, supported by AI, of course, and try to project the impact on certain classes of jobs or industries or markets. This is hard. Then, you can create fictions (including forms of scholarly non-fiction) with highly imaginative yet rigorous as ifs, to play through certain scenarios and see where they lead. This is also hard. Finally, excuse my French, you can pull economic predictions right out of your derriere to look important in bylines and on social media, which is easy. Your pick!

What isn’t hard to predict in the absence of true transformation, however, is the proliferation of avenues for exploitation. Some have already been opened, more are being paved and prepared to receive traffic soon, and even more will be newly constructed in years to come.

On-Board Entertainment

One reason why all this is not glaringly obvious is the dazzling and distracting bombardment with AI sideshow acts, where endless streams of parlor tricks and sleights of hand are presented, from fake Kanji, robot lawyers, and crypto comparisons to made-up celebrity conversations, emails to the manager, and 100 ChatGPT Prompts to Power Your Business. Through all that glitter and fanfare and free popcorn, many people don’t notice — or don’t want to notice or profess not to notice — that the great attraction in the center ring is just business as usual, only that the acrobats have been replaced by their likenesses and no longer need to be paid.

And then there’s a second layer of obfuscation, the nonsense barrage about how Artificial General Intelligence will someday deliver us from capitalism’s problems. In an exclusive Forbes interview, “OpenAI’s Sam Altman Talks ChatGPT and How Artificial General [!] Intelligence Can ‘Break Capitalism,’” Altman wraps everything neatly around a well-worn s(ch)tick: “I think capitalism is awesome,” he says, “I love capitalism. Of all of the bad systems the world has, it’s the best […] we found so far. I hope we find a way better one. And I think that if AGI really truly fully happens, I can imagine all these ways that it breaks capitalism.” But wait — Altman doesn’t think that “we’re super close to an AGI.” Which is truly fully strangely convenient, if you ask me — especially after it came to light that OpenAI used Kenyan workers to make ChatGPT less toxic, under working conditions of a “traumatic nature” at take-home wages of “between around $1.32 and $2 per hour depending on seniority and performance.” And I’ll spare you the details of why the Kenyan contractors terminated their business relationship with OpenAI early; you’ll have to read that for yourself. Unsurprisingly, moreover, “all these ways” in which AGI breaks capitalism are left truly fully unspecified in spite of the article’s intriguing headline.

That’s where we are. Entertained by circus acts and pacified by cotton candy, we take a while to register that the main attraction we’re waiting for, The Great Transformational Change, will not take place.

Terminal, or: Rear Mirror Future

Of all the things we need to keep an eye on, without letting our attention be terminally redirected to the spectacular parade of entertaining sideshows, two are particularly worthwhile and important.

One thing we need to keep an eye on is how AI will be openly or surreptitiously applied to rig the system even more — toward exploitation, surveillance, systemic racism, resource hoarding, and outright fascism. Our democratic systems are very fragile and under immense pressure already, and we can expect that both local and global ramifications of the approaching climate catastrophe will push them to their breaking point sooner than we think.

The other thing we need to keep an eye on is narrow fields that might be complex, but can still be approximately charted in terms of rule sets or fact sets, traceable cause-and-effect patterns, and definable outputs. It’s here where AI will most likely develop its most interesting and, hopefully, most promising and awe-inspiring abilities — as transformations from the margins, if you will.

AlphaGo was certainly such a transformation from the margins. In 2017, an advanced version of it was put online. Not only has it beaten all the world’s best players since then; the handicap necessary to beat it also seems to be growing. Did AlphaGo’s successor keep coming up with creative moves like move 37? According to reports from expert players and their different points of view, not so much. Its playing style’s been called rebellious, its moves “strange ideas,” even “most smelly and messy” (最臭最难), and, quite often, “looking weak,” as if AlphaGo were an amateur who didn’t understand the game’s well-established, complex formulas. (But in the end, its moves always prove to have been valid and not wrong.) Also, its standard tactic has been described as gaining an advantage with moves that appear erratic and incomprehensible to expert players, and simplifying the board as soon as that advantage is established.

From there, in the context of its incomprehensible moves whose true value humans cannot figure out until it’s too late, the point’s been made that AlphaGo is so far ahead in terms of “judging the situation” that humans might no longer be able to compete with it. (On a side note, Choi Jeong and Park Yeong-hoon’s assessment that “AlphaGo is not bound by any prejudice, so it can play Go as it wants” reminds me, predictably, of a certain movie scene.) Finally, there’s this interesting remark by Shin Jin-seo: “[AlphaGo’s successor] doesn’t make surprising moves, so it’s not easy to tell if a human is playing or an AI. No complicated battles are necessary, it wins extremely easily.”

All this indeed sounds as if AlphaGo’s creativity has tapered off, one way or another. If that’s the case, however, my personal take would be that for AI too, necessity is the mother of invention. As superhuman as AlphaGo has become in its complex but tractable field of expertise, it might simply have no need for creative moves anymore. So if we want to design a challenge and rev AlphaGo up into creativity mode again, we’ll have to invent something new. Which, how could it be otherwise, will involve advances in AI.

These are the dynamics to watch out for, in tractable fields with reasonably defined rule sets and fact sets, approximately traceable causes and effects, and reasonably unambiguous victory/output conditions. Buried beneath the prevailing delusions and all the toys and the tumult, they won’t be easy to spot.

Circus courtesy Sergio Souza. Kafka sculpture courtesy Aviv Perets. Pathfinder courtesy NASA/JPL. Display courtesy Tima Miroshnichenko. Cotton candy courtesy Eduardo Carvalho. Shintō ema courtesy myself. All images are in the Public Domain.
