AI Creativity & Elon’s Pocket Singularity

The Mill Experience
Dec 15, 2022

Matt Pearson reflects on the hyperbole around the latest “AI” creative tools and how they might have a bearing on the rumblings over at Twitter…

Is “AI” now the most redundant prefix in tech? Overused to the point of meaninglessness. It’s certainly giving “digital” a run for its money.

A new breed of creative tools has been trickling out over the last few years, with massive potential to change how we work in the creative industries. This month it’s ChatGPT, the latest in a line of language tools from San Francisco company OpenAI, that has entered the collective consciousness. Launched at a time when our feeble, biological brains were still struggling to absorb the new aesthetic of the ML-based image generation tools (MidJourney, DALL-E, StableDiffusion) that preceded it.

Technological change can bring fear and uncertainty, as well as excitement. So most reactions to this tech, from expert to layperson, have been some mix of these three. And thrown casually into this brew is the term “AI”. Which probably doesn’t help.

https://twitter.com/yezzer/status/1599508815719776256

So I’m going to take a moment to describe the scene from my desk, as a coder on a creative team. With alien intelligences hammering at the door, here to steal my job, end humans, and accelerate The Singularity, somehow I still sleep okay. Perhaps, with my help, you can too.

Someone Save Us From The Labour-Saving Tools

Fear of AI is endemic in tech circles. A natural consequence, perhaps, of now living in a world that, when we were growing up, was only really discussed in science-fiction terms. There’s an apocryphal story about a well-known Silicon Valley founder who is careful never to say anything negative about AI, because he’s terrified that, in the future, he will be held to account for his words by our new overlords.

I don’t know who that story is about, but it’s not Mr Musk. He talks about AI often, describing it as the “biggest existential threat” of our times. This is Elon:

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it…It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road.”

Sci-fi and futurology have long spoken of an approaching “singularity” — the point where a general machine intelligence supersedes human intelligence. Post-singularity we are no longer the dominant species, technological growth explodes without us in the way, and we are either enslaved, wiped out, put in jars, farmed for our juices, or whatever else tickles your amygdala.

There is also the shorter-term, mill-burning worry that AI is coming for our jobs. That a load of white-collar work will soon be as redundant as all those hobo ex-truck-drivers begging for food outside the driverless car wash.

Yes, it’s ridiculous that the idea of the human race having to work less in the future is now seen as dystopian. AI replacing our jobs could, with a more glass-half-full attitude, be seen as a good long-term goal for us to work toward collectively. But, unfortunately, we’ve still a number of snags to iron out with this slightly buggy install of the Capitalism beta. So it’s not unnatural for people to fear for their livelihoods.

With this in mind, my ears pricked up when some of the recent talk turned to my job. The latest version of OpenAI’s text tools can write code, and it’s caused some existential musing amongst my colleagues. Does this mean we’re about to be put out to pasture?

Code, By Code, For Coders

We’ve seen algo-written code before; it was just a bit crap. Now it’s better. I had a genuine wow moment last week when I gave OpenAI’s davinci-003 model the prompt “write Processing code for a Pong game”, and it returned a chunk of code that not only compiled and ran, it was … Pong.
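For anyone curious what that experiment actually involves, here’s a minimal sketch using the openai Python package as it stood at the time (the legacy completions endpoint). The prompt is the one above; the key, token limit and temperature are placeholders rather than a record of what I actually ran.

```python
import openai

# Assumes the pre-1.0 openai package and a key from your OpenAI account.
openai.api_key = "sk-..."  # placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-003",  # the davinci-003 model mentioned above
    prompt="write Processing code for a Pong game",
    max_tokens=1024,           # illustrative: enough room for a small sketch
    temperature=0.2,           # illustrative: keep the output close to "typical" code
)

# The returned text is a chunk of Processing code, ready to paste into the IDE.
print(response.choices[0].text)
```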

Creating human-readable text has always had a much lower bar than writing machine-readable text. We humans are quite generous in our interpretation of what we take in. We are prone to a phenomenon called pareidolia, a tendency to find meaningful patterns in noisy or ambiguous input. Our perception will fill in gaps, paper over cracks and, generally, give the benefit of the doubt on a subconscious level. This is why we see faces in clouds or images of the Madonna on a slice of toast.

https://en.wikipedia.org/wiki/Pareidolia

Our brains are wired this way at a deep level. So if we read a sentence with some sloppy grammar and spelling mistakes, we are forgiving because we look for the patterns we want to see.

Computers don’t work like that. They don’t have such biases; they just interpret what they are given literally. As literally as my three-year-old that time I told him to “go wash your hands in the toilet”.

So, when OpenAI can produce a chunk of code that not only compiles and runs, but also kinda does what it’s meant to do, that does seem remarkable.

But this isn’t intelligence. No mind is writing this code; it’s just a (very clever) search engine, working on a corpus of all the code on the net. It’s not threatening my job; it’s making my job easier. It can’t do it on its own any more than a hammer can hit a nail without someone to swing it. I can’t tell it to write a AAA PlayStation game and go to the pub for the afternoon. But it can save me having to write a smoothing algorithm again for the 500th time.
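To be concrete about the kind of boilerplate I mean: the smoothing in question is usually nothing fancier than an exponential moving average. A rough sketch (the function name and the alpha parameter are mine, purely for illustration):

```python
def smooth(values, alpha=0.1):
    # Simple exponential smoothing: each output leans on the previous
    # smoothed value, nudged toward the new sample by a factor of alpha.
    smoothed = []
    prev = values[0]
    for v in values:
        prev = prev + alpha * (v - prev)
        smoothed.append(prev)
    return smoothed

# e.g. noisy sensor or mouse readings in, gently smoothed readings out
print(smooth([0, 10, 0, 10, 0, 10], alpha=0.3))
```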

Without wanting to ruin the magic, we coders don’t keep it all in our heads. I’ve been coding more years than I care to admit and I still have to google how to declare an array. Every. Bastard. Time. We build upon the existing knowledge, which is well organised on the internet. Coders are, in essence, biological machines for converting Stack Overflow posts into applications.

ChatGPT, and its friends, want to help us with this. These are human-created tools, designed to serve human desires. Their understanding of which is reassuringly limited.

Stoopid AI (via Kev)

The End Of Art, Again.

Similarly, over in the realm of “AI” image generation, you can hear the distant wails of the “end of art”. How this new set of tools will render creativity redundant, because we just type whatever we want into a prompt box and a machine will create it for us. But this is no more the “end of art” than the invention of paint.

It is, again, just a new form of search. It is finding results within a probability space, and (cleverly) mashing up a visual from what’s there. Images returned from prompts are not really “new” creations, they are an average. They’re an approximation, based on what has come before, of what the machine thinks we mean by “man sitting on the toilet embarrassed to be covered in melted cheese” (thanks Annie).

Art has never been a quest for a lowest common denominator, for the average of all the world has seen before. That’s not art, that’s Saturday evening TV. Art is the exact opposite of that. It is a search for novel and appealing ways of seeing.

You might argue that Man Sitting On The Toilet Embarrassed To Be Covered In Melted Cheese is doing this — that this has a style that is new to us, and so can be considered a fresh perspective. And, secondly, that in these images you might see some mashups you’d never have imagined before.

The style argument might pass a cursory critical appraisal, until you consider what exactly it is that you see as stylistically unique/interesting. It’s the mistakes.

Park your pareidolia for a second, and you’ll see lots that’s “wrong” in the image. The man sits in the toilet, not on it. The seat behind him is an odd shape. The cheese is, in places, closer to egg. Were this a human artist, we would ascribe meaning to these decisions — “what is the artist trying to say here?”. But, generated by a machine, this is not meaning; it is simply bugs.

It is interesting because it is failing to meet the brief. The tool is intended to be better than this. But we see charm in its stupidity.

The second argument is stronger. Although, the biggest reason this mashup of cheese, toilet and embarrassment has never been visualised before is not that it was impossible to conceive of. It’s simply that, previously, no-one had ever been arsed. All that’s happened is that we now have a new labour-saving tool to remove that tedium. The space can be explored a lot faster.

A designer, given the cheese brief, can’t pass this image off as job done, and go join the coders in the pub. But it has saved her some time, and spared a disappointing search through a stock photography site. It’s given her something as a starting point. It’s thrown another sausage at the wall. In short, it helped.

There was a piece in the Guardian last week where they “asked six leading artists to make work using AI — and here are the results”. The results, if you’ll allow me a little critical licence, were utter shite. No-one comes out of the story very well; not the artists, the tech, or the author trying to find a positive spin on it. Possibly because of the cynical premise that you can just throw any old crap at these tools and it will do you an art.


Don’t get me wrong, this is a super-interesting space, ripe for experimentation. I’ve seen some beautiful work, extrapolated from the data set of all human art. But it’s not an end, it’s just another beginning. It’s a new tool that, very quickly, will be taken for granted as something else computers can do.

Unseen architectures imagined by w:blut + MidJourney
Golem Practice by shardcore + Stable Diffusion

The Singularity Happened Last Wednesday

The ultimate destination of these “AI” tools is mundanity. Image generation will soon be just another drop-down menu option. ChatGPT will be the booking system you have to outwit to get a doctor’s appointment. And we’ll have moved on to spouting fresh hyperbollocks about whatever is new that week.

Algo-text is already widespread. You’ll have encountered it in online news sites, targeted spam, customer service bots, etc…, probably without even noticing.

It’s a cheap way of creating content, but it’s debatable whether it has cost jobs in those areas. It hasn’t, necessarily, reduced the demand for human-crafted textual content. It might even have increased it, by raising the water level of shite that our messaging needs to rise above.

And of course algo-text is rife in that most fecund of algo-petri-dishes. The one place made-up nonsense will always have a home. Twitter.

Twitter are very private about exactly how many “fake” accounts they have on the platform. The estimates range from 5% (Twitter’s sales dept) to 100% (solipsists). It will vary radically by context, but it is easy to imagine areas where it could be more than half. The first of the dramas around Elon Musk’s purchase of the platform centred on his doubts over Twitter’s 5% claim.

https://twitter.com/PPathole/status/1526958854642212867

Fake accounts are not just bots. They can be, as we discovered in the wake of the Mueller Report in 2019, simply meddlesome Russians pretending to be MAGA moms weighing in on debates about #BlackLivesMatter. In that particular instance it was mainly actual humans creating the content, not bots. So the latest AI-text tools may be threatening those jobs. But I’m not sure that’s going to upset many.

A decade ago I wrote a piece for CAN about algo-tweeting, back when “AI text” was all Markov chains. In it I (facetiously) described Twitter as a “land of the dead” where, if you threw a stone, you were more likely to hit something non-human than a creature that bleeds. I speculated that the ultimate fate of that social network would be bots talking to bots, in a techno-circle-jerk, with humans no longer involved.
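For anyone who never met that generation of bot: a Markov-chain “writer” just records which word tends to follow which in some source text, then stumbles through those statistics. Something like this toy sketch (function names and the sample text are mine, for illustration only):

```python
import random
from collections import defaultdict

def build_chain(text):
    # Record, for every word, the words that followed it in the source text.
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain, length=20):
    # Wander the chain, picking a random successor at each step.
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the bots are talking to the bots and the bots are happy about it"
print(babble(build_chain(corpus)))
```

Crude next to today’s language models, but short, cheap, and more than capable of passing for a human across a timeline of 140-character fragments.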

Twitter might be thought of as a global-scale Turing Test — the “imitation game” whereby an intelligence is tested in a limited medium, e.g. by passing slips of paper under a door. Millions of real-life humans fail this test every day, whilst millions of algo-text bots can pass it. This is partly the problem. The reason Twitter can’t simply remove the non-humans from the platform is that the two are effectively indistinguishable.

So now Twitter is owned by this prominent personality who keeps popping up in discussions around AI. That guy warning of an AI singularity whilst also, incidentally, being one of the founders of OpenAI. And Mr Musk’s “restructuring” at Twitter appears dead set on making the dream of a fully-automated-luxury-bot-network come true.

This sea of bots may not marry with your experience of the platform. It’s not really mine either. But this is only because, as with our AI tools, it’s dependent on us putting in at least some of the work. I’m human, and like other humans, so I’ve curated a feed that reflects this. It doesn’t mean there aren’t other bubbles where that’s not the case.

Elon Musk describes himself as a “free speech absolutist”. Which is a fine principle, in theory. But in practice, applied to such a nuance-free platform as Twitter, the greatest beneficiaries of easing content moderation will be non-human actors. We should not underestimate the behind-the-scenes effort that goes into maintaining the illusion that Twitter is a functional space for reasoned human discourse. Open the gates a crack, and immediately the crap will begin to rise.

The gift of these “AI” tools is scale. Just as we can now explore larger design spaces, or generate mountains more marketing blurb, we can also easily create exponentially more Twitter posts. Which are indistinguishable from human-generated content. These bots can happily talk and talk, not caring whether they are being seen by real eyeballs or only other bots.

It’s possible, indeed likely, that a “singularity” could already have happened on Twitter. That the bots have overtaken the humans, and have formed a ‘society’, happily chattering away, serving only their own needs.

But this is fine. They are not a threat to us. It’s a singularity that happened in a nice, discrete space — Elon’s $44 billion pocket universe — that offers no threat to the outside world.

We have nothing to fear from a singularity in a box. I mean, why do we expect AI to have any interest in our ‘real’ world, anyway? I never really understood that bit.

So let’s not fear the “AI” tools. Let us laugh in the face of the singularity. Let’s get back to dreaming of a future where AI does all the work, and we can all just party on their profits.

Okay, we still have to fix capitalism first. But that’s a problem for another day.

