TikTok’s AI death prediction trend reveals something much deeper

Simon Kenny
7 min read · Sep 30, 2022

“TikTok influencer’s death”

A new phenomenon has caught the attention of sensationalist AI commentators: TikTok AI death predictions. Fads often sweep through TikTok, and this current trend appears to be much like the others at first glance. Participants use the AI Greenscreen filter, a moderate-quality text-to-image art generator, with the text prompt “my death”, and capture their usually horrified reaction in a short video.

Other kinds of predictive queries put to image-generating AI have also captured the public imagination, such as:

· What will the last human on Earth look like?
· The last selfie on Earth.
· What happened to Queen Elizabeth after she passed?
· The last thing you see before you die.
· How did I die in my past life?
· What will the Metaverse future look like?
· What does God look like?
· Draw a picture of the friendly ghost haunting my house.

It was perhaps the discussion of the “true horror story” of Loab that first sparked this recent interest in the spooky side of image-generating AI. Loab is the name given to a character that the artist Supercomposite ‘discovered’ in the image-generating AI Craiyon (formerly DALL-E mini). The artist was able to reproduce this character in many images using a process they call a “negatively weighted prompt”.

Loab was interesting primarily due to its consistency of reproduction and its disturbing settings. AI-generated images emerge from a random process, and the models behind them are trained on massive libraries of images, so consistent output for a term that does not refer to a famous person is rare. This fuelled speculation that Loab was real in some sense, or haunting the AI, a speculation made more compelling by the disturbing and violent imagery that accompanied the figure.

The mystery surrounding Loab has roots in suspicion about the true source of AI’s intelligence. We see this also in the TikTok death prediction craze, but there is more going on here than meets the eye.

“Journalists on the internet”

The practice of asking AI predictive questions, even AI not explicitly created for prediction, is nothing new. This current wave of interest has been fuelled by the recent leap in the quality of text-to-image software known as ‘stable diffusion’. It’s a complex process but, in summary, it involves training a neural network to progressively refine random visual noise (clumping and colourising dots into meaningful images) guided by a text prompt (the input words). The resulting images far surpass previous attempts at image generation and have proved compelling simulations of human-like aesthetic goals.
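
To make the ‘refine random noise’ idea concrete, here is a minimal sketch of diffusion-based image generation in Python. It assumes the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint; TikTok has not published what actually powers the AI Greenscreen filter, so this illustrates the general technique, not that filter’s code.

```python
# A minimal sketch of text-to-image diffusion, assuming the open-source
# Hugging Face `diffusers` library. Illustrative only: the actual
# implementation of TikTok's AI Greenscreen filter is not public.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained model (a multi-gigabyte download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # a GPU is all but required for reasonable speed

# Generation starts from pure random noise. Each inference step removes
# a little of that noise, nudging the emerging image toward whatever
# the text prompt describes.
image = pipe(
    "my death",
    num_inference_steps=50,  # more steps = more refinement passes
    guidance_scale=7.5,      # how strongly the prompt steers denoising
).images[0]

image.save("my_death.png")
```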

To state the obvious, image-generating AI is not intended to make predictions. Predictive algorithms do exist, however, and they have been used for decades in insurance risk analysis, medical prognosis, stock market trading, and many other applications. For example, actual death-prediction software has been developed, has been in use for a number of years, and has caused controversy of its own. Obtaining accurate predictions, unsurprisingly, involves inputting a large amount of medical data on the individual. In other words, it is patient-specific and dependent on domain-specific, highly organised data.
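
The contrast with image generation is stark. A toy version of that kind of predictive analytics looks something like the sketch below, using the scikit-learn library; the patient fields and figures here are invented purely for illustration, not drawn from any real system.

```python
# A toy sketch of predictive analytics: a statistical model fit to
# structured historical records. All fields and numbers are invented
# for illustration; real systems use far richer clinical data.
from sklearn.linear_model import LogisticRegression

# Historical, domain-specific records: [age, BMI, smoker (0 or 1)].
X_train = [
    [44, 24.1, 0],
    [61, 31.5, 1],
    [58, 28.0, 1],
    [35, 22.3, 0],
    [72, 27.4, 0],
    [66, 33.8, 1],
]
# The outcome observed for each historical patient (1 = adverse event).
y_train = [0, 1, 1, 0, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# "Prediction" here is a probability estimated from past patterns for a
# specific, well-described patient. Nothing of the sort happens when an
# image generator is asked to draw "my death".
new_patient = [[50, 29.2, 1]]
print(model.predict_proba(new_patient)[0][1])  # estimated risk
```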

The current misuse of image-generating AI may stem from a misunderstanding by the general public of the distinction between AI and predictive analytics. One article attempts to clear up this difference by contrasting two definitions:

In machine learning [AI], algorithms are “fed” data and are asked to process it without a predetermined set of rules and regulations. Predictive analytics is the analysis of historical data as well as existing external data to find patterns and behaviors.

The output of image-generating AI consists of visually coherent and aesthetically appealing images. This output is a ‘natural’ result of its training rather than a goal it is trying to achieve. It, therefore, does not explicitly attempt to make predictions.

Much of the ‘spookiness’ of AI output relies on the twin appearances of its uncanniness and its apparent ‘higher’ connection. It is uncanny in that its output is very close to but not quite exactly like what humans can create, in what is known as the uncanny valley. The output is just close enough to be passably authentic but disturbingly different. It may provoke feelings of disgust or the appearance of alienness. In connection with the search for the spiritual, it may suggest something alien in a spiritual sense, such as spirits or other-worldly beings.

“Happy child with hands”

The appearance of a higher connection comes from a perception that AI has powers beyond what we can attribute to it with common sense. This is the ‘wow factor’, but at a level where the experiencer will report being unsettled or even scared; many articles discussing these topics include words such as “creepy”, “terrifying”, and “scary”. We may think, “the AI is just a computer program, how could it possibly know the future?” Even if we believe that prediction is possible with computer software, actually seeing it in action confronts us with the idea that an inanimate creation has been bestowed with super-human or even supernatural qualities.

After all, predicting the future has been seen historically as a supernatural ability which people have sought to access as far back as we know. People have always used what is available to hand to develop a divination practice, such as Tarot cards, tea leaves, throwing sticks or dice, or even the entrails of a recently sacrificed animal. As such, divination can be thought of as a pre-scientific technology. Why shouldn’t AI form another such practice?

Today, fortune-telling and related practices are seeing something of a boom, thanks to a renewed fascination with astrology, Tarot and witchcraft. The users of TikTok live and breathe this confusing mystical zeitgeist, but they are not so different from everyday folk of the past. The rejection of rationalism, and its associated scientific materialism, is probably best seen as a return to a well-trodden path that has reliably provided meaning to many. For those who reject traditional religion in favour of a more individual spiritual practice (in what has been called ‘unbundling’), there is a strong desire to re-enchant technology.

When we treat AI as a fortune-teller, it tests the metaphysical claim that something seemingly un-alive and disconnected can access ‘higher’ knowledge. Let’s take Tarot as an example of established fortune-telling (though it is also increasingly used for personal insight and growth). A sceptic might ask, “how can little pieces of cardboard, randomly selected and arranged, tell me anything about the future?” For the formulations of supernatural believers to be legitimate, the cards must be connected to something beyond their cardboard, to an intelligence or spirit of some sort.

“The metaphysics of Tarot”

If one is in any way open to this kind of thinking, it is not such a leap to suggest that AI is cast from the same mould. With Tarot, we select from a premade set of evocative cards with distinct (though interpretive) meanings. With an AI image generator, we select from the AI’s vast corpus of training images, now integrated into its data modelling structure, combined with our question phrase, and interpret the results. When this thinking is combined with a striking, uncanny and surprising presentation, it can be enough to provoke more than simple curiosity.

One potential explanation is that the intelligence is not outside the AI but that the AI is, in fact, more intelligent than it appears. Commentary on AI often centres on the question of just how intelligent it is and how we can really know whether it has achieved self-awareness. For example, just a few months ago, former Google employee Blake Lemoine claimed that a Google AI chatbot had become sentient. Lemoine was fired from Google over the controversial claim for violating disclosure policies, and Google denies that the AI is, in fact, self-aware. Are these AI systems more advanced than we realise? This is undoubtedly the intuition at work here. If we, as intelligent beings, have access to the supernatural, what is to stop intelligent machines from accessing it also?

“Conscious machines discover God”

The Sun tabloid newspaper concludes one of their articles by dismissing the legitimacy of prediction, in what is intended as a dose of common sense:

Even though AI can create some disturbing images, there’s no need to worry about it.

The AI is basing its creations on information humans have given it and is in no way actually predicting the future no matter how many TikTok accounts claim it is.

Are we reassured? Or, to paraphrase the old UFO believer’s slogan, do we want to believe? Where we can read synchronicity, coincidence and the uncanny into the images AI produces, some of us will always be tempted to ask it questions. This might be just to see what it does, but the impetus to do this comes from a desire to see the world as ordered in such a way that there are no real coincidences. In other words, it implies an expansive teleology and a readiness to experiment with what is beyond our understanding in search of answers from beyond ourselves.

The images used in this article were generated by the author using Midjourney, one of the image-generating AI products under discussion.

Simon Kenny is an author, technologist and educator whose work combines probing questions with technical thinking. Currently exploring mysticism, AI and Tarot.