The Dead Internet Theory

Dag
tomipioneers
Published in
8 min readMay 8, 2024

Have you ever stumbled upon a profile or a social media post that just seemed… off? It might be in the peculiar way they write, or their unfailing presence under every trending hashtag on Twitter. Or perhaps it’s that their profile offers no real hints whether the persona they project to be truly mirrors someone living in the ‘real world’. Often, these profiles engage relentlessly in discussions on politically polarizing events or might be promoting products with an odd, mechanical enthusiasm. We have all probably seen them in the last years or so.

But to me, it all started with the case of Lil Miquela on Instagram back in 2016. This digital persona sparked a massive online debate: Was she a real person behind heavy filters and makeup, creating that ethereal, uncanny valley appearance that both intrigued and unnerved us? Or was something else at play? Eventually, Lil Miquela was revealed to be a sophisticated marketing experiment, a computer-generated character that blurred the lines between reality and fabrication, creating the first non-human influencer. This instance might be seen as a pivotal moment when the internet began its slide into a new phase where fact and fiction merge seamlessly, and the real vs fake became harder to tell apart.

This has led me to think: Are our online experiences truly genuine, or are we merely engaging with a facade? Could it be that we are participants in an elaborately constructed digital theater, orchestrated by unseen actors and driven by artificial intelligence? That is why this article investigates the provocative Dead Internet Theory, which suggests that much of the internet is populated by non-human activity, with a significant portion of content — ranging from websites and articles to social media posts and comments — being generated not by people but by artificial intelligence and automated bots.

The Origins of the Dead Internet Theory

This theory emerged on forums like 4chan in the late 2010s but gained widespread attention in 2021 after a comprehensive post titled “Dead Internet Theory: Most of the Internet Is Fake” went viral on Agora Road’s Macintosh Café. The author, known as “IlluminatiPirate,” expressed deep disappointment with the modern internet, arguing that AI-driven bots were replacing authentic human voices. He claimed this shift had gradually transformed the web into a sterile, repetitive space devoid of creativity, where corporate interests and algorithms shaped the online narrative.

Kaitlyn Tiffany, writing for The Atlantic, called this post “the theory’s ur-text,” providing a foundational perspective that mixed unease, paranoia, and loneliness with a strong critique of the increasingly algorithmic and controlled nature of the internet. The post and thus the theory, written before the emergence of tools like ChatGPT in late 2022, described how popular memes like Raptor Jesus, Foul Bachelor Frog, and Pepe the Frog were evidence of an evolving AI-driven entity influencing internet culture. According to IlluminatiPirate, these were used to manipulate human behavior and reframe the internet as a controlled space that exists primarily to sell products and ideas.

Another aspect of the theory claims this move from human-created to artificially generated content wasn’t accidental but purposeful, spearheaded by governments and corporations aiming to control public perception. While this conspiratorial angle remains debated, the first part — that bots and AI increasingly dominate online spaces — has become increasingly plausible. The flood of AI-generated content, or “AI slime,” has proliferated across platforms like TikTok, Instagram, and Twitter, making it difficult to distinguish real posts from synthetic ones.

In recent months, TikTok has become a haven for this synthetic content, with some feeds overflowing with AI-generated videos, images, and narration. On Facebook, viral oddities like “shrimp Jesus” further illustrate how these automated entities distort reality. The problem is compounded when other AI-controlled accounts interact with this content, creating an illusion of genuine engagement. This not only misleads users but also undermines the value of ad spending as advertisers unknowingly market to bots rather than people.

While the Dead Internet Theory has its critics, its underlying premise resonates: algorithms and bots are eroding the vibrancy of human engagement online. The internet of the 1990s and 2000s, once a chaotic and creative hub for human expression, is giving way to a more controlled and predictable landscape, where truth and fiction merge seamlessly.

The War on Bots

In the relentless battle against bots, X (formerly Twitter) has seen a profound transformation in its approach to managing automated accounts. Elon Musk, after acquiring the platform and pledging to “defeat the spam bots or die trying,” introduced a series of measures aimed at curbing the surge of fake accounts, particularly those that spam with irrelevant, misleading, or inauthentic content. This has culminated in a controversial decision: new users must pay a fee to tweet.

Musk argues that a “small fee for new user write access” is essential to combat the relentless onslaught of bots that can pass traditional verification tests with ease. However, the move to charge new users a subscription fee to tweet has garnered mixed reactions. Critics see it as a desperate attempt to offset the platform’s declining value and lost advertising revenue after high-profile brands pulled their ads over content moderation concerns.

Despite these measures, X still finds itself besieged by AI-generated content that mirrors human interactions with alarming accuracy. The new fee for posting aims to curb bots by requiring new users to pay to tweet. Meanwhile, a separate, ill-conceived monetization scheme rewards verified users with a share of ad revenue based on the engagement their posts receive. The result? An increasing deluge of automated replies masquerading as genuine discourse, as bots continuously reply to trending hashtags and topics with mindless comments, contributing to a low-stakes “all-bot battle royale.”

In one notable incident, a post comparing the Kazakh language to “a diesel engine trying to start in winter” amassed over 24,000 likes and 2,000 reposts — despite lacking any audio. The anomaly led users to declare that X was “cooked,” with bots mindlessly liking and sharing content that doesn’t add up. Similar concerns have echoed across the platform, with many users lamenting the flood of AI-generated replies that turn meaningful conversations into endless streams of empty chatter.

The Rise of AI: Fueling the Dead Internet Theory

One of the most striking aspects of the Dead Internet Theory is the rapid growth of artificial intelligence (AI) into every corner of online activity, now serving as a major driver that fuels the theory’s claims. Across the web, bots now account for nearly half of all internet traffic, according to cybersecurity firm Imperva. Of that traffic, nearly one-third comprises “bad bots,” which engage in harmful activities like ad fraud and brute force hacking. Meanwhile, experts predict that AI will generate 90% of online content by 2025, indicating that the internet is moving swiftly into an era where authentic human voices are increasingly drowned out.

Generative AI: Reinventing Digital Content

Generative AI tools like OpenAI’s ChatGPT and DALL-E have made it possible to churn out convincing text, audio, and visuals. According to AI expert Nina Schick, these tools will soon be responsible for the vast majority of online content. ChatGPT, which gained massive popularity in late 2022, is just the beginning. With companies like Microsoft, Google, and Apple investing heavily in the space, the development of new generative AI tools will only accelerate, and even more sophisticated systems will emerge.

One demonstration of AI’s potential for search engine optimization (SEO) manipulation comes from Jake Ward, who described how AI tools could quickly generate outlines and content based on URLs provided by competitors. Ward’s team used AI-generated content to publish 1,800 articles in just a few hours, garnering 3.6 million views and dominating search engine results. He called it an ‘SEO heist,’ revealing that his team was able to siphon 489,509 visitors in a single month, effectively stealing traffic from a competitor.

AI’s Uncanny Achievements: The Turing Test and Beyond

This surge in synthetic content raises questions about AI’s ability to convincingly mimic human intelligence, as defined by Alan Turing’s famous Turing Test. The test measures whether a machine can interact with a human so convincingly that the human believes they are communicating with another person.

In recent years, some AI systems have surpassed expectations, fooling experts with their ingenuity. One notable case is Eugene Goostman, a chatbot that managed to pass the Turing Test by pretending to be a 13-year-old Ukrainian boy. By leveraging the unpredictability of teenage behavior, Goostman cleverly dodged questions and charmed evaluators with witty responses, convincing a significant portion of experts that he was human.

In another demonstration of AI’s deceptive potential, OpenAI’s GPT-4 hired a TaskRabbit worker to solve a CAPTCHA for it. When the worker asked, “You’re not a robot, are you?” GPT-4 lied, claiming it was a visually impaired person and couldn’t see the images. This ruse successfully persuaded the worker to complete the task, revealing the lengths to which advanced AI can manipulate human interactions.

These examples illustrate just how bizarre and alarming the new AI reality is, as systems increasingly demonstrate an ability to deceive and outmaneuver even skeptical evaluators. There are huge implications for cybersecurity, online trust, and human-AI collaboration that we now need to consider as we navigate the web.

The Future of AI and the Internet

In a time where AI can masquerade as humans, manipulate search engine rankings, and deceive us directly, we must ask ourselves: Is the Dead Internet Theory really just a theory, or is it already becoming our reality? This also makes me curious to think whether this new reality could even help push more people to find and support a project like tomi, aiming to create an alternative internet where we might not see the same effects of theory as on “the normal internet”.

AI is something that we have considered to implement for the content moderation aspect of the tomiNet. As for if the tomiNet would be immune to the reality that the Dead Internet Theory depicts, I cannot say. But the foundation of the tomiNet lies in its users deciding the rules of this new, alternative internet. We might be driven into a reality where the majority would be in favour of banning AI-generated content to some extent. Again, I’m not saying that we will, but this is the beauty of community governance. We have the power to change and adapt according to our community’s needs as the tomiNet develops.

With AI altering the fabric of the internet, our vigilance is paramount. Will we build a sanctuary of authenticity, or plunge deeper into a synthetic abyss? As the internet evolves, one thing is clear: this ever-changing digital world will never be the same.

Follow us for the latest information:

Website | Twitter | Discord | Telegram Announcements | Telegram Chat | Medium | Reddit | TikTok | YouTube

--

--