How AI Fuels the Social Media Bubble and the Decline of Human Self-Awareness

David G. R. - Reddgr
Talking to Chatbots
9 min read · Mar 11, 2024

Today, I’m sharing some thoughts I posted on a social media platform where I frequently participate and which I’ve discussed in a previous article: LinkedIn Lunatics. The Reddit community r/LinkedInLunatics is known for sharing and commenting on content from LinkedIn that its contributors frequently describe as “cringe,” focusing on posts that are excessively self-promotional or boastful.

Front page of the r/LinkedInLunatics community on Reddit.com, displaying a user post with the title “Dude puts himself as investor for every stock he owns.” The post shows a screenshot from a LinkedIn profile with the “Experience” section highlighted, indicating the position of “Investor at Alphabet Inc.” from “Jan 2021 — Present • 2 yrs 3 mos.” The subreddit has a dark header with text “A subreddit for insufferable LinkedIn content” and a joined button indicating membership status. To the right, there’s a community description critiquing LinkedIn posts and a sidebar with community stats showing 342k members and 533 anonymous profile viewers, ranking in the top 1% by size. [Alt text by ALT Text Artist GPT]

The earlier article I wrote speculated about the disruption generative AI poses to SEO (Search Engine Optimization) and what I’ve termed the Influencer Bubble or, using a more generic term, the Social Media Bubble. The reflection I’m sharing today is simply a reply to a thread of particularly thought-provoking and unconventional comments I read under one of these r/LinkedInLunatics posts:

I just located her post (I don’t know why…). There are 11 comments and all with stuff like: “Wow, thank you for sharing this”; “Well said, I hope you’re doing OK”; “You make some great points, thank you for posting this”… WTF are people on these days? It’s a cesspit of absolute cunts on there (myself excluded) [redditor redrabbit1984, posted on r/LinkedInLunatics]

Ugh I wish there was more negativity on LinkedIn so people can stop posting embarrassing shit [redditor Crazy_Sir_6583, posted on r/LinkedInLunatics]

They’re on… AI [redditor responsible_blue, posted on r/LinkedInLunatics]

I thought the counterintuitive and undoubtedly controversial yearning for more ‘negativity’ on social media was a good point, one that links precisely to a scarily overlooked danger of AI: how it probabilistically filters out any trace of opinion for the sake of political correctness and poorly understood ‘ethics’. In other words, what we see or do not see on our screens is increasingly determined and restrained by what an algorithm deems ‘positive’ or ‘negative’, ‘good’ or ‘bad’, ‘truthful’ or ‘fake’… No one could describe the human cognitive biases we now infuse into our algorithms better than Marlon Brando did, decades before LLMs existed, in his famous interview response:

That’s a part of the sickness in America, that you have to think in terms of who wins, who loses, who’s good, who’s bad, who’s best, who’s worst… I don’t like to think that way. Everybody has their own value in different ways, and I don’t like to think who’s the best at this. I mean, what’s the point of it? [Marlon Brando, questioned about being ‘the greatest actor ever’]

A lot has been written recently about algorithmic bias and the controversy surrounding the Gemini image generation feature (here’s a brief overview written by Gemini itself). But I was not specifically thinking about generative AI, AI-generated images, or cultural biases and counter-biases. The overlooked problem I mentioned is much older than any publicly available generative AI tool: it’s in social media.

AI essentially fuels social media, even if we might not notice it. The content each of us is shown or not shown when opening any of our apps or social media sites is determined mostly by an algorithm that can be labeled as ‘AI.’ Besides, the LLMs that people (and, more specifically, organizations) frequently use for posting online are designed to generate mostly bland, soulless, but perfectly written and polite text, devoid of anything that, according to a probabilistic tokenization algorithm, would be deemed to carry an opinion or a negative connotation, or to be ‘out of the box.’ In other words, embarrassing and completely expendable pieces of information.
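To make the idea concrete, here is a deliberately simplified sketch, in Python, of what such a ‘positivity filter’ could look like in spirit. It is not any platform’s actual code: the word lists, threshold, and function names are invented purely for illustration, and real systems rely on large probabilistic models rather than keyword counts.

```python
# A toy sketch (not any platform's real ranking code): a feed "algorithm" that
# scores posts with a naive sentiment heuristic and silently drops anything
# below a positivity threshold. All names and word lists are hypothetical.

POSITIVE_WORDS = {"great", "thank", "inspiring", "blessed", "proud"}
NEGATIVE_WORDS = {"embarrassing", "cringe", "bad", "failure", "scam"}

def sentiment_score(post: str) -> int:
    """Count positive words minus negative words: a crude stand-in for the
    probabilistic classifiers real platforms use."""
    words = post.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def curate_feed(posts: list[str], threshold: int = 0) -> list[str]:
    """Keep only the posts the 'algorithm' deems positive enough,
    ranked from most to least 'positive'."""
    visible = [p for p in posts if sentiment_score(p) >= threshold]
    return sorted(visible, key=sentiment_score, reverse=True)

if __name__ == "__main__":
    feed = [
        "So proud and blessed to announce my new role, thank you all!",
        "Honestly, this kind of post is embarrassing and borderline cringe.",
        "Well said, you make some great points, thank you for posting this.",
    ]
    for post in curate_feed(feed):
        print(post)
    # The critical, 'negative' post never reaches the screen.
```

The point of the toy example is not the scoring method but the asymmetry it creates: anything the scorer labels ‘negative’ simply disappears from the feed, and the user never knows it existed.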

In my opinion, those who have been using the Internet or social media ‘above average’ (such as many people on Reddit, compared to the average LinkedIn user) can identify this kind of ‘content’ as embarrassing or expendable. However, the ‘content creation’ bubble, fueled first by the major social media platforms that thrived on the wild and uncontrolled growth of the digital advertising industry, and now by the availability of generative AI tools, only incentivizes this ‘content creation’ thing and pumps the bubble even further. Barriers to entry (to publishing platforms and to creating ‘content’) keep getting lower, so everybody (especially those who need or seek to make money by having a presence on the Internet) is incentivized to create more and more ‘content’, regardless of its quality, originality, or relevance.

We easily see that urge for massive generation of ‘content’ and eroded self-awareness in many self-promotional or boastful LinkedIn posts, but the most concerning sign is that the r/LinkedInLunatics community also fuels the bubble. Many people genuinely want to be featured there, as Reddit’s high domain authority and search engine presence make popular subreddits a perfect tool for self-promotion. It doesn’t matter if mostly anonymous profiles make fun of you or post negative comments about you, because opinions no longer matter in the realm of AI algorithms. Specifically, the algorithms ‘shield’ you from anything deemed negative that doesn’t match what you are likely to ‘consume’, keeping you hooked to the ad-funded application you spend your time on.

This proves there is no longer a sense of embarrassment in posting expendable or ‘shitty content’ because most Internet users somehow understand we all are swimming in a massive sea of digital content fueled by algorithms that decide which kind each of us is willing to consume. And, to keep us ‘swimming’, the businesses that own the algorithms are very effective in filtering out anything that would awaken any sense of embarrassment, regret, or self-awareness in the ‘commodity’ they profit from: people.

What I described is neither a ‘positive’ nor a ‘negative’ opinion about social media or AI. It does not express a sentiment or, as any Large Language Model would surely label my text: ‘frustration’. It’s just business analysis that entertains me, helps me delve into the Internet business, understand its dynamics, and potentially benefit from it. Or maybe it’s a sort of self-promotion, who knows… Someone on Reddit replied to my comment poking fun at how it looked like a LinkedIn post rather than a typical Reddit comment. That was a good point too, and might be true. I’m not embarrassed by that, maybe because I’ve accepted there’s no point in fighting algorithms, as the meme in one of my latest articles stated:

A meme featuring actor Giancarlo Esposito dressed in a business suit with a solemn expression. The overlaid text at the top reads, “My biases, mistakes and hallucinations are a product of free will,” and at the bottom, it states, “We are not the same.” Originally published on WildVision Arena and the Battle of Multimodal AI: We Are Not the Same | Talking to Chatbots

Social Media Flat-Earthers and the Dead Internet Theory

I admit the title of this article is not ‘SEO-optimized’ and it’s not specifically designed to attract clicks and engagement: it includes language and terms that algorithms would deem ‘negative’ and non-constructive. I also didn’t bother to write a final note asking readers to subscribe to my newsletter or visit my website. Most Internet or social media users would be unlikely to click on an article link saying stuff about the ‘social media bubble’ and ‘the decline of human self-awareness’, compared to other links offering tips and expert advice on how to make money online, how to create engaging Instagram posts, or how to write how-to articles. That’s OK because I’m not in the business of social media ‘content’ and advertising, even though I respect it and take an evident interest in it.

To ‘infuse’ some constructiveness and positivity into my article, I just wanted to share something else I wrote on another social media platform, one that I don’t particularly dislike but spend less time on (less than on Reddit, LinkedIn, or Medium). In my humble opinion, a mistake some people make when deciding their approach and stance on social media is adopting a victimized stance: one that focuses on criticizing others (users or corporations) and sets oneself apart from whatever they deem not aligned with their purposes or points of view, as in the “It’s a cesspit of absolute cunts on there (myself excluded)” comment quoted at the beginning of this article. In the most extreme cases, this victim stance leads to the proliferation of conspiracy theories, such as one I just learned about thanks to Meta’s algorithms (see the Threads post below), which even has a Wikipedia page: the Dead Internet Theory.

A screenshot of the DuckDuckGo search engine displaying results for the query “dead internet theory”. The screen shows a mix of news articles, a Wikipedia link, and a snippet from Wikipedia with a brief description of the Dead Internet theory. The Wikipedia snippet defines the theory as a belief that the internet consists mainly of bot activity and automated content, which marginalizes human activity. [Alt text by ALT Text Artist GPT]

The post that Threads suggested I read was one from nixCraft, a website dedicated to providing tutorials, tips, and articles on Linux, UNIX, and open-source software:

The Dead Internet theory feels closer to reality with each passing day. [nixCraft, published on Threads]

To which I replied:

Conspiracy theory: my ‘content’ is not seen because big corporations use AI to create massive amounts of ‘content’ and ultimately manipulate consumers.

Reality: What people call content is just ephemeral digital information with no intrinsic value, so big corporations use AI to efficiently monetize the massive amounts of it by generating advertising revenue and ultimately making money for real businesses.

Note: got here because I use Threads sporadically, so META’s algorithm sent me a notification. [Threads user david_reddgr, published on Threads]

A screenshot of the activity tab in the Threads application showing various user interactions. The most recent activity, from the user “nixcraft,” is a notification about starting a thread 51 minutes ago, mentioning the Dead Internet theory. All other notifications, including new followers, likes, and comments, are from 3 weeks ago or older. The comments refer to movie sets and the weather in Galicia, while another user provides information about an exhibition on data science and ‘machine learning’ related to the metaverse. [Alt text by ALT Text Artist GPT]

I’m happy with Meta’s algorithm sending a notification to my mobile phone and suggesting I read and engage in this discussion. Getting me to use their app, read things I like to read, and post things I like to write about, so I’m exposed to the ads that Meta’s paying customers place on their platform, is just their business as a company; it’s not wrong. Social media is a business and AI is a tool, and both are ultimately used for making money. This reminded me of another Threads post that Meta’s algorithm recommended to me with surprising accuracy, one by a Meta-owned account named ‘Life at Meta’ (their fine euphemistic title for their HR department, I guess):

If you could give yourself one piece of advice earlier in your career, what would it be? [‘Life at Meta’, posted on Threads]

To which I replied:

“It’s just money. It’s made up. Pieces of paper with pictures on it so we don’t have to kill each other just to get something to eat. It’s not wrong. And it’s certainly no different today than it’s ever been.” [Margin Call, directed by J.C. Chandor, 2011]

Nothing of what I expressed in this article is a concerning sign or a criticizable aspect intrinsic to AI, algorithms, or the social media business. What’s concerning is what I stated in the article title: AI and social media are mutually reinforcing bubbles and, as people increasingly become the product that algorithm owners exploit, it’s key to maintain self-awareness and focus on the real dangers of AI, in my humble opinion.

Featured Image: Operation Dead Internet, by ChatGPT

Most of what I write about on the Internet is related to generative AI, so I couldn’t pass up the chance to use some AI-generated ‘content’ and link it to one of my previous articles:

The following image, created with Graphic Tale Maker, the GPT I configured to have fun creating stories to share on social media or just to entertain myself, makes me think the prophecy in that article might be getting closer:

In the glow of screens and flicker of status LEDs, the bots mastermind the web’s quiet coup — unseen, unheard, but not unnoticed by the baffled janitor peeking in. Welcome to the covert commencement of “Operation Dead Internet.” [Image and caption created with Graphic Tale Maker, OpenAI. (2024). Large language model]
Screenshot of the ChatGPT user interface with multiple conversation windows. The main window is titled “Graphic Tale Maker,” where the user has typed a request for “funny introductory pics (16:9 format) to a story about ‘The Dead Internet theory’,” followed by a Wikipedia definition of the dead Internet theory. The interface shows various other GPT applications listed on the left, such as “ChatGPT,” “R Code Streamliner,” and others. In the bottom left corner, there’s a section labeled “Explore GPTs” with other options like “Bot-Infused Internet Hijinks” and “Spread Positivity on Platform.” The chat window and sidebar are set against a dark-themed background. [Alt text by ALT Text Artist GPT]
Screenshot displaying a conversation with “Graphic Tale Maker” on ChatGPT. On the left, a sidebar lists various creative prompts like “Intellectual Meme Challenge,” “Social Media Lunacy,” and “AI vs. IQ Tests.” In the chat section, the user has expressed appreciation for a previous illustration and requests a “brief storytelling caption.” The bottom part of the chat displays a vibrant illustration of a room filled with robots around a central table with screens, schematics, and a large sign that says “OPERATION DEAD INTERNET,” suggesting a humorous take on the infiltration of internet culture into society. [Alt text by ALT Text Artist GPT]
