How AI Fuels the Social Media Bubble and the Decline of Human Self-Awareness
Today, I’m sharing some thoughts I posted on a social media community where I frequently participate and which I’ve discussed in a previous article: LinkedIn Lunatics. The Reddit community r/LinkedInLunatics is known for sharing and commenting on content from LinkedIn that its contributors frequently describe as “cringe,” focusing on posts that are excessively self-promotional or boastful.
The earlier article speculated about the disruption generative AI poses to SEO (Search Engine Optimization) and to what I’ve termed the Influencer Bubble or, more generically, the Social Media Bubble. The reflection I’m sharing today is simply a reply to a thread of particularly thought-provoking and unconventional comments I read on one of these r/LinkedInLunatics posts:
I just located her post (I don’t know why…). There are 11 comments and all with stuff like: “Wow, thank you for sharing this”; “Well said, I hope you’re doing OK”; “You make some great points, thank you for posting this”… WTF are people on these days? It’s a cesspit of absolute cunts on there (myself excluded) [redditor redrabbit1984, posted on r/LinkedInLunatics]
Ugh I wish there was more negativity on LinkedIn so people can stop posting embarrassing shit [redditor Crazy_Sir_6583, posted on r/LinkedInLunatics]
They’re on… AI [redditor responsible_blue, posted on r/LinkedInLunatics]
I thought the counterintuitive and undoubtedly controversial yearning for more ‘negativity’ on social media was a good point, one that links precisely to a scarily overlooked danger of AI: how it probabilistically filters out any trace of opinion for the sake of political correctness and poorly understood ‘ethics’. In other words, what we do or do not see on our screens is increasingly determined and constrained by what an algorithm deems ‘positive’ or ‘negative’, ‘good’ or ‘bad’, ‘truthful’ or ‘fake’… Perhaps no one described the human cognitive biases we infuse into our algorithms better than Marlon Brando did, decades before LLMs existed, in his famous interview response:
That’s a part of the sickness in America, that you have to think in terms of who wins, who loses, who’s good, who’s bad, who’s best, who’s worst… I don’t like to think that way. Everybody has their own value in different ways, and I don’t like to think who’s the best at this. I mean, what’s the point of it? [Marlon Brando, questioned about being ‘the greatest actor ever’]
A lot has been written recently about algorithmic bias and the controversy surrounding the Gemini image generation feature (here’s a brief overview written by Gemini itself). But I was not specifically thinking about generative AI, AI-generated images, or cultural biases and counter-biases. The overlooked problem I mentioned is much older than any publicly available generative AI tool: it’s in social media.
AI essentially fuels social media, even if we don’t notice it. The content each of us is shown, or not shown, when opening any of our apps or social media sites is determined mostly by an algorithm that can be labeled ‘AI.’ Moreover, the LLMs that people (and, more specifically, organizations) frequently use for posting online are designed to generate mostly bland, soulless, yet perfectly written and polite text, stripped of anything that a probabilistic tokenization algorithm would deem opinionated, negatively connoted, or simply ‘out of the box.’ In other words, embarrassing and completely expendable pieces of information.
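To make the filtering idea concrete, here is a deliberately toy sketch of the kind of mechanism I’m describing. This is not any platform’s real code: the lexicon, the scoring function, and the feed examples are all my own assumptions, and real systems use learned models rather than word lists. But the principle is the same: a score decides what is ‘negative’, and ‘negative’ never reaches your screen.

```python
# Toy illustration (hypothetical, not any platform's actual algorithm):
# a feed filter that drops anything a crude sentiment score labels 'negative'.

# Assumed mini-lexicon of 'negative' words; real systems learn this from data.
NEGATIVE_WORDS = {"embarrassing", "cringe", "shit", "bad", "worst"}

def sentiment_score(post: str) -> int:
    """Return a score that goes down with each negative-lexicon hit."""
    return -sum(word in NEGATIVE_WORDS for word in post.lower().split())

def filter_feed(posts: list[str]) -> list[str]:
    """Keep only the posts the score deems non-negative."""
    return [p for p in posts if sentiment_score(p) >= 0]

feed = [
    "Wow, thank you for sharing this",
    "I wish people would stop posting embarrassing shit",
    "You make some great points",
]
print(filter_feed(feed))
```

Run it and the one ‘negative’ post silently disappears, while the two polite, expendable ones survive: a two-line caricature of the mechanism, but the asymmetry it produces is the one I’m pointing at.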
In my opinion, those who have used the Internet or social media more heavily than average (such as many people on Reddit, compared to the average LinkedIn user) can identify this kind of ‘content’ as embarrassing or expendable. However, the ‘content creation’ bubble, fueled first by the major social media platforms that thrived on the wild, uncontrolled growth of the digital advertising industry, and now by the availability of generative AI tools, only incentivizes this ‘content creation’ thing and inflates the bubble even further. Barriers to entry, both to publishing platforms and to creating ‘content’, keep getting lower, so everybody (especially those who need or seek to make money through an Internet presence) is incentivized to create more and more ‘content’, regardless of its quality, originality, or relevance.
We easily see that urge for massive ‘content’ generation and eroded self-awareness in many self-promotional or boastful LinkedIn posts, but the most concerning sign is that the r/LinkedInLunatics community also fuels the bubble. Many people genuinely want to be featured there, as Reddit’s high domain authority and search engine presence make popular subreddits a perfect tool for self-promotion. It doesn’t matter if mostly anonymous profiles make fun of you or post negative comments about you, because opinions no longer matter in the realm of AI algorithms. Specifically, algorithms ‘shield’ you from anything deemed negative that doesn’t match what you are most likely to ‘consume’, keeping you hooked on the ad-funded application you spend your time on.
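The incentive behind that shielding can also be sketched in a few lines. Again, this is a hypothetical caricature of engagement-first ranking, with made-up post names and numbers of my own invention: the point is simply that when the ranking objective contains only predicted engagement, a quality term can exist in the data and still never influence what surfaces.

```python
# Hypothetical sketch of engagement-first feed ranking.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g., expected seconds of attention
    quality: float               # present in the data, ignored by the ranker

def rank_feed(posts: list[Post]) -> list[Post]:
    """Sort purely by predicted engagement, descending; quality plays no role."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("Thoughtful long-form analysis", predicted_engagement=2.0, quality=9.0),
    Post("Boastful self-promotional post", predicted_engagement=8.0, quality=1.0),
]
print(rank_feed(posts)[0].text)  # the low-quality, high-engagement post wins
```

Under this objective, the boastful post tops the feed every time, which is precisely why being mocked on a high-traffic subreddit can still be a rational self-promotion strategy.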
This suggests there is no longer any sense of embarrassment in posting expendable or ‘shitty content’, because most Internet users somehow understand we are all swimming in a massive sea of digital content steered by algorithms that decide which kind each of us is willing to consume. And, to keep us ‘swimming’, the businesses that own the algorithms are very effective at filtering out anything that would awaken a sense of embarrassment, regret, or self-awareness in the ‘commodity’ they profit from: people.
What I described is neither a ‘positive’ nor a ‘negative’ opinion about social media or AI. It does not express a sentiment or, as any Large Language Model would surely label my text, ‘frustration’. It’s just business analysis that entertains me, helps me delve into the Internet business, understand its dynamics, and potentially benefit from it. Or maybe it’s a sort of self-promotion, who knows… Someone on Reddit replied to my comment poking fun at how it looked more like a LinkedIn post than a typical Reddit comment. That was a good point too, and it might be true. I’m not embarrassed by that, maybe because I’ve accepted there’s no point in fighting algorithms, as the meme in one of my latest articles stated:
Social Media Flat-Earthers and the Dead Internet Theory
I admit the title of this article is not ‘SEO-optimized’, nor is it specifically designed to attract clicks and engagement: it includes language and terms that algorithms would deem ‘negative’ and non-constructive. I also didn’t bother to write a final note soliciting readers to subscribe to my newsletter or visit my website. Most Internet or social media users would be unlikely to click on an article link mentioning the ‘social media bubble’ and ‘the decline of human self-awareness’, compared to links offering tips and expert advice on how to make money online, how to create engaging Instagram posts, or how to write how-to articles. That’s OK, because I’m not in the business of social media ‘content’ and advertising, even though I respect it and take an evident interest in it.
To ‘infuse’ some constructiveness and positivity into my article, I wanted to share something else I wrote on another social media platform, one I don’t particularly dislike but spend less time on (less than on Reddit, LinkedIn, or Medium). In my humble opinion, a mistake some people make when deciding their approach to social media is adopting a victimized stance: one that focuses on criticizing others (users or corporations) and sets oneself apart from whatever they deem misaligned with their purposes or points of view, like the “It’s a cesspit of absolute cunts on there (myself excluded)” comment quoted at the beginning of this article. In the most extreme cases, this victim stance leads to the proliferation of conspiracy theories, such as one I just learned about thanks to Meta’s algorithms (see the Threads post below), one that even has a Wikipedia page: the Dead Internet Theory.
The post that Threads suggested I read was one from nixCraft, a website dedicated to providing tutorials, tips, and articles on Linux, UNIX, and open-source software:
The Dead Internet theory feels closer to reality with each passing day. [nixCraft, published on Threads]
To which I replied:
Conspiracy theory: my ‘content’ is not seen because big corporations use AI to create massive amounts of ‘content’ and ultimately manipulate consumers.
Reality: What people call content is just ephemeral digital information with no intrinsic value, so big corporations use AI to efficiently monetize the massive amounts of it by generating advertising revenue and ultimately making money for real businesses.
Note: got here because I use Threads sporadically, so META’s algorithm sent me a notification. [Threads user david_reddgr, published on Threads]
I’m happy with Meta’s algorithm sending a notification to my phone and suggesting I read and engage in this discussion. Getting me to use their app, read things I like to read, and post things I like to write about, so I’m exposed to the ads Meta’s paying customers place on the platform, is just their business as a company; it’s not wrong. Social media is a business and AI is a tool, and both are ultimately used for making money. This reminded me of another Threads post that Meta’s algorithm recommended to me with surprising accuracy, one by a Meta-owned account named ‘Life at Meta’ (their fine euphemism for the HR department, I guess):
If you could give yourself one piece of advice earlier in your career, what would it be? [‘Life at Meta’, posted on Threads]
To which I replied:
“It’s just money. It’s made up. Pieces of paper with pictures on it so we don’t have to kill each other just to get something to eat. It’s not wrong. And it’s certainly no different today than it’s ever been.” [Margin Call, directed by J.C. Chandor, 2011]
Nothing I expressed in this article is a concerning sign or a criticizable aspect intrinsic to AI, algorithms, or the social media business. What’s concerning is what I stated in the article title: AI and social media form mutually reinforcing bubbles, and, as people increasingly become the product that algorithm owners exploit, it’s key to maintain self-awareness and focus on the real dangers of AI, in my humble opinion.
Featured Image: Operation Dead Internet, by ChatGPT
Most of what I write about on the Internet relates to generative AI, so I couldn’t pass up the chance to use some AI-generated ‘content’ and link it to one of my previous articles:
The following image, created with Graphic Tale Maker, a GPT I configured to have fun creating stories to share on social media or simply to entertain myself, makes me think the prophecy in that article might be getting closer: