Here’s the dark side to ChatGPT: billions of fake social network accounts in the hands of bad actors

Enrique Dans
Mar 25, 2023

IMAGE: A drawing of a hand holding a smartphone and a bot replying in a chat (Mohamed Hassan, Pixabay)

The BBC has raised a question that has been a concern for some time: large language models (LLMs) like ChatGPT are being used by bot factories to generate misinformation cheaply and easily, which is then spread across social networks through fake accounts.

I’ve already experienced this: every time I post to my Twitter account, a couple of bots reply to it with some scam. Every. Damn. Time. In my case, that means I have to waste time reporting those accounts, explaining that they’re automated spam bots, and blocking them… which serves no purpose. Only by tracking the hundreds of accounts I’ve been blocking for months could an algorithm be trained to recognize them and block them at the root. But, as so often with Twitter, nothing is being done about it.
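The idea hinted at here, training a recognizer on a user's history of blocked accounts, can be sketched as a toy classifier. Everything below (the feature names, the numeric ranges, the synthetic data) is invented purely for illustration, not a description of any real system:

```python
# Illustrative sketch only: a logistic-regression classifier trained on
# hypothetical account features (account age in days, follower count,
# replies per day). All numbers here are made up for the example.
import math
import random

random.seed(0)

def make_account(is_bot):
    # Hypothetical pattern: spam bots are young, unfollowed, and hyperactive.
    if is_bot:
        return [random.uniform(0, 30), random.uniform(0, 20), random.uniform(50, 200)], 1
    return [random.uniform(100, 3000), random.uniform(10, 5000), random.uniform(0, 10)], 0

data = [make_account(i % 2 == 0) for i in range(400)]

# Normalize each feature to [0, 1] so gradient descent behaves.
maxima = [max(x[j] for x, _ in data) for j in range(3)]
data = [([v / m for v, m in zip(x, maxima)], y) for x, y in data]

# Plain logistic regression trained by stochastic gradient descent,
# no external libraries needed.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def looks_like_bot(features):
    # Apply the same normalization, then threshold the predicted probability.
    x = [v / m for v, m in zip(features, maxima)]
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b))) > 0.5

# A five-day-old account replying 120 times a day is flagged;
# an old account with normal activity is not.
print(looks_like_bot([5, 2, 120]))
print(looks_like_bot([2000, 800, 2]))
```

A real version would of course need real labels (the blocked-account history the article mentions) and far richer features, but the shape of the approach is the same: label, extract features, fit, flag.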

It will come as no surprise to learn that there is apparently no dedicated team at Twitter to deal with this. Despite claiming to delete more than one million spam accounts a day and having created a specific way to report bots, Twitter seems to be doing nothing to stop the mass production of fake accounts that spew thousands of scam messages across the internet every day. The fact that Elon Musk himself claims that 90% of comments on Twitter are made by bots is hardly a ringing endorsement that the root and…

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)