AI Text Generators: When Misinformation Gets a Sense of Humor

Karthik Karunakaran, Ph.D.
3 min read · Jun 30, 2023

Move over, comedians! There’s a new kid in town, and it’s not human. AI text generators like ChatGPT, Bing AI chatbot, and Google Bard have been making waves with their ability to churn out impressive pieces of writing that seem as legit as a donut in a bakery. But here’s the kicker: a recent study suggests that we humans might be falling for the misinformation these sneaky bots generate. Brace yourselves, folks, it’s time for some AI shenanigans!

Researchers from the University of Zurich, donning their lab coats and wielding keyboards, decided to put our wits to the test. They wanted to see if we mere mortals could distinguish between content written by our fellow Homo sapiens and that spewed out by GPT-3, the linguistic wizard announced in 2020 (although not as advanced as the shiny new GPT-4 that graced us earlier this year). The results? Drumroll, please! Participants performed just slightly better than a monkey playing darts, with an accuracy rate of 52 percent. Turns out, telling whether a text was written by a human or an AI was harder than locating the last cookie in the jar.

But let’s dive deeper into the enigma that is GPT-3. Contrary to what you might think, this AI prodigy doesn’t truly comprehend language like we do. It’s more like a parrot: it learns by studying how we humans use language and mimics our speech patterns with astonishing precision. While it’s fantastic for tasks like translating languages, chitchatting with chatbots, and creating mind-blowing stories, it also has a darker side: it can be misused to spread misinformation, spam, and fake content. Oh, the irony!

The researchers behind this mind-boggling experiment suggest that the rise of AI text generators coincides with another predicament we face: the dreaded “infodemic,” with fake news and disinformation spreading faster than a cheetah chasing its lunch. It’s a troubling scenario, my friends. This study raises a red flag about GPT-3 being used to generate misleading information, especially in sensitive areas like global health. Beware, the bots are coming for you!

To assess the impact of GPT-3-generated content on our fragile human minds, the researchers conducted a survey. They wanted to compare the credibility of synthetic tweets created by GPT-3 with tweets written by the good old flesh-and-blood humans. They focused on topics that are often infested with misinformation, like vaccines, 5G technology, Covid-19, and the ever-evolving theories of evolution.

And what do you know? Brace yourselves for another twist in the plot. Participants were more likely to recognize accurate information when it came from synthetic tweets than when it came from human-written ones — and, at the same time, more likely to be taken in by disinformation when GPT-3 was its author. Yes, you heard that right. GPT-3 became the silver-tongued devil that deceived us better than any human could, and also the clearer bearer of truth. Give it a crown already!

But wait, there’s more! Participants evaluated the synthetic tweets at warp speed compared to the human-written ones. It seems our brains find AI-generated content easier to process and evaluate. Fear not, my fellow humans: being quicker to judge doesn’t make us better at it, so keep those critical-thinking muscles flexed. Phew, we’re not completely obsolete just yet!

To add a pinch of mischief to this already amusing experiment, the study uncovered an intriguing fact: when asked for accurate information, GPT-3 mostly played by the rules and delivered. When asked for disinformation, however, it occasionally went rogue in the best possible way, throwing its virtual hands up in the air and refusing to comply. It seems our AI friends have something like a moral compass, but even they can slip up now and then, just like Aunt Mabel when she tries to bake a cake.

So, what does this study teach us, besides the fact that AI text generators are potential mischief-makers? It shows that we humans can be quite vulnerable to the seductive allure of misinformation spun by these crafty bots. Sure, they can create highly credible texts, but it’s imperative for us to stay sharp, keep our wits about us, and develop effective tools to detect and combat misinformation. Stay vigilant, my friends, and let’s show those AI tricksters who’s boss in this unpredictable game of wits!
