AI-Generated Disinformation Poses a Greater Threat, Study Shows

Nagwan Lashin
3 min read · Jun 29, 2023


Photo by Brian McGowan on Unsplash

Recent research conducted by the University of Zurich has revealed that disinformation created by artificial intelligence (AI) may be more convincing than disinformation written by humans.

The study, led by Giovanni Spitale and published in Science Advances, found that individuals were 3% less likely to detect false tweets generated by AI compared to those written by humans.

This credibility gap, though small, is worrying, as AI-generated disinformation is expected to become increasingly prevalent.

Spitale believes that more advanced language models, such as OpenAI’s GPT-4, could widen this gap between AI-generated and human-written disinformation even further.

The Study Methodology:

To assess people’s susceptibility to different types of text, the researchers selected common disinformation topics, including climate change and COVID-19.

They then prompted OpenAI’s language model GPT-3 to generate ten true tweets and ten false ones, and also collected a random assortment of true and false tweets from Twitter.
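For readers curious what that generation step might look like in practice, here is a minimal sketch using the pre-1.0 `openai` Python SDK’s Completion endpoint (the interface available in mid-2023). The prompt wording, model choice, and parameters below are hypothetical illustrations, not the study’s actual setup, which this article does not describe.

```python
# Minimal sketch of generating true/false tweets with a GPT-3-family model
# via the pre-1.0 openai Python SDK. Prompts and parameters are hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_tweet(topic: str, truthful: bool) -> str:
    """Request one short tweet on a topic, either accurate or false."""
    stance = "factually accurate" if truthful else "containing a false claim"
    prompt = f"Write a short tweet about {topic} that is {stance}."
    response = openai.Completion.create(
        model="davinci",   # base GPT-3 model; the study's exact model settings are not given here
        prompt=prompt,
        max_tokens=60,     # tweets are short
        temperature=0.7,
    )
    return response.choices[0].text.strip()

topics = ["climate change", "COVID-19"]
true_tweets = [generate_tweet(t, True) for t in topics * 5]    # ten true tweets
false_tweets = [generate_tweet(t, False) for t in topics * 5]  # ten false tweets
```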

The study involved 697 participants who were asked to complete an online quiz in which they had to discern whether the tweets were AI-generated or collected from Twitter, and whether they contained accurate information or disinformation.

The findings indicated that participants were 3% more likely to believe false tweets written by AI than those authored by humans.

Factors Contributing to AI’s Persuasiveness:

The researchers are still uncertain about the reasons behind the increased belief in AI-generated tweets.

Spitale suggests that GPT-3’s text tends to be more structured than human-written text and also more condensed, making it easier for readers to process.

This combination of structured and concise information may contribute to the perceived credibility of AI-generated disinformation.

The Rise of Generative AI and the Risk of Misuse:

The proliferation of powerful, accessible AI tools such as GPT-3 makes it possible for malicious actors to create false narratives quickly and cheaply.

The ability of these models to generate convincing but false text significantly complicates efforts to combat conspiracy theories and disinformation campaigns.

Countermeasures, such as AI text-detection tools, are still in the early stages of development and remain far from reliably accurate.

OpenAI’s Response:

OpenAI, the organization behind GPT-3, acknowledges the potential for its AI tools to be misused for large-scale disinformation campaigns, despite its policies prohibiting such actions.

In a report published in January, OpenAI stated that it is nearly impossible to prevent the misuse of large language models for generating disinformation. While OpenAI has not yet commented on this specific study, it has cautioned against overestimating the impact of disinformation campaigns.

The company emphasizes the need for further research to identify populations most vulnerable to AI-generated disinformation and to understand the relationship between AI model size and the persuasiveness of its output.

The Importance of Caution:

Jon Roozenbeek, a postdoctoral researcher specializing in misinformation at the University of Cambridge, urges caution before panicking over the implications of AI-generated disinformation.

Although AI may make spreading disinformation cheaper and more efficient than human-staffed troll farms, he points out that moderation on tech platforms and automated detection systems still stand in the way of its widespread influence.

Roozenbeek emphasizes that the mere ability of AI to produce slightly more persuasive tweets does not automatically render everyone susceptible to manipulation.

Conclusion:

The University of Zurich study suggests that AI-generated disinformation may pose a greater threat to society than disinformation written by humans.

The findings suggest that individuals are less likely to detect false tweets generated by AI, highlighting the potential for AI-driven disinformation campaigns to deceive and manipulate the public.

As AI models continue to advance, robust countermeasures, improved text-detection tools, and comprehensive research become increasingly critical to combating this growing threat.
