Illustration via Joe Amditis.

Guide: Misinfo stories you can report right now — and how to find them — with a new dashboard from the News Literacy Project

By Dan Evon, Senior Education Design Manager for the News Literacy Project

--

With so much misinformation swirling around the 2024 election, it can be difficult to decide what to write about. Meanwhile, research shows that prebunking — or helping people identify themes and tactics used to spread misinformation — can be an effective way to help people avoid being misled. The News Literacy Project has a resource that reporters, researchers, and academics can use to identify misinformation trends and alert audiences.

The News Literacy Project’s Misinformation Dashboard: Election 2024 catalogs false online rumors along two broad dimensions: the type of misinformation being spread and the subject (or theme) of each false claim. Within these broad categories, the examples are broken down a step further into more specific classifications. In the Fabricated Content type category, for example, you’ll find Tactics such as Fabricated Images and Sheer Assertions. In the Candidate Image theme, you’ll find narratives related to claims that distort a candidate’s fitness for the job or portray them as an authoritarian dictator.

This categorization system was designed with the goal of identifying the falsehoods that are being repeated about the candidates online, as well as the methods of misinformation that are being used to spread those false claims. A public view of NLP’s database can be seen here.

But we are also making our Airtable available for reporters and researchers (password: NLPDB2024). With this table, you will have filtering options that aren’t available to the general public and that allow for original research and reporting.


Find out what’s trending

The Airtable’s “Group” feature allows users to customize the view of the database for an easy way to tally the number of entries into each category.

Say, for example, you want to write a story about the most popular tactics being used to spread false claims: group by type, then add a subgroup for tactics. Minimizing these categories provides a quick glimpse of how many examples populate each one. Here, we can see that Tricks of Context is the most popular misinformation type and that presenting genuine media in an entirely new, false context is the most popular tactic.
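If you'd rather work from a CSV export of the Airtable than click through the interface, the same group-and-tally can be scripted. A minimal sketch in Python, using made-up sample rows and hypothetical column names ("Type" and "Tactic" — check the actual export headers before relying on them):

```python
from collections import Counter

# Hypothetical rows standing in for a CSV export of the dashboard's
# Airtable; the real column names and category labels may differ.
rows = [
    {"Type": "Tricks of Context", "Tactic": "False Context"},
    {"Type": "Tricks of Context", "Tactic": "False Context"},
    {"Type": "Fabricated Content", "Tactic": "Fabricated Images"},
]

# The scripted equivalent of grouping by type with a tactic subgroup:
# count entries per type, then per (type, tactic) pair.
by_type = Counter(row["Type"] for row in rows)
by_tactic = Counter((row["Type"], row["Tactic"]) for row in rows)

print(by_type.most_common(1))    # the most common misinformation type
print(by_tactic.most_common(1))  # the most common (type, tactic) pair
```

With a real export, you would read the rows in with Python's built-in `csv.DictReader` instead of hand-writing them.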

Go for specificity

The same group feature can be used to drill down into the false narratives spreading online.

When we group by theme, we can see that Candidate Image claims, or falsehoods that malign a candidate’s character, appearance, or reputation, are far and away the most common, accounting for 264 of the 641 total claims in the database as of this writing.

We can then add a subgroup to further refine our search. Adding a subgroup for Election Narrative provides a tally for all of the subcategories within that theme. This makes it easy to see the most common falsehoods under any given theme.

In Election Integrity, for example, we see that claims about noncitizens voting lead the pack, alongside claims about fraud in the 2020 election and about a candidate’s eligibility to run.

Consider the role of artificial intelligence

The database also tracks the use of artificial intelligence in the spread of misinformation.

How? Let’s say, for example, you want to see all the claims maligning a candidate’s character that used an AI image. You’d group by theme and narrative, click on Candidate Image, and then add a filter using the AI field. Despite legitimate concerns about the potential for AI to turbocharge misinformation this election season, the far more low-tech method of false context (mentioned above) still dominates.
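Scripted against an export, that theme-plus-AI filter is a one-line list comprehension. Another minimal sketch with hypothetical field names ("Theme" and "AI" — assumed here to be a checkbox-style column marking claims that used AI-generated media):

```python
# Hypothetical rows mimicking the Airtable export; field names are
# assumptions, not the dashboard's confirmed schema.
rows = [
    {"Theme": "Candidate Image", "Tactic": "False Context", "AI": False},
    {"Theme": "Candidate Image", "Tactic": "Fabricated Images", "AI": True},
    {"Theme": "Election Integrity", "Tactic": "False Context", "AI": False},
]

# The scripted version of grouping by theme and then filtering on the
# AI field: keep only Candidate Image claims flagged as using AI.
ai_claims = [r for r in rows if r["Theme"] == "Candidate Image" and r["AI"]]

print(len(ai_claims))
```

Swapping the filter condition (for instance, `not r["AI"]`) is how you'd reproduce the comparison in the paragraph above: tallying low-tech false-context claims against AI-generated ones.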

Each entry in the database is accompanied by an image or video record of the falsehood and a link back to supporting evidence. This media can be easily downloaded and used in any stories you may write. We hope that this collection of misinformation examples provides valuable insight into the sort of content that people are encountering online.

Dan Evon is the lead writer for RumorGuard, a website by the nonpartisan nonprofit News Literacy Project that helps people learn to debunk viral misinformation.

🗳️ Contact the Democracy Day organizing team!

Email info@usdemocracyday.org, sign up via Airtable here, or check out the Democracy Day project page to learn more about what pro-democracy reporting looks like in practice.

About the Center for Cooperative Media: The Center is a primarily grant-funded program of the School of Communication and Media at Montclair State University. Its mission is to grow and strengthen local journalism and support an informed society in New Jersey and beyond. The Center is supported with funding from Montclair State University, Robert Wood Johnson Foundation, Geraldine R. Dodge Foundation, Democracy Fund, the New Jersey Civic Information Consortium, the Independence Public Media Foundation, Rita Allen Foundation, Inasmuch Foundation and John S. and James L. Knight Foundation. For more information, visit centerforcooperativemedia.org.
