What if anyone could contribute and make a difference in fighting misinformation?

John Marcom
Published in Gigafact
Mar 28, 2023

Yu Ding, a PhD candidate at Columbia Business School, notes the volume of misinformation has continued to outpace the efforts of the expanding group of fact-checking organizations around the world.

Could ordinary people help? His research, outlined in a working paper co-authored with his advisor, Columbia professor Gita Johar, finds that when people are asked to assess the veracity (i.e., the truth or falsehood) of a news article, it’s hard for them to set aside the built-in biases and assumptions that nearly everyone carries. That skews the results.

If, instead, a reader is asked to read two articles side by side and rate how similar their contents are, the question avoids triggering prior assumptions and beliefs, yielding a much fairer, more objective assessment. Those kinds of similarity ratings, applied across dozens of articles, could help sort through the news and identify the more credible articles. The paper demonstrates across multiple experiments that the method works, delves into how these assessments could augment existing fact-checking efforts, and points to some promising directions for future research and implementation.

Gigafact Foundation’s John Marcom asked Yu and Gita more about what they have learned.

Q: Could ordinary people really help fight misinformation?

Yu: It does seem that, despite increased efforts and collaboration, there will never be enough experts and professionals to fact-check all the news. That makes us think that we definitely could use help from lay consumers.

Of course, there are plenty of consumers who read news, and many could be motivated to help make it better. But people bring in biases when judging the veracity of news, based on their prior beliefs — what researchers have called motivated reasoning. So it’s hard to know each lay consumer’s fact-checking accuracy.

Some previous research proposed using a politically balanced crowd. But a balanced crowd may not provide accurate responses, either.

There might be a way to recruit specific subgroups of the crowd who can rate a given topic more accurately. But researchers and policy-makers do not know, for every topic, which crowd-based factors can bias veracity judgments. For example, would the crowd’s veracity ratings of news headlines about COVID-19 vaccination be biased by their religious beliefs, political ideologies or their vaccination status?

Gita: When you’re talking about crowdsourcing reviews of products or services, it’s one thing. Those are subjective opinions. When you have some objective criteria, then it becomes really tricky, given the problems with motivated reasoning. I don’t think there’s been much success yet in leveraging the crowd to do those things.

Gita Johar, Columbia Business School

Q: So asking people to rate similarity, as opposed to veracity, helps?

Yu: It can. As I said earlier, a veracity judgment is belief-related, so it sits very close to people’s identity and their core beliefs. They tend to believe only the things they want to believe, rather than give us an objective rating of the news. However, when people are asked to rate the similarity between two news articles, or between the arguments made in two news articles, they will ask themselves, “Well, do they really share the same argument or a different argument?”

So that helps reduce the bias and gives us a more reliable, high-consensus rating across people of different backgrounds and prior beliefs. As long as one of the articles is rated on veracity by experts, we can predict the veracity of the second article using the crowd’s similarity ratings.

Gita: The whole method of course needs a lot of validation. But the starting point is that you can establish a distinction between these two types of judgments. With articles about, say, climate change, the effect of political affiliation on veracity judgments is strong. But while a veracity judgment is correlated with ideology in different ways for different topics, the similarity judgment is not.
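To make that mechanism concrete, here is a minimal sketch in Python of one way a platform could combine the two signals: expert verdicts on a few anchor articles, plus crowd similarity ratings between article pairs. This is not the model from the working paper; the article names, the similarity numbers and the similarity-weighted average below are illustrative assumptions only.

```python
# Minimal sketch (not the authors' model): propagate expert veracity verdicts
# to an unchecked article via crowd similarity ratings. All data is made up.

# Expert verdicts for "anchor" articles: 1.0 = rated true, 0.0 = rated false.
expert_veracity = {"article_A": 1.0, "article_B": 0.0}

# Crowd similarity ratings between article pairs, averaged and scaled to [0, 1].
crowd_similarity = {
    ("article_A", "article_X"): 0.9,  # X makes much the same argument as A
    ("article_B", "article_X"): 0.2,  # X makes a quite different argument from B
}

def predict_veracity(article, expert_veracity, crowd_similarity):
    """Similarity-weighted average of expert verdicts on anchor articles."""
    weighted, total = 0.0, 0.0
    for (a, b), sim in crowd_similarity.items():
        if article not in (a, b):
            continue
        anchor = b if a == article else a
        if anchor not in expert_veracity:
            continue
        weighted += sim * expert_veracity[anchor]
        total += sim
    return weighted / total if total else None

print(predict_veracity("article_X", expert_veracity, crowd_similarity))
# ~0.82: article_X leans toward the expert-verified argument in article_A
```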

Q: Looking ahead, how do you think your findings might lead to practical steps for news providers or fact-checking organizations?

Yu: Both third-party fact-checkers (such as Gigafact contributors, or PolitiFact) and news platforms (such as Twitter or Facebook) could use our model by offering their users a chance to rate news similarity. It could take the form of a pop-up asking for a similarity rating after a reader has read two articles on the same topic, or a request, after a reader has read one article, to submit another article that makes a similar or different argument.

I think there could be many other creative ways for companies and platforms to test and employ similarity judgments, so as both to engage lay consumers in the fact-checking task and to obtain accurate responses from them.

Yu Ding, Columbia Business School

Q: Why might people be interested in helping out this way?

Gita: I see Wikipedia as a nice analogy. People contribute, and they do it out of interest: you pick the topics you care about and contribute to those. Similarly, one could think about recruiting people based on their interests. If you are interested in climate change, you can join the citizen army and contribute to making sure that the information out there is accurate. If you are interested in public health, then you can join with people who are interested in making sure that the public health information ecosystem is cleaner.

On many things, you hear people saying, “What can I do to make a difference?” This could be a concrete way in which every single person could contribute and make a difference. And when people contribute and feel like they’re making a difference, it gives them a sense of purpose. In the end, I think it increases their trust, both in the information and more generally.

Yu: If we can invite more consumers to participate in fact-checking processes, their trust in fact-checked results should increase through that engagement. They would come to know that the results are reviewed by people similar to themselves, rather than checked solely by experts they do not know.

More participation in the process could also lead to consumers being more cautious about the content they share online, so hopefully they will share less misinformation. Having more participants could help create more news consumers who feel empowered, and increase the level of vigilance throughout the ecosystem.

Q: I understand Repustar (which created the concepts employed by Gigafact) played at least a minor role in spurring your thinking about this topic.

Gita: I’ve been interested in misinformation for many, many years. When the whole fake news phenomenon blew up around the time of the 2016 elections, I got really interested in studying why people don’t fact-check.

Yu: We were already thinking about this topic when one day in New York we met Chandran [Sankaran, Repustar’s founder and CEO] over coffee. He shared with us what Repustar was trying to do, and explained his thoughts on how comparing similar news stories could prove helpful.

That conversation inspired us to validate an approach that sidesteps the biases we knew would color judgments about falsehoods. If two articles make similar (vs. different) arguments, they should also share a similar level of truthfulness or falsehood, so similarity can help scale up fact-checking efforts.

We got our first set of news articles from Repustar (which ran the project that led to the founding of Gigafact), and we used those articles to run our very first two experiments. Repustar, and now Gigafact, have been great partners, and we look forward to continuing to work with the team to implement some of these ideas.

Portions of this conversation have been condensed and edited for clarity.

John Marcom
Gigafact

VP Content, Yahoo Finance. First a reporter (WSJ, Forbes), then the biz side (Time, FT, Yahoo, Future PLC). Cofounder, Gigafact Foundation. Big fan of facts.