Why Fact-checking Alone Won’t Stop Fake News

When nudged, people will share accurate information online, but ideology still influences behavior

MIT IDE
MIT Initiative on the Digital Economy
4 min read · Oct 15, 2020



By Paula Klein

Why can’t we easily stop the spread of misinformation and fake news online? It’s a simple, critical, and timely question with complex answers that were discussed at a recent MIT IDE seminar presented by MIT Sloan Professor David Rand.

The topic is acute in the current political climate — calls for breaking up large social media platforms are taking place on Capitol Hill, while Facebook and Twitter are self-enforcing anti-hate speech rules on their sites. But Rand has studied why people believe and share misinformation and “fake news” for several years. He focuses on understanding political psychology and polarization, and he bridges the fields of cognitive science, behavioral economics, and social psychology.

At the IDE seminar, Rand began by explaining why two of the most common interventions against misinformation — warnings based on professional fact-checking, and emphasizing the source of information — fall short.

First, he discussed how social media platforms like Twitter and Facebook use technical solutions and algorithms to identify misinformation, “but that’s not enough.” When machine learning algorithms are used to flag falsehoods, the training sets are difficult to define and to keep current.

“Truth is not a simple, well-defined concept, and that makes it hard to train models.” A human layer of professional fact-checking helps, but it is laborious and time-consuming to scale, he said.

Second, Rand said that emphasizing the publisher of the information “is surprisingly ineffective because untrusted outlets typically produce headlines that are judged as inaccurate even without knowing the source.” In other words, some news is questioned regardless of the source.

Nudges that Work

Rand spent the bulk of his IDE seminar discussing research that has uncovered two successful alternatives for keeping fake news in check: nudging social media users to think about accuracy, and using crowdsourcing to identify misinformation.

Rand posted a working paper last year showing that when social media users were nudged to consider the concept of accuracy (by being asked to rate the accuracy of a random headline), it increased the quality of the political news that they subsequently shared. The study was based on survey experiments as well as a field experiment with more than 5,000 Twitter users who previously retweeted Breitbart links. In subsequent work, Rand and his collaborators found similar results for COVID-19 misinformation.

He also explained that when crowdsourcing is used to identify untrustworthy sources or inaccurate articles, laypeople produce judgments that are highly aligned with professional fact-checkers.
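To make that concrete, here is a minimal sketch of how crowd ratings of news outlets might be aggregated and compared with fact-checker judgments. The outlets, scores, and the pearson helper below are illustrative stand-ins, not data or code from Rand’s studies.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each (hypothetical) outlet gets several layperson trust ratings on a 1-5 scale.
# We average them and ask how closely the crowd's average tracks an expert score.
crowd_ratings = {
    "outlet_a": [5, 4, 5, 4, 5],
    "outlet_b": [2, 1, 2, 3, 2],
    "outlet_c": [4, 4, 3, 5, 4],
    "outlet_d": [1, 2, 1, 1, 2],
}
fact_checker_scores = {"outlet_a": 4.8, "outlet_b": 1.9, "outlet_c": 4.1, "outlet_d": 1.3}

outlets = sorted(crowd_ratings)
crowd_avg = [mean(crowd_ratings[o]) for o in outlets]
expert = [fact_checker_scores[o] for o in outlets]

print(f"crowd vs. fact-checker correlation: {pearson(crowd_avg, expert):.2f}")
```

The point of the comparison is simple: if the average of many layperson ratings correlates strongly with expert judgments, a platform can lean on cheap, scalable crowd input rather than waiting on scarce professional fact-checkers.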

Rand’s team used cognitive science to determine why people share misinformation in the first place. At the seminar, he said the researchers wanted to know whether “in a ‘post-truth world’ people don’t care about accuracy” (spoiler: that’s not the case). They conducted experiments to test the idea.

Results were nuanced: most users didn’t spend time thinking about the accuracy of what they shared online, even though they had a strong desire not to share inaccurate content. Despite being able to assess accuracy when directly asked, “veracity didn’t align with sharing,” according to the research. In fact, participants in a survey experiment were twice as likely to share headlines that aligned with their ideology, but were no more likely to share true headlines than false ones, Rand said.

Once nudged to consider accuracy, however, users in the study were three times more discerning about their sharing.
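As a rough illustration of what “more discerning” means here, the sketch below computes a simple sharing-discernment measure, the share rate for true headlines minus the share rate for false ones, before and after a nudge. All of the decisions and numbers are invented for the example; they are not results from the actual experiments.

```python
from statistics import mean

def discernment(decisions):
    """Sharing discernment: share rate for true headlines minus share rate
    for false headlines (higher means sharing is more sensitive to quality)."""
    true_shares = [d["shared"] for d in decisions if d["is_true"]]
    false_shares = [d["shared"] for d in decisions if not d["is_true"]]
    return mean(true_shares) - mean(false_shares)

# Toy sharing decisions: 1 = shared, 0 = not shared.
baseline = [
    {"is_true": True, "shared": 1}, {"is_true": True, "shared": 0},
    {"is_true": False, "shared": 1}, {"is_true": False, "shared": 0},
]
after_nudge = [
    {"is_true": True, "shared": 1}, {"is_true": True, "shared": 1},
    {"is_true": False, "shared": 0}, {"is_true": False, "shared": 1},
]

print(f"baseline discernment:   {discernment(baseline):.2f}")
print(f"post-nudge discernment: {discernment(after_nudge):.2f}")
```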

In the field experiment, messages sent to more than 5,000 Twitter users produced “robust positive effects on the quality of shared links.”

“Via network effects, the impact of nudges spreads to followers, which can amplify the treatment’s effect,” according to Rand. “It’s a simple intervention that’s easy to use” and could easily be deployed by social media platforms, for example through ads, he said. This distributed solution has many advantages, including scalability and user involvement.

At the same time, Rand distinguished between a nudge and a shove. The comment or request needs to be subtle and not accusatory or confrontational. “It should not say you did something wrong, but that you should be paying attention. It’s more innocuous; otherwise, it could backfire.”

Rand concludes that professional fact-checkers alone are insufficient to monitor all sites for accuracy. Scalable solutions — such as making users more discerning and using crowdsourcing to help identify low-quality sites — are promising steps in the right direction.

Read more about this research on MIT Sloan.

Learn more about David Rand and his research.
