Researchers Find Ways to Avoid Misinformation Online

Sharing fake news online isn’t as intentional as you may think

By Peter Dizikes, MIT News Office

Stopping the spread of political misinformation on social media may seem like an impossible task. But a new study co-authored by MIT scholars finds that most people who share false news stories online do so unintentionally, and that their sharing habits can be modified through reminders about accuracy.

When such reminders are displayed, the gap between the percentage of true news stories and the percentage of false news stories that people share online widens, as shown in online experiments the researchers developed.

“Getting people to think about accuracy makes them more discerning in their sharing, regardless of ideology,” says MIT Professor David Rand, co-author of a newly published paper detailing the results. “And it translates into a scalable and easily implementable intervention for social media platforms.”

The study also indicates why people share false information online. Among those who shared a set of false news stories used in the study, around half did so because of inattention related to the hasty way people use social media; another 33 percent were mistaken about the accuracy of the news they saw and shared it because they (incorrectly) thought it was true. Only about 16 percent knowingly shared false news headlines.

“Our results suggest that the large majority of people across the ideological spectrum want to share only accurate content,” says Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management and director of MIT Sloan’s Human Cooperation Laboratory and Applied Cooperation Team. “It’s not like most people are just saying, ‘I know this is false and I don’t care.’ ”


Inattention, Confusion, or Political Motives?

The paper, “Shifting attention to accuracy can reduce misinformation online,” was published March 17 in the journal Nature. In addition to Rand, the co-authors are Gordon Pennycook, an assistant professor at the University of Regina; Ziv Epstein, a PhD candidate at the MIT Media Lab; Mohsen Mosleh, a lecturer at the University of Exeter Business School and a research affiliate at MIT Sloan; Antonio Arechar, a research associate at MIT Sloan; and Dean Eckles, the Mitsubishi Career Development Professor and an associate professor of marketing at MIT Sloan.

Observers have offered different ideas to explain why people spread false news content online. One interpretation is that people share false material for partisan gain, or to gain attention; another view is that people accidentally share inaccurate stories because they are confused.

The authors advance a third possibility: inattention and the simple failure to stop and think about accuracy.

The study consists of multiple experiments, using more than 5,000 survey respondents from the U.S., as well as a field experiment conducted on Twitter. The first survey experiment asked 1,015 participants to rate the accuracy of 36 news stories (based on the headline, first sentence, and an image), and to say if they would share those items on social media. Half of the news items were true and half were false; half were favorable to Democrats and half were favorable to Republicans.

Overall, respondents considered sharing news items that were false but aligned with their views 37.4 percent of the time, even though they considered such headlines to be accurate just 18.2 percent of the time. And yet, at the end of the survey, a large majority of the experiment’s participants said accuracy was very important when it comes to sharing news online.

But if people are being honest about valuing accuracy, why do they share so many false stories? The study’s balance of evidence points to inattention and a knowledge deficit, not deception.

For instance, in a second experiment with 1,507 participants, the researchers examined the effect of shifting users’ attention toward the concept of accuracy. Before deciding whether they would share political news headlines, half of the participants were asked to rate the accuracy of a random nonpolitical headline — thereby emphasizing the concept of accuracy from the outset.

Participants who did not do the initial accuracy rating task said they were likely to share about 33 percent of true stories and 28 percent of false ones. But those who were given an initial accuracy reminder said they would share 34 percent of true stories and 22 percent of the false ones. Two more experiments replicated these results using other headlines and a more representative sample of the U.S. population.

Why Share False News?

To test whether these results could be applied on social media, the researchers conducted a field experiment on Twitter. “We created a set of bot accounts and sent messages to 5,379 Twitter users who regularly shared links to misinformation sites,” explains Mosleh. “Just like in the survey experiments, the message asked whether a random nonpolitical headline was accurate, to get users thinking about the concept of accuracy.”

The researchers found that after reading the [accuracy] message, the users shared news from higher-quality news sites, as judged by professional fact-checkers.

A final follow-up experiment, with 710 respondents, shed light on the nagging question of why people share false news. Instead of just deciding whether to share news headlines or not, the participants were asked to explicitly assess the accuracy of each story first. After doing that, the percentage of false stories that participants were willing to share dropped from about 30 percent to 15 percent.

Because that figure was cut roughly in half, the researchers concluded that about half of the previously shared false headlines had been shared because of simple inattention to accuracy. And about a third of the shared false headlines were ones participants believed to be true — meaning roughly 33 percent of the misinformation was spread due to confusion about accuracy.

Moving Toward a Remedy

The remaining 16 percent of the false news items were shared even though the respondents recognized them as being false. This small minority of cases represents the high-profile, “post-truth” type of purposeful sharing of misinformation.

“Our results suggest that in general, people are doing the best they can to spread accurate information,” Epstein says. “But the current design of social media environments, which can prioritize engagement and user retention over accuracy, stacks the deck against them.”

Still, the scholars think their results show that some simple remedies are available to social media platforms.

“A prescription is to occasionally put content into people’s feeds that primes the concept of accuracy,” Rand says.

“My hope is that this paper will help inspire the platforms to develop these kinds of interventions,” he adds. “Social media companies by design have been focusing people’s attention on engagement. But they don’t have to only pay attention to engagement — you can also do proactive things to refocus users’ attention on accuracy.” The team has been exploring potential applications of this idea in collaboration with researchers at Jigsaw, a Google unit, and hope to do the same with social media companies.

This post first appeared March 17 on MIT News.

Support for the research was provided, in part, by the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Omidyar Network, the John Templeton Foundation, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada.


