It Only Takes Seconds to Alert Facebook Users to Fake News

Amid a pandemic and looming election, curbing misinformation is urgent.

Texas McCombs
Big Ideas


Based on the research of Tricia Moravec

In the Information Age, it’s the abundance of misinformation that worries Texas McCombs researcher Tricia Moravec.

Moravec, an assistant professor of information, risk, and operations management, studies fake news and ways to help Facebook users spot it more easily. In a new paper, Moravec and her co-authors, Antino Kim and Alan R. Dennis of Indiana University, found that two simple interventions, especially when used together, have a strong effect on helping people discern real from fake news.

The first intervention was a stop sign icon. The second was a strong statement, “Declared fake by third party fact-checkers.” Both were effective, but when combined, they were almost twice as powerful in spotlighting fake news.
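To picture the combined treatment, here is a minimal sketch of how a news feed might attach both cues to a flagged post. The Post shape, the renderPost function, and the markup are illustrative assumptions of mine, not any platform’s actual code.

```typescript
// Hypothetical sketch: attaching the combined warning (stop sign icon plus
// explicit fact-checker statement) to a flagged post. All names and markup
// are assumptions for illustration, not Facebook's real interface.
interface Post {
  id: string;
  html: string;
  flaggedByFactCheckers: boolean;
}

const WARNING_TEXT = "Declared fake by third party fact-checkers";

function renderPost(post: Post): string {
  if (!post.flaggedByFactCheckers) {
    return post.html; // unflagged posts render unchanged
  }
  // Pair the visual cue (icon) with the explicit statement, the combination
  // the study found roughly twice as effective as either cue alone.
  const banner =
    `<div class="misinformation-warning" role="alert">` +
    `<span aria-hidden="true">🛑</span> ${WARNING_TEXT}` +
    `</div>`;
  return banner + post.html;
}

// Example usage:
const example: Post = {
  id: "post-1",
  html: "<p>Miracle cure discovered!</p>",
  flaggedByFactCheckers: true,
};
console.log(renderPost(example));
```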

“This could be a great step in helping that,” Moravec says. She believes the timing is urgent.

As a politically divided nation gears up for the November presidential election amid the relentless COVID-19 pandemic and protests about racial injustice, 2020 could be the biggest year yet for spreading fake news. Along with Russian bots and online trolls, ordinary Americans are big spreaders of fake news on social media, often unwittingly. Many users have a hard time ferreting out truth from falsehood, says Moravec.

“Ideally, we would see Twitter, and especially Facebook, use some type of flag for misinformation with a brief statement to nudge people to think more critically.” — Tricia Moravec

An Effective Combo

The researchers focused on social media because more than 60% of adults get their news there, and Facebook is the world’s most used platform.

Moravec has already shown how readily Facebook users are beguiled by fake news. In a study published last fall, her team tested whether Facebook’s fake news flag was effective at getting users to spot misinformation. Participants wore wireless electroencephalography headsets that measured their brain activity as they read real and fake political news headlines. They identified false information correctly just 44% of the time and overwhelmingly deemed political headlines “correct” when the headlines matched their own beliefs.

That’s a problem for a democracy, Moravec says. Would a stronger warning about false information be more effective at helping Facebook users identify fake news? That question drove the newest study by Moravec and her two colleagues.

They first tested the stop sign icon and the fake news statement separately, displaying each for either one second or five seconds. One second is enough time to elicit an automatic gut reaction; five seconds can capture the effect of critical thinking. The warnings were accompanied by brief training: a short announcement explaining what they meant. The researchers then tested the stop sign and statement together, training users partway through the study. Finally, to isolate the training’s effect, some participants received it and others did not.
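As a rough illustration of the moving parts, here is a sketch enumerating one plausible condition grid: intervention type, exposure time, and training. The grid is my own framing for exposition; the paper’s actual design may combine these factors differently.

```typescript
// Illustrative condition grid (an assumption for exposition, not the
// paper's exact design): intervention type x exposure time x training.
type Intervention = "icon" | "statement" | "icon+statement";

interface Condition {
  intervention: Intervention;
  exposureSeconds: 1 | 5; // 1s: automatic gut reaction; 5s: critical thinking
  trained: boolean;       // whether the short explanatory announcement was shown
}

const interventions: Intervention[] = ["icon", "statement", "icon+statement"];
const exposures: Array<1 | 5> = [1, 5];

const conditions: Condition[] = [];
for (const intervention of interventions) {
  for (const exposureSeconds of exposures) {
    for (const trained of [true, false]) {
      conditions.push({ intervention, exposureSeconds, trained });
    }
  }
}
console.log(`${conditions.length} illustrative conditions`); // logs: 12 illustrative conditions
```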

The icon and the statement were each effective on their own, even without training, but the combined intervention paired with training was the most effective.

“It was eye-opening,” Moravec says of the effect. “Even we were surprised with just how effective the combination was.”

The researchers did not compare the willingness of Democrats versus Republicans to share fake news, but other studies have found that while both sides are susceptible, conservatives are more likely to believe it and share it.

Attempts by Facebook and Twitter

Misinformation has been around forever, Moravec says, but social media has made it worse. The trend accelerated during the 2016 election, and the COVID-19 crisis has furthered the fake news epidemic.

Facebook started flagging fake news with an icon and a brief warning statement in December 2016 but removed the flag about a year later. The company did not train users to help them understand the warning, Moravec says, and its statement, “Disputed by third party fact-checkers,” was not entirely clear and may have confused users.

Moravec says the company’s decision to abandon the flagging approach was premature. “Given the varied research that shows that flagging misinformation is helpful, it seems like it would be more productive to test another iteration rather than give up completely,” she says.

Facebook says it now uses technology and fact-checkers to identify false information and moves the information lower in the News Feed so it’s less likely to be seen. Facebook also says that people who repeatedly share false news “will see their distribution reduced and their ability to advertise removed.”

In addition, Facebook on Aug. 5 deleted a post from President Donald Trump linking to a Fox News video in which Trump says children are “virtually immune” to COVID-19. Facebook said the post violated rules about posting misinformation about COVID-19, according to Politico. The news site also reported that Twitter directed that the post be deleted before the user could tweet again. By that evening, the video link was dead.

Twitter, meanwhile, has begun using labels and warning messages.

“Twitter has been doing a much better job than Facebook at managing misinformation, since they actively flag misleading information,” Moravec says. “It is a good step that Facebook is taking to at least demote misinformation and punish repeat offenders, but based on the misinformation I have seen about COVID-19 on Facebook, I do not think their efforts are effective in managing misinformation on their platform.”

The Fake News Landscape

A U.S. president who dismisses news he disagrees with as “fake” is undermining legitimate news sources and fact-checkers, Moravec says. With the pandemic raging, “How much could it have helped if Facebook had a flag on false COVID-19 information?” she says.

“Even if it didn’t work for everyone, at least getting some people to change or critically consider some health information is progress.” — Tricia Moravec

Moravec is also concerned about Russia and other countries using Facebook to influence the upcoming presidential election. Facebook could help users avoid being fooled by providing a better tool, she says, such as the combined approach she and her team tested.

“I think it’s ready for prime time,” she says. “At the very least, Facebook needs to try it.”

“Appealing to Sense and Sensibility: System 1 and System 2 Interventions for Fake News on Social Media” is forthcoming in Information Systems Research and is available online in advance of print.

Story by Mary Ann Roser
