Distraction Helps Misinformation Spread. Thinking About Accuracy Can Reduce It.

Jigsaw · Mar 17, 2021

Jonathan Swift wrote that “falsehood flies, and truth comes limping after it.” It’s true that misinformation can spread fast and far, but science’s attempts to understand the nuanced human behavior behind spreading fiction online still leave open questions. Is it because we instinctively gravitate toward highly novel and emotional content? Is it because we lack the information literacy to discern authoritative sources? Or do we simply believe what we want to believe, disregarding countervailing evidence?

New research published in Nature, from a team led by Gordon Pennycook, offers a partial explanation: when we’re online, we sometimes become distracted and simply forget to think about whether information is true. Inspired by these findings, Jigsaw teamed up with the academics behind the study to explore how their laboratory findings might translate into technological features that help fight online misinformation. Our work in this space is still early, but we wanted to share what we’ve learned so far, in the hope of helping (or connecting with) others studying similar issues.

Distraction drives engagement with misinformation

Pennycook and his co-authors argue that most people genuinely want to engage with accurate information online, and that people can reasonably discern the quality of that information, at least when they try. To test this, Pennycook’s team conducted a series of laboratory experiments. First, they asked internet users what characteristics matter most when deciding which information to share on social media. An overwhelming majority reported “accuracy” as the highest priority.

Participants overwhelmingly agreed that accuracy was important when making social media sharing decisions. Adapted from Pennycook, Gordon, et al. “Shifting Attention to Accuracy Can Reduce Misinformation Online.”

They also asked internet users to rate the accuracy of a set of news headlines, half true and half false. People rated true headlines as accurate about 80% of the time and false headlines as accurate only about 10% of the time. Whether a headline supported a favored political position had relatively little effect. But when the researchers asked which of the headlines participants would share online, people became much more willing to engage with misinformation. There was a disconnect between participants’ stated desire for accuracy, their demonstrated ability to judge it, and their actual sharing behavior.

Participants could easily identify false headlines when asked to judge accuracy, regardless of political alignment. However, a headline’s veracity had little impact on sharing intentions. Adapted from Pennycook, Gordon, et al. “Shifting Attention to Accuracy Can Reduce Misinformation Online.”

But why is distraction the best explanation for this disconnect? For one thing, other work shows that internet users who share more misinformation online tend to score lower on tests measuring propensity to stop and engage in analytical thinking. But more importantly, when the researchers subtly shifted attention back towards accuracy before asking which headlines participants would share online, people were up to three times more discerning about the truth of what they shared.
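To make that “discernment” measure concrete, here is a minimal Python sketch of the gap it captures: the difference between acceptance rates for true and false headlines. Only the 80% and 10% judgment figures come from the study as summarized above; the sharing-condition rates below are hypothetical placeholders for illustration.

```python
# Sketch of a "discernment" gap: how often true headlines are accepted
# minus how often false headlines are accepted. The judgment rates are
# the figures quoted in this post; the sharing rates are hypothetical.

def discernment(true_rate: float, false_rate: float) -> float:
    """Gap between acceptance rates for true vs. false headlines."""
    return true_rate - false_rate

# Accuracy-judgment condition: ~80% of true and ~10% of false headlines
# were rated accurate, so the gap is large.
judgment_gap = discernment(true_rate=0.80, false_rate=0.10)  # ~0.70

# Hypothetical sharing condition, where veracity barely matters.
baseline_sharing_gap = discernment(true_rate=0.40, false_rate=0.30)  # ~0.10

# "Up to three times more discerning" after an accuracy reminder would
# mean the sharing gap roughly tripling relative to that baseline.
prompted_sharing_gap = 3 * baseline_sharing_gap  # ~0.30

print(judgment_gap, baseline_sharing_gap, prompted_sharing_gap)
```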

An early prototype accuracy prompt, similar to that used by Pennycook et al., asking users to reflect on the accuracy of a news headline before continuing to browse. (Source: Jigsaw)

This research suggests a potentially powerful way to reduce misinformation online: simply remind internet users to think about accuracy. This approach avoids the challenges of “labeling” information true or false, including the “implied truth effect,” whereby labeling a subset of information as “false” may promote overconfidence in the accuracy of unlabeled information. Furthermore, the approach is inherently scalable, as accuracy reminders can be served liberally across a wide variety of content.

From the laboratory to the Internet

This is where Jigsaw got involved. The research illuminated promising new ways to fight misinformation online, but we needed to see how these ideas held up in more complex scenarios. We enlisted the help of Gordon Pennycook, as well as David Rand and Adam Berinsky, to conduct an additional series of laboratory tests aimed at preparing accuracy reminders for real-world applications. In an effort to promote transparency and contribute to the field, we agreed ahead of time that all data derived from these experiments would be owned by our partners, and that no prohibitions would be placed on academic publication.

We started by designing a new “accuracy prompt” user experience that serves digital literacy tips to promote accuracy, and we designed it to be transparent with users about why they are seeing the feature. Our initial tests showed that these minimally invasive literacy tips increased truth discernment by roughly 50%, an effect similar to that found in the original study. You can read more about our initial work to develop literacy tip accuracy prompts in this preprint of a paper under review at a peer-reviewed academic journal.

An animated version of Jigsaw’s “digital literacy tip” experience. Variations on this design were tested for efficacy across multiple dimensions. (Source: Jigsaw)
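As a rough illustration of how a literacy tip experience like this might hook into a product, below is a hypothetical Python sketch. The function names, tip wording, targeting rule, and serving rate are all assumptions of ours for illustration, not details of the feature Jigsaw actually tested.

```python
# Hypothetical sketch of an accuracy-prompt feature attached to a content
# feed. None of these names, tips, or parameters come from Jigsaw's
# implementation; they are illustrative assumptions only.

import random
from dataclasses import dataclass
from typing import Optional

LITERACY_TIPS = [
    "Is this headline accurate? Take a moment before sharing.",
    "Check the source: do you recognize and trust the outlet?",
    "Read beyond the headline before deciding what to do with it.",
]

@dataclass
class FeedItem:
    item_id: str
    is_news: bool  # assumption: prompts attach only to news-like content

def maybe_show_tip(item: FeedItem, prompt_rate: float = 0.1) -> Optional[str]:
    """Return a literacy tip to display before this item, or None.

    Serving tips at a low, randomized rate keeps the nudge lightweight
    while remaining scalable across many kinds of content.
    """
    if item.is_news and random.random() < prompt_rate:
        return random.choice(LITERACY_TIPS)
    return None

# Example: decide whether to show a tip ahead of a news item.
tip = maybe_show_tip(FeedItem(item_id="abc123", is_news=True))
if tip:
    print(f"Before you continue: {tip}")
```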

We then conducted further experiments demonstrating that the literacy tip approach worked roughly as well when written information was presented in video and image-heavy formats. We also experimented with explicitly branding the accuracy prompts to see whether branding made the feature more or less effective (it didn’t seem to affect the outcome). Other experiments explored a more diverse set of demographics, including individuals with stronger conspiracy beliefs and lower trust in mainstream institutions (these results were inconclusive). And finally, we worked to determine whether people understood the purpose of the literacy tips and found them helpful (they seemed to, on both counts).

Overall, our results suggest that accuracy prompts work across a wide range of experimental settings and are at a point where technology companies could consider testing them on their own services, beyond what is possible in a laboratory environment. We also have research underway with our academic partners to understand where the limits of accuracy prompts might lie, such as their effectiveness across cultures. In the future, we hope to explore how accuracy prompts affect engagement rates and perceptions of high-quality sources across different types of content, along with a range of other potential unintended consequences. We’ll continue to share our research and findings along the way.

Gently reminding internet users to stop and think about accuracy is a great first step, but there is no silver bullet for countering online misinformation. For this reason, we are exploring a range of approaches, including how to “inoculate” people against misinformation by building core information literacy competencies. We will share more about that approach in our next essay.

By Rocky Cole, Research Program Manager at Jigsaw

Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.