Exploring New Ways to Build Resilience to Misinformation

Jigsaw · Published in Jigsaw
3 min read · Mar 17, 2021

Much of the conversation about misinformation online — an ever-evolving, difficult-to-define problem in its own right — centers on identifying it, reducing it, and removing it. Platforms use machine learning systems to identify content that violates their policies and take action on it, either by removing it or, for content that skirts the line of what's permitted, reducing its reach in recommendations. These systems are powerful tools that improve over time. But other approaches can help address the broader objective: making people more resilient to the misinformation they encounter on the internet.

Like others, Google has been studying this issue for some time, examining how to apply proven behavioral science interventions to help people evaluate information quality online. We're hopeful that the broader research community can benefit from us sharing more of our insights. Our goal is to bridge the gap between academic researchers and the technologists developing new approaches to combating misinformation online. We collaborate with anthropologists, cognitive scientists, political scientists, and psychologists, alongside representatives from affected populations and our colleagues at Google who study user experience, all with the goal of contributing to ongoing research on countering online misinformation. Our ultimate objective is to develop new technology that helps mitigate the negative impact of misinformation.

We are fortunate not to be starting from scratch, and technology companies continue to refine their approaches. Many online platforms already surface "labels" and "information panels" alongside content, providing additional context from news publishers, fact-check organizations, or online encyclopedias. Fact checks play an invaluable role in the information ecosystem: they provide people with important context, they are a useful source of truth for machine learning models, they alert platforms to content that requires special attention, and they help us measure progress in the fight against misinformation.

Academic studies suggest that these features can reduce false beliefs, albeit modestly. And as with all interventions, they have limitations. Depending on how they are designed, labels or fact checks can be hard to scale, different users may respond to them differently, and it's not always possible to label content that warrants added context right away. Nearly all features built on fact checks also share a common limitation: they are designed for people who trust traditional institutions.

Part of our research focuses on people who distrust traditional institutions, including people who subscribe to conspiracy theories. During our ethnography of conspiracy theory adherents, conducted in partnership with ReD Associates, we observed firsthand the shortcomings of traditional "debunking" methods with this group: committed believers in misinformation feel their knowledge viscerally and are resistant to debunking efforts like fact checks.

This is still a new and growing area of research that may not result in new technology in the short term. We are committed to supporting the broader research community throughout our exploration process, and we'll share data about our research goals and findings along the way. We will also continue to uphold high standards of research ethics, for example by obtaining approvals from academic institutional review boards and consulting with civil society experts.

To begin, we're sharing some initial insights from our research on two misinformation intervention strategies — accuracy prompts and inoculation.

By Yasmin Green, Director of Research & Development at Jigsaw


Jigsaw

Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.