5 Questions for… David Scales

WNH Editors
What’s Next Health
8 min read · Jan 31, 2024

David Scales, MPhil, MD, PhD, is an assistant professor of medicine at Weill Cornell Medical College and Chief Medical Officer at Critica, which develops and tests new methods of advancing public understanding of scientific evidence. Dr. Scales and his colleagues Drs. Sara and Jack Gorman partner with Kathleen Hall Jamieson, PhD, director of the Annenberg Public Policy Center at the University of Pennsylvania and co-founder of FactCheck.org, on an RWJF-funded project to develop and test a new protective knowledge model for addressing misinformation. What’s Next Health reached out to Dr. Scales to learn more about the project.

This interview is part of our 5 Questions For… series, where we learn about the ways RWJF’s Pioneering Ideas for an Equitable Future grantees are helping us get to a healthier tomorrow by paving the way today.

Q: What are you learning through this work? What do you hope to accomplish in the next twelve months?

A: Our collective teams at Critica and Annenberg have been developing what we call an “integrated cross-platform media model” for combating misinformation. A lot of research on misinformation has been platform-specific and focused on just one aspect of the problem: some people are doing tracking, some are focused on response, and others on prevention. We saw a need to look across that whole spectrum. That’s where we saw a gap, so we submitted a proposal to RWJF to develop a new approach using public health and epidemiological principles.

As our work progresses, we’ve learned that the epidemiological model is really useful for what I would call “dangerous misinformation.” Ask yourself: what is the Ebola or the cholera of misinformation? What is the stuff that’s going to go viral and be really dangerous if it spreads all over the place? Rumors like “the vaccine can change your DNA” play on people’s fears and can spread very quickly. My son is in daycare, so I think of pink eye or lice or another contagious health issue where we have training and procedures in place to help mitigate the spread. The worst viral rumors work similarly: we need health workers who are already trained and procedures in place so corrosive rumors can be quickly addressed.

We’ve learned a lot by applying the epidemiological model, but we’ve also learned where it falls short, given that some types of misinformation slowly undermine and erode trust. Here an environmental health analogy can be more useful: misinformation operates more like pollution. It may not kill you suddenly, but it will have dramatic health consequences over time, especially if you live in an environment where you’re awash in it without effective mitigation. We really need both paradigms to address the full spectrum of misinformation and to think more broadly about our information environments and their impact on our health.

Two other quick points about what we’re learning. Misinformation often hits differently in different communities or populations. Claims that the vaccine changes your DNA, for example, catch fire in some communities more than others. In some ways this parallels what we know about disease susceptibility. As we’ve learned during the COVID-19 pandemic, people with underlying comorbidities, like diabetes or hypertension, are more susceptible to the virus than others. So it is with misinformation: it doesn’t all land the same, and some communities are predisposed to be more susceptible.

We’re also beginning to better understand why some communities are more resilient to some misinformation tropes even while more susceptible to others. For some, invoking historical injustices resonates deeply and can be more persuasive than a biological claim about vaccines changing DNA. This is more an illustrative example than an evidence-based one, because the evidence is still emerging. We need more research to understand what factors help engender that resilience and what weakens it.

That leads to my final point. One of the major ways we hope to contribute going forward is figuring out how to effectively evaluate interventions in this space. We recently published a paper that brings quasi-experimental paradigms and evaluation mechanisms into the work we’re doing in online spaces. We hope to produce a prospective study to assess whether the work we’re doing with what we call “infodemiologists” (think of a trained corps of online interveners in and of communities) is effectively addressing misinformation in online spaces. We want the results of that study to lay the groundwork and set a precedent for how such interventions can be evaluated moving forward.

Currently, though, we’re in a challenging environment where most of the data we would use for analysis is proprietary and inaccessible to researchers because platforms don’t allow access to it. Although a new European law taking effect this year will force them to share more data, we’re currently mostly in the dark about what is actually effective, except for what the platforms choose to release and publish in very limited collaborations with researchers. It’s unfortunate, akin to knowing a restaurant is the origin of a foodborne disease epidemic but not being allowed to investigate the facility.

Q: What signals of the future or emerging trends were you noticing that led you to want to do this project?

A: We got our first grant to combat misinformation in November 2019 because we were worried that the evolving information ecosystem was ripe for infodemics. When the pandemic hit, it confirmed our concerns.

Now, the advent of generative AI magnifies this challenge. In the misinformation world, we often talk about what has changed compared with 40 years ago. Misinformation has always existed, so why do we suddenly care? What’s really changed is the speed and the scale at which misinformation can travel, aided by social media and the Internet.

AI has the potential to again massively increase the speed and the scale at which misinformation can be produced and travel. While it will do powerful and impressive things (just like the Internet), how do we make sure that we get the benefits of such an amazing tool while minimizing the negative side effects?

Ultimately, we can’t just try to debunk or inoculate against misinformation. We need a community-based approach, and that’s reflected in our work with infodemiologists. They’re trying to understand the information flowing within communities and the information needs of those communities. In that role, they help make communities more resilient to misinformation however it might flow, whether through generative AI or through actors deliberately trying to spread disinformation.

Q: What one thing should people read, watch or listen to that will help them understand more about your ideas?

A: A paper in the Annals of Internal Medicine from the American Board of Internal Medicine and coauthors (full disclosure, I’m one of them) captures important frameworks for thinking about this issue. Its conclusions put into context where we are in this post-pandemic phase and what we can learn from the interventions employed to address the accompanying infodemic.

And there’s more work on the way. Be on the lookout for something from the World Health Organization (WHO) and the National Academies of Sciences, Engineering, and Medicine over the next few months.

Q: Looking ahead five, ten, fifteen years from now, how do you see this work contributing to a healthier, more equitable future?

A: My hope for five, ten, or even fifteen years from now is to see coalitions in place that allow for collaboration, information sharing, and partnerships to reduce duplication of effort and increase the impact of our field. For example, there are a lot of different groups out there doing social listening. Are there ways we can consolidate some of that social listening so we don’t have so many different systems? Could we instead have more robust ones that can be tuned to specific communities and allow for better collaboration with communities, making sure it’s all done with their engagement, consent, and approval?

I would also love to see standard metrics. For reporting a randomized controlled trial, there are established guidelines, like CONSORT. But I really think there needs to be some recognition that our information environments are crucial to our health. I like to argue that information environments are social determinants of health, and as such, we need to think a lot more about the different factors that go into creating them. How do we make sure they’re not polluted with toxins? What are some of the ways we can do that? Inevitably, I think that’s going to require measures that are controversial at this point, like content moderation or regulation.

Q: What didn’t we ask you?

A: Sometimes people ask me, “How would you scale this?” One of the challenges I see when we’re talking about scaling interventions in this space is that we have an entirely new information ecosystem that requires us to build different infrastructures. I think it’s reasonable to ask the people who have benefited from and done an amazing job at creating this new information ecosystem to do much more to address the externalities and side effects. And I think that means investing heavily in some of the public health infrastructure. The example I like to use is sanitation. Today we take for granted sanitation and all of the different things that have drastically reduced pollution in our world, but three hundred years ago they didn’t exist. It was only in the late 1800s and early 1900s that people started to say, “We’re fed up with this and we want the infrastructure necessary to keep our families safe.”

If people live in highly polluted information environments, where the bulk of what they’re exposed to and consume on a daily basis is either entertainment or misinformation, we need a public health approach to tackle that problem. We need to focus on the structural, organizational, community, and individual factors that contribute to their immersion in that environment. This could mean counseling people about better choices for their “information diet,” building community resilience to misinformation, engaging specialist societies to help ensure the doctors and nurses our society trusts so much do not spread misinformation, and having public health practitioners weigh in on potential regulations for social media and AI. Sanitation was not just about improving water; it was about building various infrastructures, through laws and other incentives, to help make our environment healthier. We have a long way to go, but I think we can get there.

The views expressed are those of the interviewee(s) and do not necessarily reflect those of the Robert Wood Johnson Foundation.

WNH Editors
What’s Next Health

Creating and curating content for the publication, What’s Next Health: Exploring Ideas for an Equitable Future.