Q&A: Renée DiResta on Disinformation and COVID-19
Does disinformation related to the novel coronavirus (COVID-19) offer us a learning opportunity?
In this interview, Oumou Ly, an Assembly: Disinformation staff fellow at the Berkman Klein Center, is joined by Renée DiResta of the Stanford Internet Observatory, to discuss what can be learned about disinformation from the current pandemic. The duo explores questions related to authoritative sources, how platforms manage and moderate false health information, and how harassment and disinformation are interwoven.
The interview has been lightly edited for clarity.
Oumou Ly (OL): Today we’re talking about COVID-19, and the spread of dis- and misinformation related to COVID across the internet. In so many ways, it’s created new problems in disinformation that I think policymakers and those of us who were focused on the issue weren’t necessarily tracking before.
Is there anything new about what we’re learning about disinformation from COVID?
Renée DiResta (RD): It’s been really interesting to see the entire world pay attention to one topic. That is somewhat unprecedented. We have had outbreaks in the era of social media misinformation before: Ebola in 2014, Zika in 2015. So there have been a range of moments in which diseases have captivated public attention. But they usually tend to stay at least somewhat geographically confined in terms of attention.
The other interesting thing has been that even the institutions don’t have a very strong sense of what is happening, unfortunately. There are a lot of unknowns as the disease manifests and potential treatments are broached. Generally, a lot of the mechanics of the outbreak and of the pandemic itself are poorly understood, so the information around them is similarly fuzzy. One of the challenges we really see here is: how do we even know what is authoritative information? How do you help the public make sense of something when the authorities are still trying to make sense of it themselves, and researchers are still trying to make sense of it themselves?
As we all know, information and data voids provide an opportunity for bad actors to manipulate the situation. So you have misinformation, where gaps in knowledge lead to the spread of unfortunately false stories, usually out of a sense of genuine altruism, because people want to help their community. And you have disinformation, where that gap in knowledge is deliberately exploited by actors who have a particular story they want to put out, to further a political aim or amplify a conspiracy theory.
I think COVID has really exposed those gaps. It’s less about interesting disinformation tactics, because the narratives and manipulation pathways around COVID are no different from the ones we were previously looking at with regard to the 2020 election. But what we see with COVID is a lot of real, sustained attention. I think what that’s shown us is that there is a demand for information, and it has thereby revealed gaps in platform curation. They don’t have enough information to surface. They’re struggling with what an authority is and what authoritative information looks like. I think that’s been one of the really interesting dynamics that’s come out of this. Things we thought we understood how to handle when the diseases were things like measles and Ebola have not functioned as smoothly in these extremely confusing times, with a lot of bad information or just inaccurate information.
OL: What makes it so difficult for platforms to prioritize authoritative sources of information, and de-prioritize false content and other sources? Do you think that politically motivated attacks on traditionally authoritative sources of information, like the CDC and WHO, complicate the task of platforms to prioritize what we call “good information”?
RD: Absolutely. There are a couple of things that have happened here. First, the CDC and WHO operate on the basis of scientific consensus. They’re looking for a sufficient consensus before they update guidance or make a claim. So, with regard to something like masks, you saw the “masks are not useful” messaging, which seemed to fly in the face of the evidence people were consuming from direct sources like scientific papers, conflicting with CDC guidance. That turned into a vast conspiracy theory that the CDC was trying to prevent Americans from using masks in order to save them for medical workers.
In reality, it’s much more likely that the guidance they were giving dated from the SARS outbreak. The issue is that the same guidance was put forth to people without an explanation of, “well, look, this is how we thought about it then. Here’s how we think about it now. This is what we’re still trying to understand. Here are the various probabilities. These are the pathways this can go.” So instead of a transparent conversation with the public, in which they accurately conveyed degrees of confidence and potential outcomes, it came across as reticence.
Platforms have recognized that when people are searching for a particular keyword or topic, surfacing the thing that is most popular is not the right answer either; popularity can quite easily be gamed on these systems. But the question becomes what you can give to people instead. Is an authoritative source only an institutionally authoritative source? I think the answer is quite clearly no. But how do we decide what an authoritative source is?
Twitter began to verify, with blue checks, doctors, virologists, epidemiologists, and other reputable people who were out there doing the work of real-time science communication. So the question for the platforms became: how do you find sources that are accurate and authoritative, and that are not necessarily just the two institutions that have been deemed purveyors of good information in the past? And, per your point, unfortunately, attacks on credibility do have the effect of eroding trust and confidence in the long term.
The other interesting corollary was the broken-clock problem, where sources that are notoriously bad did call it [the outbreak] early and, all of a sudden, were re-legitimized: well, look, they got this right.
OL: Have platforms substantially changed the way they respond to and mitigate bad information?
RD: One of the challenges is that real network activism coming from domestic [actors] provides a pretty large amount of the information that people see. If you’re a member of a Facebook group, Facebook does prioritize groups in the News Feed. You will see the hot-button conversations happening in your group because those are the ones getting engagement, so that’s what is going to hit your feed. So, depending on what communities you’re in, you’re seeing very, very different stories of what’s happening.
That’s a real challenge because it means that the idea of shared authority or shared sense-making ability has eroded, even just between groups of people. You see this a lot in the conversations right now around re-opening. Depending on your political alignment, you’re much more likely to be a member of a re-open group, because the communications you’re getting from your trusted media authorities (not just peer-to-peer on the internet, but broadcast media as well) are telling you a very different story depending on whether you’re receiving right-wing or left-wing news in this particular environment. So I think that’s been a pretty significant challenge.
The platforms actually did begin to take steps to deal with health misinformation last year. A lot of the policies that are in place now, and the reason health is treated differently than political content, stem from a sense that there are right answers in health. There are things that are quite clearly true or not true, and those truths can have quite a material impact on your life. Google’s name for that policy was “your money or your life”: the idea that Google search results shouldn’t show you the most popular results, because, again, popularity can be gamed, but should instead show you something authoritative for questions related to health or finance, because those could have a material impact on your life. Interestingly, though, it wasn’t rolled out to things like YouTube and other places that were seen more as entertainment platforms.
The other social network companies began to incorporate that in 2019.
OL: One of the things that we’ve talked about amongst our groups at Harvard is how difficult it is to come up with answers to questions of impact. How do we know, for example, that after exposure to a piece of false content, someone went out and changed their behavior in any substantial way? That’s, of course, difficult to measure, given that we don’t know how people were going to behave to begin with.
Do you think this has offered us any new insights into how we might study questions of impact? Do you think, for instance, that the push of cures and treatments for COVID might be illustrative of the potential for answers to those questions?
RD: People are doing a lot of looking at search queries. When someone brushes up against a piece of bad info, does that change their search behavior? Do they go look for information in response to that prompt? One of the things that platforms have some visibility into, that unfortunately those of us on the outside still don’t, is the connection pathway from joining one group to joining the next. That is the question for me: when you join a group related to re-open, and a lot of the people in the re-open groups are anti-vaxxers, how does that influence pathway play out? Do you then find yourself joining groups related to conspiracies that have been incorporated by other members of the group?
I think there are a lot of interesting dynamics there that we just don’t have visibility into. But, per your point, one of the things we can see, unfortunately, is stuff like stories of people taking hydroxychloroquine and other drugs that are dangerous for healthy people to take. One of the challenges is that you don’t want the media to report on the one guy who did it as if that’s part of a national trend, because then that is also harmful. Appropriately contextualizing what people do in response is a big part of filling the gaps in our understanding.
OL: If you could change one thing about how the platforms are responding to COVID-19 disinformation, what would it be and why?
RD: I really wish we could expand our ideas of authoritative sources and have a broader base of trusted institutions, like local pediatric hospitals and other entities that still occupy a higher degree of trust than major, behemoth, politicized organizations. That’s my personal wishlist.
The other thing is that everybody who works on manufacturing treatments and vaccines for this disease, as we move forward, is going to become a target. There is absolutely no doubt that that is going to happen. It happens every single time. Somebody like Bill Gates could become the focus of conspiracy theories, and people will do things like show up at his house; he is a public figure with security and resources. That is not going to be true for a lot of the people who are doing some of the frontline development work, who are going to become inadvertently famous or inadvertently public figures, unfortunately, just by virtue of trying to do life-saving work. We see doctors getting targeted already.
Everybody who works on manufacturing treatments and vaccines for this disease… is going to become a target.
I think that the platforms really have to do a better job of understanding that there will be personal smears put out about these people. There will be disinformation videos made, websites made, Facebook pages made, all designed to erode public confidence and undermine the work that they’re doing, by attacking them personally. I think we absolutely have to do a better job of knowing that is coming and having the appropriate provisions in place to prevent it.
OL: What do you think those provisions should be?
RD: I think there’s a really interesting debate that we’re going to have to have around “what is a public figure?” I’m not a lawyer, personal disclaimer there, but there are legal distinctions between a “public figure” and a “limited-purpose public figure,” which means you’re an expert in one particular area, or you’ve held yourself out as a public figure in one particular area, but people on the street wouldn’t recognize you.
The Internet is a really interesting place in that you can inadvertently fall into limited-purpose public figure-hood by sending a wayward tweet. We’ve all seen that dynamic play out. Platforms are reluctant to take down criticism of public figures, for good reason: when you run for office, or you are the CEO of a company, or you occupy a position of great power, that comes with an obligation to listen to criticism and to receive feedback from the public. I think it’s different, though, when you are an inadvertent public figure or a limited-purpose public figure; there is a gap in how we think about what obligation that person has to receive these communications. How should we be thinking about that differently?
There is this idea around what it means to be public: if you’re doing work in service to the public, are you de facto opting in to becoming a target on one of these internet services? I think most people probably believe that people who find themselves in that situation should be afforded a degree of extra care, in terms of managing the harassment and the crazy posts and articles and so on and so forth.
OL: That’s a great point. There is certainly a relationship between disinformation and harassment generally. With COVID-related disinformation, there seems to be a very specific way threat actors use targeted harassment either to keep “good” information from becoming widely available enough to counter bad information, or to dampen the credibility of the person or entity attempting to spread “good” information.
Can you talk a little bit more about the relationship between the two?
RD: If you believe that good information is the best counter to bad information, or that good speech is the antidote to bad speech, then you have to understand that harassment is a tool by which those voices are pushed out of the conversation. That is where this dynamic comes into play. You want to ensure that the cost of participating in vaccine research or health communication is not that people stalk your kids. That’s an unreasonable cost to ask someone to bear. And so I think that that is, of course, the real challenge here. If you want to have that counter-speech, then there has to be recognition of the dynamics at play, to ensure that people still feel comfortable taking on that role and doing that work.
This conversation was part of the Berkman Klein Center’s new series, The Breakdown. The first season of the series is produced in collaboration with the Berkman Klein Center’s Assembly program, which for the 2019–2020 year is focusing on cybersecurity approaches to tackling disinformation.
For press inquiries, please email press@cyber.harvard.edu