The Breakdown: Joan Donovan on domestic misinformation

Q&A covers election misinformation, content moderation, and COVID-19

Berkman Klein Center
Berkman Klein Center Collection
Dec 1, 2020 · 11 min read


Oumou Ly (left) interviews Joan Donovan (right) for The Breakdown from the Berkman Klein Center.

The 2020 US election was marked by misinformation spread by bad actors from domestic sources.

In this episode of The Breakdown, Oumou Ly, an Assembly staff fellow at the Berkman Klein Center, is joined by Joan Donovan, the director of the Technology and Social Change Project (TaSC) at the Harvard Kennedy School.

Watch the interview from the Berkman Klein Center

Read the transcript below, which has been lightly edited for clarity

Oumou Ly (OL): Welcome to The Breakdown. My name is Oumou; I’m a fellow on the Assembly: Disinformation Program at the Berkman Klein Center for Internet & Society at Harvard. Our topic today continues our discussion of the election. I am thrilled to be joined today by Joan Donovan, who is the Research Director of the Shorenstein Center on Media, Politics and Public Policy. Dr. Donovan leads the field in internet and technology studies, examining online extremism, media manipulation, and disinformation campaigns. Thank you, Joan, for joining us.

Joan Donovan (JD): Really excited to talk about this stuff today.

OL: So our discussion today centers on domestic actors and their goals in purveying disinformation, and we would be remiss not to mention that at the time this was recorded, just last night, the president fired Chris Krebs, who was the head of CISA at DHS, the agency within the federal government that takes the lead on countering, mitigating, and responding to disinformation, particularly as it relates to democratic processes like elections. Joan, what do you make of this late-night firing, this last-minute development?

JD: If you study disinformation long enough, you feel like you’re looking through a crystal ball in some instances. So we all knew it was coming; even Krebs had said as much. And that’s because countering disinformation is a really thankless job. It wasn’t just that Krebs had built an agency that, over the course of the last few years, had really flown under the radar in terms of any kind of partisan divides; CISA had done a lot of work to ensure election security and cared about the question of disinformation and misinformation as it applied to election integrity, right?

So CISA and Krebs [weren’t] trying to dispel all of the crazy myths and conspiracies out there, but they were doing their part within their remit to make sure that any kind of theory about voter fraud was something that they took seriously and took the time to debunk. And so it wasn’t necessarily just the kinds of tweets that were coming out of CISA; it was really about this website that they had put together, a kind of low-budget version of Snopes, called Rumor Control. And the idea was very simple: provide [a] very rapid analysis of any conspiracy or allegation of election fraud that was starting to reach a tipping point (not everything, but things that started to get covered in different areas, covered by journalists) and give people an anchor that says, “This is what we know to be true at this moment.”

Of course, as the president has come to dispute the election results rather forcefully online, Krebs’ role became much more important as a vocal critic with the truth on his side. And over the last few weeks, especially the last week, we’ve seen Trump move anybody out of his way who would either contradict him in public or seriously imperil his desire to stay in the White House.

OL: That makes me think of something I’ve thought about a lot recently, particularly over the last four years but especially in 2020: the use of disinformation as a political strategy by the GOP. It seems like one pillar of that strategy is simply the will to spread disinformation. The second is to leverage our institutions to legitimize the information that they’re spreading. And the third is to accelerate truth decay in a manner that’s advantageous to the GOP’s particular political aims. How do you respond to that? And how do you think the information ecosystem should be organizing around that problem? That is, we have a major political party in the United States for whom this is [a] strategy.

JD: They’re really just leveraging the communication opportunities in our current media ecosystem to get their messaging across. And in this instance we know where the propaganda is coming from: it’s coming from the White House, from Giuliani, from Bannon, from Roger Stone. How then do we reckon with it? Because we actually know what it is. So the concept of white propaganda is really important here, because when we know what the source is, we can treat it differently. However, the difference between what went down in 2016 and what happened in 2020 is an evolution of these strategies to use some automated techniques to increase engagement on certain posts so that more people see them, coupled with serious, serious money and influence to make disinformation travel further and faster.

The third thing about this communication strategy in this moment is that the problem really transcends social media at this point, where we do have our more legitimate institutions starting to bow out and say, you know what? We’re not even gonna try to tackle this; for us, it’s not even an issue, because we’re not gonna play into allegations that there’s voter fraud, we’re not gonna play into any of these pet theories that have emerged about Hammer and Scorecard and Dominion. And if you’ve heard any of those keywords, then you’ve encountered disinformation. But it does go to show that we are immersed in a hyper-partisan media ecosystem where the future of journalism is at stake. The future of social media is at stake. And right now, I’m really worried that US democracy might not survive this moment.

OL: I completely agree with you. And that is a really scary thing to think. Can you talk a little bit about sites like Parler, Discord, Telegram, and Gab? Just recently, after the election, Facebook disbanded a group called Stop the Steal, and many of those followers found a new home on Parler. The Parler app then went to #1 on Apple’s app download chart. Why are sites like this so attractive to people who have a history of creating affinity around conspiracy theories?

JD: So I think about Gab, for instance. Brian Friedberg, Becca Lewis, and I wrote about Gab after the Unite the Right rally, because Gab really put a lot of energy into recruiting white supremacists who were being removed from platforms for terms-of-service violations. And they were basically saying, “We’re the free speech platform, and we don’t care what you say.” And for Gab that [fell apart] pretty fast, where they did have to start banning white supremacists. Because unfortunately what you get when you make a platform that emphasizes lack of moderation is some of the worst kind of pornography you can ever imagine: no style, no grace, nothing sexy about it, just a bunch of people in diapers, right? Like, it’s just not good. And so right now these minor apps that are saying “we’re unmoderated, come one, come all” are actually facing a pretty strong content moderation problem, where trolls are now showing up pretending to be celebrities. There [are] lots and lots of screenshots out there where people think they heard from some celebrity on one of these apps, and it’s really just a troll with a fake account. But this moment is an opportunity for these apps to grow, and they will say and do anything in order to capture that market segment.


Think about infrastructure as three things: the technology, the people who bring the technology together (including the audiences), and the policies. Right now, we’re having a crisis of stability in terms of content moderation policies. And so people are seeking out other platforms that increase that kind of stability in their messaging, because they wanna know why they’re seeing what they’re seeing, and they want those rules to be really clear.

OL: Picking up on that content moderation thread to talk about larger, more legacy tech platforms more broadly: what is your sense of how well content moderation, and maybe more specifically labeling efforts, work? We saw Twitter and some of the other platforms do a, I think, comparatively good job compared with the past, slapping labels on the president’s tweets, but that’s because there was an expectation that there would be premature claims of victory. What’s your sense of how well labeling minimizes virality?

JD: So we don’t really know or have any data to conclude that the labeling is really doing anything other than aggravating people.

OL: Yeah.

JD: Which is to say that we thought the labeling was gonna result in a massive reduction in virality. In some instances you see influencers taking photos or screenshots of the labels on their tweets on Twitter and then saying, look, it’s happening to me, as a kind of badge of honor. But at the same time, when done well, the labels convey the right kind of message.

Unfortunately, I don’t think any of us anticipated the number of labels that were gonna be needed on key public figures, right? And so I imagined, okay, they’re gonna do these labels for folks that have over a hundred thousand followers on Twitter, or they’re gonna show up on YouTube, in ways that deal with both the claims of voter fraud and the virality. But it’s hard to say if anybody’s clicking through on these labels. I’ve clicked through some of them, and the information on the other side of the label is totally irrelevant. That is, it’s just not about the tweet; it’s not specific enough.

Which is to say that, in watching the tech hearing this week, Dorsey seem[ed] to not really be committed to a content moderation policy that deals with misinformation at scale. And as a result, what you get is these half measures whose effect we don’t really know. And the partners in the fact-checking world that have partnered with Facebook are now under a deluge of allegations that they’re somehow partisan, and they’ve been weaponized in a bunch of different ways. And so I don’t even know what the broad payoff is for a news organization to risk its reputation to do that kind of fact-checking on Facebook, where Facebook isn’t really committed to removing certain kinds of misinformation.

OL: Joan, why is medical misinformation different from other types of misinformation we see circulating, maybe related to elections or other democratic processes?

JD: So when we think about medical misinformation, we’re really thinking about how quickly people are gonna change their behavior, right? If you hear that coronavirus is in the water, you’re gonna stop drinking water, right? If you hear that it’s in the air, you’re gonna put a mask on. And so the way in which people receive medical advice can stop them on a dime and move them in a different direction. And unfortunately, we’ve entered into this situation where medical advice has been polarized in our hyper-partisan media environments. And there have been some recent studies showing the degree to which that polarization is happening, which is really leading people to downplay the risks of COVID-19. And this has a lot to do with them encountering misinformation from what they might even consider trusted sources.


And so when we think about the design of social media in this moment, we actually have to think about a curation strategy for the truth. We need access to information that is timely, local, relevant, and accurate. And if we don’t get that kind of information today, people are going to continue to die, because they don’t understand what the real risk is and they don’t understand how they can protect themselves. And especially as we enter this holiday season, where a lot of people are starting to relax their vigilance and are hoping that it won’t happen to them, that’s the exact moment where we need to crank up the health messaging and make sure that people understand the risks and have seen some form of true and correct information about COVID-19. Because I’ll tell you right now, if you go on social media and you start poking around, sure, there’s a couple of interstitials or a couple of banners here and there, but we can do a lot better to make sure that people know what COVID-19 is, what the symptoms are, how to get tested, how to keep yourself safe, and how to keep your loved ones safe as well.

OL: I’m just curious, what are the sorts of data points you’ve seen that would explain why some people don’t necessarily believe information from authoritative sources about the spread of COVID-19? Why are some people inclined not to believe that authoritative information?

JD: It’s a good question. And part of it has to do with the echo chambers that they’ve been getting information in for years. We’ve started to see it in certain Facebook groups: maybe it’s a local Facebook group that you’ve been in a long time, one that’s about exchanging things in your neighborhood, like a free [stuff] list. And then people slowly start to talk about these really important issues, and misinformation is introduced through a blog post or an article, or “I saw this on the quote-unquote news,” and you find out that they’ve been watching one of these hyper-partisan news sources that [are] downplaying what’s happening. And so you kind of see it in the ephemera. But in our journal, the Harvard Kennedy School Misinformation Review, we’ve published research showing that even within the right-wing media ecosystem, depending on whether someone watches a lot of, let’s say, Hannity versus Tucker, they’re gonna have different associations with the risk of COVID-19, because it’s covered differently by these folks who are at the same outlet. And so it’s really important to understand that this has to do with the communication environment that is designed, and the fact that people are really trying [to help] when they’re sharing things that are novel or outrageous, or things that might be medically incorrect. They’re doing it in some cases out of love. They’re doing it just in case, and maybe you didn’t see this. And it’s an unfortunate situation that we’ve gotten ourselves into, where the more outrageous the conspiracy theory, the more outlandish the claim, the more viral it tends to be. And that’s an unfortunate consequence of the design of these systems.

OL: Thank you so much for joining me today Joan, I really enjoyed our conversation.

JD: That’s great, thank you so much. I really appreciate you doing this series.

