The Breakdown: Foreign Interference and the U.S. 2020 Election

Election interference and platform interventions in the lead-up to November 3

Berkman Klein Center
Berkman Klein Center Collection
12 min read · Oct 27, 2020


Oumou Ly and Naima Green-Riley
Oumou Ly (left) interviews Naima Green-Riley (right) for The Breakdown from the Berkman Klein Center.

Concerns about election interference and disinformation are rampant in the weeks before the U.S. presidential election on November 3. In this episode of The Breakdown, Assembly Staff Fellow Oumou Ly interviews Naima Green-Riley, a PhD candidate in the Department of Government at Harvard University.

Ly and Green-Riley review recent foreign interference, the weaponization of social issues, and various platform interventions to mitigate the spread of mis- and disinformation ahead of the election.

Watch the interview from the Berkman Klein Center.

Read the transcript, which has been lightly edited for clarity:

Oumou Ly (OL): Welcome to The Breakdown. My name is Oumou. I’m a fellow at the Berkman Klein Center on the Assembly: Disinformation program. I am really excited to be joined today by Naima Green-Riley. Naima’s a PhD candidate at the Department of Government at Harvard University, with a particular focus on public diplomacy and the global information space. She also was formerly a foreign service officer and a Pickering fellow. Welcome, Naima. Thanks so much for joining.

Naima Green-Riley (NGR): Well, thank you so much for having me.

OL: So our conversation today centers on foreign interference in the upcoming election, which is drawing close as of the time of this recording. We’re about two weeks out from November 3rd. One big topic on my mind today is the big threat actors this time around… We know that 2016 was sort of a watershed moment in terms of foreign interference for American democratic processes.

In terms of social media manipulation in particular, how do foreign influence efforts in 2020 look in contrast to active measures we saw in 2016? Have the primary threat actors changed, optimized their methods a little bit, or adopted overall new approaches to influencing public opinion?

NGR: Well, you’re definitely right that 2016 marked the first time that the U.S. started to really pay attention to this type of online foreign influence activity. During that election year, we saw a series of coordinated social media campaigns targeting various groups of individuals in the United States and seeking to influence their political thoughts and behavior. The campaigns were focused on sowing discord in U.S. politics, mainly by driving a wedge between people on very polarizing topics, so they usually involved either creating or amplifying content on social media that would encourage people to take more extreme viewpoints. For example, veterans were often targeted. There was one meme, run by Russian trolls, that showed a picture of a U.S. soldier with the text, “Hillary Clinton has a 69% disapproval rate amongst all veterans.” It was clearly intended to have an impact on how those people were thinking.

They might also give misleading information about the elections. For instance, they might tell people that the election date was several days after the actual date, trying to keep people from exercising their right to vote. Some disinformation campaigns told people that they could tweet or text their vote in so they didn’t have to leave their homes.

And also there was exploitation of real political sentiment in the U.S., often encouraging divisions and particularly divisions around race. And so there were YouTube channels that would be called things like Don’t Shoot or Black to Live that shared content about police violence and Black Lives Matter, and some racialized campaigns that were linked to those types of sites would then promote ideas like, “The black community can’t rely on the government. It’s not worth voting anyway.”

So that’s the type of stuff that we started to see in 2016, and many of those efforts were linked either to the GRU, Russia’s military intelligence agency within the General Staff of the Armed Forces, or to the Internet Research Agency, the IRA, of Russia. Many characterized the IRA as a troll farm, an organization that particularly focuses on spreading false information online.

Since 2016, unfortunately, online influence campaigns have only become more rampant and more complicated.

So since 2016, unfortunately, online influence campaigns have only become more rampant and more complicated. We’ve seen a more diverse range of people being targeted in the United States, so not just veterans and African-Americans, but also different political groups from the far right to the far left. We’ve seen immigrant communities be targeted, religious groups, people who care about specific issues like gun rights or the Confederate flag. So, basically, the most controversial topics are the topics that foreign actors tend to drill deep on to try and influence Americans. And so it’s just gotten more and more complex.

OL: I want to pick up on this point, because racial issues in particular so often form the basis of disinformation and influence campaigns; like you said, they are the most divisive, contentious issues. In what ways have you seen foreign actors work to weaponize social issues in the United States just this year, since the death of George Floyd?

NGR: Well, you know, it’s interesting, because we focus a lot on disinformation as targeted towards the elections, but a number of different types of behaviors and activities have been targeted through disinformation, so we’ve seen people try to manipulate things like Census participation or certain types of civic involvement. And the range of platforms that these actors are using is changing too, so we’re seeing text messages and WhatsApp messages being used to impact people in addition to social media.

But after George Floyd was killed, as you might expect, because it’s a controversial issue that affects Americans, there was absolutely this onslaught of misinformation and disinformation that showed up online. There were claims that George Floyd didn’t die. There were claims stoking conspiracy theories about the protests that happened after his death. And I have to say, not all dis- and misinformation is foreign, and that’s why this is such a large problem: there are many domestic actors that engage in dis- and misinformation campaigns as well. The narratives that we’ve seen across this space come from so many different people that sometimes it can be hard to trace the problem to one particular actor or one particular motive.

OL: So in 2016, the Russian government undertook really sophisticated methods of influence, certainly for that particular time and for that election, including mobilizing inauthentic narratives via inauthentic users, leveraging witting and unwitting Americans and social media users. How would you contrast the threat posed by Russia’s efforts with other countries known to be involved in ongoing influence efforts?

NGR: Well, I have to say that Russia continues to be a country of major concern. Just this week, we saw the FBI announce that Russia has obtained some voter registration information in the United States. And Russian disinformation campaigns have definitely re-emerged in the 2020 election cycle, but those campaigns only make up a small share of the overall activities that Russia is engaging in today, all with the goal of undermining democracy and eroding democratic institutions around the world. That being said, we’ve seen other actors emerging in this space. Within the first few months of the COVID-19 pandemic, Chinese agents were shown to be pushing false narratives within the U.S. saying that President Trump was going to put the entire country on lockdown.

Iran has increasingly been involved in these types of campaigns as well. Recently, they used a mass email campaign to affect U.S. public opinion about the elections. One more thing I want to mention is that this is really a global phenomenon. These state actors often outsource their activity through operations in different countries. So, for instance, there are stories of a Russian troll farm that was set up in Ghana to push racial narratives about the United States. And there have also been troll farms set up by state actors in places like Nigeria, Albania, and the Philippines. What’s interesting here is that the individuals who are actually sending those messages are either economically motivated, because they’re getting paid, or ideologically motivated, but they’re acting on behalf of these state actors. And that makes this not just a state-to-state issue, but a real global problem that involves many people in different parts of the world.

OL: So turning to the platforms for a second, what are your thoughts on some of the interventions platforms have announced so far? Like limiting retweets and shares via private message, labeling posts and accounts associated with state-run media organizations… The list of interventions goes on.

NGR: Yeah, all of the things that you mentioned are a good start, I would say. At the end of the day, I think it’s got to be a major focus on, how can we inform social media users of the potential threats in the information environment? And how can we best equip them to really understand what they are consuming? So I think that part of the answer is for these tech companies to, of their own accord, continue to create policies that will address this issue, but we also need better legislation, and that legislation has to focus on privacy rights, online advertising, political advertising, and tech sector regulation. And then we need policies that will enforce this type of thing moving forward. So it can’t all fall on the tech companies without that guidance, because I don’t know that they necessarily have the will to do all that’s necessary to really get at this problem.

Social media companies have already started to label content. They’re also searching for inauthentic behavior, especially coordinated inauthentic behavior online. But I think there is particular work to be done in terms of how we think about content labeling. When platforms are labeling content, they are usually labeling content from some sort of state-run media, and much of the state-run media they’re looking at is not completely a covert operation. It’s not a situation where the media source wants no one to know that it’s associated with the state, but it can still be pretty difficult for the audience to actually determine that the outlet is state-run.

An example would be RT, formerly known as Russia Today. There’s a reason, I think, that it went from Russia Today to RT. If you go to the RT website, you will see a big banner that says, “Question more. RT.” And then there’s lots of information about how RT works all over the world in order to help people to uncover truth. And then if you scroll all the way to the bottom of the website, you’ll see that RT has the support of Moscow, or the Russian government… So it’s difficult for people to actually know where this content is coming from.

At the end of the day, I think it’s got to be a major focus on, how can we inform social media users of the potential threats in the information environment?

And this summer Facebook made good on a policy that they had said they were going to enact for some time, where they now label certain types of content. Basically, they say that they’ll label any content from an outlet that appears to be wholly or partially under the editorial control of a state government. And so lots of Chinese and Russian sites or outlets are included in this policy so far. According to Facebook, they’re going to increase the number of outlets that get this label. And basically what you see on the post is “Chinese state-controlled media,” “Russian state-controlled media,” something to that effect. That’s helpful, because now a person doesn’t have to click through to the website and scroll to the bottom of the page to find out that this outlet comes from Russia.

But at the same time, I still think we need to do more in terms of helping Americans understand why it’s an issue, why state actors are trying to reach them, little old me who lives in some small city or small town in the middle of America, and how narratives can be manipulated. Only if that’s done in connection with labeling more of these types of outlets on social media, I think, will you get more impact.

YouTube does something else. In 2018 they started to label their content, but the way they label it is, they basically label anything that is government-sponsored. So if some outlet is funded in whole or in part by a government, a banner comes up at the bottom of the video that tells people that. And so you’ll see RT labeled as Russian content, but you’ll also see the BBC labeled as British content, so it doesn’t have to do with the editorial control of the outlet.

One final thing on this, because I think this is really important. I have heard stories of people who, let’s say, for whatever reason have stumbled upon some sort of content from a foreign actor. This content might come up because somebody shared something and they watched the video, right? So they watch a video. Let’s say they watch an RT video. Maybe they weren’t trying to find the RT video, and maybe they also aren’t the type of person who would watch a lot of content from RT, but they watch that one video. They continue to scroll on their newsfeed, and then they get a suggestion: “You might enjoy this.” Now the next thing they get comes from Sputnik. It comes from RT again. So now they’re getting fed information about the U.S. political system as portrayed by a foreign actor, and they weren’t even looking for it. I think that’s another thing we’ve got to tackle: the algorithms that are used to uphold tech companies’ business models, because in some cases those algorithms will be harmful to people, actually feeding them information from foreign actors that might have malicious intent.

OL: Naima, this week the FBI confirmed that Iran was responsible for an influence effort giving the appearance of election interference. And in this particular episode, U.S. voters in Florida and, I think, a number of other states received threatening emails from a domain appearing to belong to a white supremacist group. Can you talk a little bit about what in particular the FBI revealed, and what its significance is for the election?

NGR: Right. So there was a press conference on October 21st in which the FBI announced that they had uncovered an email campaign that was orchestrated by Iran. The emails purported to come from the Proud Boys, which, as you mentioned, is a far-right group with ties to white supremacy, and a group that had recently been referenced in U.S. politics in the first presidential debate. But now we know that these emails came from Iran, and some of the individuals who received them posted the contents online. The emails were addressed to the users by name, and they said, “We are in possession of all of your information, email, address, telephone, everything.” Then they said they knew that the individual was registered as a Democrat because they had gained access to the U.S. voting infrastructure. And they said, “You will vote for Trump on election day, or we will come after you.”

So first of all, the emails included a huge amount of intimidation. Second, the senders were purporting to be a group that they were not. And third, they absolutely were attempting to contribute to discord in the run-up to the elections. It’s dangerous activity. It is alarming activity. It’s something that I think will have multiple impacts for some time to come, because even though the FBI was able to identify that this happened, that goal of shaking voter confidence may of course have been partially achieved in that instance.

And so one of the things that is good about this is that the FBI was able to identify this very quickly, to make an announcement to the U.S. public that it happened, and to be clear about what happened. Unfortunately, what they announced was not just that Gmail users were receiving this email and that there was false information in it; they also said that they had information that both Russia and Iran have actually obtained voter registration information from the United States, and that’s concerning as well.

There appears to be good coordination between the private sector and the government on this issue. Google announced the number of Gmail users estimated to have been targeted through the Iranian campaign. Unfortunately, the number is about 25,000 email users, which is no small amount. And so this is just another instance of how not only social media but other parts of the internet, like email, can be used as a way to target American public opinion.

OL: Thank you. Thank you so much for joining me, Naima. I really enjoyed our conversation. I know our viewers will too.
