The Breakdown: Lisa Kaplan on domestic disinformation

Q&A explores threat actors, early detection, and content moderation

Berkman Klein Center
Berkman Klein Center Collection
14 min read · Mar 12, 2021


Oumou Ly interviews Lisa Kaplan for the latest episode of The Breakdown from the Berkman Klein Center.

From 2016 to 2020, research, civil society organizing, industry planning, and national security responses to disinformation proceeded largely from the premise that malign influence was propagated almost exclusively by foreign adversaries. That assumption was upended to a large extent in 2020, when disinformation surrounding the COVID-19 pandemic, the presidential election, and other sustained-attention information events exposed the severity of the threat posed by domestic disinformation.

In this episode of The Breakdown, Oumou Ly is joined by Lisa Kaplan to unpack what accounts for the shift in focus, to consider whether stakeholders might adjust mitigations already deployed to combat disinformation, and to assess how current organizing around the disinformation problem can be reevaluated and repurposed to suit new challenges.

Watch the recording from the Berkman Klein Center for Internet & Society

Read the transcript, which is lightly edited for clarity

Oumou Ly (OL): Welcome to The Breakdown. My name is Oumou, and I’m a fellow on the Berkman Klein Center’s Assembly: Disinformation Program. I am recording today from California, which is why my background doesn’t appear the way it normally does, but I am excited nonetheless to be joined by Lisa Kaplan of the Alethea Group. Lisa founded the Alethea Group to help organizations navigate the new digital reality and protect themselves against disinformation. Thank you for joining us, Lisa. I’m so excited to have a conversation with you about this and many, many other pertinent issues.

Lisa Kaplan (LK): Thanks so much for having me, excited to be here.

OL: Yes. So our conversation today centers on a really big topic in the disinformation space, and that is the shift among a number of different stakeholder groups, including the national security community, academics, civil society, and others, away from focusing on disinformation purely from a national security, foreign policy, and geopolitics perspective. Can you give us a little taste of your background and talk about what compelled you to found the Alethea Group?

LK: So I started the Alethea Group in 2019. And prior to that, I was the digital director on a 2018 Senate campaign.

One of the things about disinformation that I always like to remind people of is that it’s not always a foreign government, and it’s not always people seeking geopolitical ends. The goals really do vary depending on the threat actor. And that’s one of the reasons why the hard work of attribution that we do at Alethea Group is so important. It does depend on who the actor is and what their motive is. Once you know that, you can infer what their goal may be, which can help mitigate a situation before it even starts, or help mitigate a situation that has already escalated into more of a crisis.

So I was working for Senator Angus King, and because he was running against a Democrat and a Republican, we were looking at disinformation narratives from both sides. We weren’t necessarily doing the work of attribution, because we were a campaign with limited resources, but we were trying to understand what narratives were out there. And what we realized is that the disinformation was targeting candidates and targeting issues, but at the end of the day, it’s really targeting voters, us as people. When you think about it in an election context, it’s really straightforward: it’s trying to influence people’s decisions around whether, when, where, and for whom to vote. You know, targeting those decision points. Are you going to vote? And if so, for whom are you gonna cast your ballot?

However, disinformation is not an issue limited to just elections, especially for sophisticated actors. So think foreign adversaries; think for-profit disinformation networks who have built up influence to sell to the highest bidder, or who are building influence to generate ad revenue through clicks, for example. They’re not just talking about one election or one candidate; they’re talking about a variety of different issues. And I say that because the proliferation of threat actors since 2016 has been exponential. What started as primarily Russian activity has grown: according to Oxford, over 80 countries are now actively engaging in social media manipulation, and that doesn’t even account for all of the individuals who have stood up their own operations. To be clear, we also see political consultants on both sides of the aisle engaging in similar social media manipulation tactics. So it really comes down to who the actor is and what the goal is. Now fast forward. One of the things that we saw, unfortunately, is that it took essentially an insurrection attempt at the US Capitol to really catapult this conversation forward.

And there are a variety of research organizations, including ours, that have been beating this drum for several years now. People were saying, oh, it’s just a meme, but no, disinformation really can lead to offline harms. And we know that. We’ve seen that with Pizzagate; we’ve seen it with events such as the temporary shutdown of the vaccination site at Dodger Stadium in Los Angeles.

So we are seeing more and more offline action happening as a result of disinformation. I think what happened at the US Capitol is a really important case study. I know that sounds clinical, because what happened was obviously a horrible day for our democracy, and I think it affected the research community in a variety of different ways; that was a very tough month for everyone. So I don’t wanna sound overly clinical about it, but there are a lot of lessons that can be learned. For example, we can draw a straight line from some of the narratives circulating back in March, saying that the election would be rigged, that there would be political violence, that people needed to start preparing for the worst, to what happened on January 6th. One of the good things to come out of January 6th, though, is that most people get a second chance. Not everybody, obviously; there’s a Capitol police officer who died in the line of duty protecting the Capitol. But we’re able to really learn from this moment and move forward, so that we address this threat in a way that lets us identify online threats and prevent them from becoming offline harm.

OL: Thank you not only for that summation of what happened on January 6th, but also for illustrating that you can draw a through-line from the initial Stop the Steal narratives and organizing to what ended up happening on the 6th, and what is also projected to continue this week on the 4th. Can you talk a little bit about your practice at Alethea Group and how you help organizations navigate this new digital environment?

LK: So at Alethea Group, what we do is detect instances of disinformation, misinformation, and social media manipulation, as well as track other types of online harm such as targeted harassment, to help individuals figure out how to navigate this new digital reality. I think that a lot of times people don’t realize how many options they actually have. For example, what a lot of people are doing right now is analyzing something once it’s already gotten onto Twitter, Facebook, or YouTube, one of these mainstream platforms, and then calling the social media platform and saying, please take it down. That’s one option. But if you’ve gotten to that point, in a lot of ways it may already be too late. So what we do is practice early detection. We catch narratives when they start, and we’re able to then track them and understand how they may be influencing individuals and seeking to change individuals’ behavior. We analyze at the network level, and that enables us to pursue more options.
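To make “early detection” a little more concrete, here is a minimal, hypothetical sketch of one building block such a pipeline might use: flagging a term or phrase whose hourly mention volume spikes well above its trailing baseline, so a narrative can be examined while it is still small. This is an illustration only, not Alethea Group’s actual methodology; the thresholds, the term stream, and the mention counts are all assumptions.

```python
from collections import defaultdict, deque

WINDOW_HOURS = 24    # trailing baseline window (assumed)
SPIKE_RATIO = 5.0    # how far above baseline counts as a spike (assumed)
MIN_MENTIONS = 50    # ignore very-low-volume noise (assumed)

# Per-term history of hourly mention counts.
history = defaultdict(lambda: deque(maxlen=WINDOW_HOURS))

def observe(term: str, mentions_this_hour: int) -> bool:
    """Record one hour of mentions for a term; return True if it spikes."""
    past = history[term]
    baseline = sum(past) / len(past) if past else 0.0
    past.append(mentions_this_hour)
    if mentions_this_hour < MIN_MENTIONS:
        return False
    # A term with no history that suddenly appears at volume also counts.
    return baseline == 0.0 or mentions_this_hour >= SPIKE_RATIO * baseline

# Example: a phrase that hums along quietly, then jumps.
for hour, count in enumerate([10, 12, 9, 11, 400]):
    if observe("example narrative phrase", count):
        print(f"hour {hour}: possible emerging narrative ({count} mentions)")
```

In practice, a flag like this would only be a starting point for human analysts, who would then do the network-level analysis and attribution work described above.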

So for example, let’s take these for-profit disinformation networks. If they’re making a profit off of an organization, there could be an opportunity to seek damages. If they’re misappropriating intellectual property such as your name, your likeness, or trademarked, patented, or copyrighted material, there are legal options that organizations can take to seek recourse. And there may be an opportunity to do counter-messaging, and I’m not talking about fact-checking. In our experience, fact-checking, while helpful in creating a paper of record and in giving you something to point to in order to set the story straight, is not sufficient. And here’s why: the people who choose to believe a narrative aren’t necessarily going to change their minds and stop believing something totally salacious that confirms their biases about you just because you’re the one saying it isn’t true. So we’re talking about real counter-narrative building. And that can be done in an ethical way, by creating a greater understanding around an individual or an organization. So what we do is help through the entire process: we detect what’s happening, we assess whether or not it’s having an impact on an organization’s goals, and then we help mitigate any potential impacts, the idea being that we can solve a lot of problems before they become a real issue or challenge for an organization.

OL: Along with the expansion of the threat landscape, there has been an uptick in sustained information events, like the COVID pandemic, like Stop the Steal, and just a lot of chatter, not just disinformation, around the election. Are there other structural developments or factors to which you attribute the expansion of the threat landscape and the shift in focus from foreign disinformation to domestic?

LK: Yeah. I think one of the things about disinformation, and there was conversation around how this was gonna happen after 2016, is that it’s fast, cheap, and easy to do. If you know how to run a good marketing campaign, you can probably figure out how to run a disinformation campaign. And then you also look historically: open-source estimates say that the Russian effort to influence the 2016 election cost a million dollars, which is a rounding error to most large organizations or federal governments. So it wasn’t a question of if other organizations were going to start engaging in these tactics; it was a question of when. And I think what we’ve seen is exactly that proliferation: a growing number of threat actors causing an increased number of threats across the disinformation landscape.

So turning to the COVID-19 pandemic: these narratives have shifted over time, and they vary, again, based on the threat actor. One thing we saw, and again, this is all open-source, is Russian state media pushing race narratives targeting different communities and trying to pit them against each other. We saw RT put out content that was more focused on, you know, “what’s the big deal with calling it the China virus, we called it the Spanish flu,” for example. And RT typically targets one audience. Then we saw In the NOW, which typically targets younger audiences on Instagram, with videos saying things like, “if you are Asian, you will get attacked in New York City.” And it’s not to say that people haven’t experienced those sorts of attacks as a result of disinformation, going back to online-to-offline action. And I don’t wanna make it sound like it’s just the Russian government, because there are definitely other actors playing in this space.

But I think the pandemic has just provided more opportunities from that perspective. And then, similarly, there are the financial motivations. We’ll see some of these junk domains pushing false information about the pandemic, and they’re doing so because they’re likely to generate a profit; they have advertising revenue that they’re getting from clicks. So why they’re pushing it really does depend on who the actor is, but it’s so cheap, easy, and potentially lucrative to do, and it works from a geopolitical perspective. And if it’s cheap, why not try it? There’s not really a high cost for the people who are executing these sorts of campaigns.

OL: Not at all. And I would just echo what you said by pointing out how interesting it’s been that since 2016, our own elected leaders see value in exploiting the fissures in our society for their own political gain. And as you mentioned, this happens both on the right and on the left. I wanna shift gears and talk about some of the specific mitigations.

So one of the most significant impediments to legislating on disinformation, and engaging with it from a policy or regulatory perspective, is that moderation is often criticized as brushing up against the First Amendment. What is your thinking on how regulation can encourage effective moderation without raising those First Amendment concerns? And then, more broadly, what is your thinking on what the government’s role should be in addressing domestic disinformation?

LK: Well, I do appreciate and admire a lot of our colleagues in this space who are putting out really important research that can inform eventual legislation. Where I see the opportunity for more immediate action is actually through other means, such as our judicial system, and I’ll get to that in a second. When it comes to the First Amendment, that’s an argument that I just don’t really buy. We accept limits on speech in real life. I can’t yell fire in a movie theater; there will be consequences for me. We’ve accepted that there are some limits on speech in the real world.

And so I think we should reframe the question to ask: how can we make the online conversation more reflective of the conversations that we have offline? One of the things that’s become really clear in the pandemic, when we’ve all been forced inside and online to a degree, is that the online conversation is not there right now, but there’s no reason it can’t be. I think what we need to be talking about, too, is that content moderation doesn’t necessarily remove the threat altogether. For example, we saw a lot of accounts deplatformed for breaking the terms of service multiple times. And again, I think that’s a perfectly acceptable consequence. It’s kinda like the “no shirt, no shoes, no service” version of social media platforms: if you break the rules on multiple occasions, you’re not gonna be allowed back in. But where I think we need to be headed, and where I think we’re going to see the most progress in the immediate term, is enforcing some of the laws that are already on the books.

So for example, the Dominion lawsuit that’s ongoing right now is something that I’m paying attention to. I think that’s a potential avenue. We’ve also seen some successes when it comes to copyright infringement, because bad actors spinning up disinformation campaigns aren’t really paying attention to following the law. So I think we can potentially anticipate some actions taken there. The other thing that has been interesting to watch unfold is the Stop Hate for Profit sort of activism; I think that’s potentially having an impact as well. All that said, I don’t see this as a speech-only issue. There are very serious concerns, especially when we’re thinking about the world outside of the United States and the Western liberal order, where sometimes social media is how you evade censorship. These are really tough challenges, and there are no easy solutions. I think having these conversations and debates is hugely important, because if this were an easy fix, it would be solved by now. But I am confident we will continue to see progress, and to be fair, we have seen progress since 2016. We will continue to see solutions proposed and implemented.

OL: You had mentioned earlier that fact-checking is not necessarily the most effective way to clamp down on some of the bad and false information we see circulating online, and certainly not enough to interrupt the cementing of the alternate realities that disinformation is really intended to foster.

LK: Yeah. So I think fact-checking definitely plays a role, but I don’t think it’s something that can work alone. Labeling and that sort of thing are relatively new features, and how effective they are is a little bit outside of my purview; I look forward to someday reading a longitudinal study on how effective labels are for any given social media user. But I do think we need to start thinking about this challenge more holistically than we currently are, right? For example, it’s not just what’s happening on social media platforms; it’s also what’s happening on blogs. It’s how some of these disinformation networks are being financed: through advertising revenue, and through, frankly, people who are building influence and then turning around and selling it for profit. Those are all things that we can incorporate into the solution when we’re trying to figure out how to put out the fire that’s happening right now.

When we look toward long-term solutions and ask what we can do to really change the equation, so that disinformation is not as successful as it is now, there are all kinds of things we can do, but I keep coming back to education. If every single person knew what we know, and likely what everybody who’s listened to these podcasts knows, we’d be in a lot better shape. So how do we make it so that we’re not so special anymore? How do we make the general population more resilient to disinformation? A lot of that has to do with education. When I was in high school, we used to learn how to read a news article. We would learn how to read a newspaper: what page the article was on for relative importance, what paper we were reading, who the author was, when the piece was published, how to read the first two paragraphs, how to read for bias. Why aren’t we teaching that for a digitized world? I am still pretty old school and I like physical newspapers, so I get them, but I am like the rarest subscriber you will see for my age demographic. One of the things we need to consider is how to modernize our approach to consuming media. And I don’t wanna put it on the social media user to protect themselves from a sophisticated information operation. But again, when you think about it as a holistic solution, this can also be a piece of the puzzle that really makes the difference.

OL: So Lisa, my final closing question for you is: is there anything you think of as a smaller, lower-intensity effort that could nonetheless make a world of difference in moving the needle on this problem?

LK: Yeah, so I always come back to this: if every individual just stopped and thought before they hit send, or tweet, or post, or share, or whatever, it would go a long way. We are all potential threat vectors for furthering a disinformation or misinformation flow, because disinformation and misinformation target us and our biases. And so by not sharing it and spreading it, we’re not making the problem worse.

OL: Thank you so much, Lisa. I really enjoyed our conversation. Thanks for joining me.

LK: Thanks for having me.
