The Breakdown: evelyn douek on doctored media, platform response and responsibility

Do tech companies have a responsibility for false content beyond the impact of that content on the information environment on their sites?

Berkman Klein Center
Berkman Klein Center Collection
May 21, 2020


When it comes to doctored videos, images, and other manipulated media, what is so sticky about the question of takedowns — particularly when the media in question is political in nature?

Divergent responses by platforms to episodes of high-profile political media manipulation suggest that considerations beyond the fact of manipulation drive their decision making. Facebook, for instance, has argued that making a decision on a takedown would make the company an “arbiter of truth.” This raises questions about the obligations of tech companies to society and the level of responsibility companies should assume for the real-world impact of false content. Inherently, these questions concern the “public good” nature of platforms: Do tech companies have a responsibility for false content beyond the impact of that content on the information environment on their sites?

In this episode of The Breakdown, Berkman Klein staff fellow Oumou Ly is joined by evelyn douek, an affiliate at BKC and S.J.D. candidate at Harvard Law School, to discuss manipulated media, platform takedown policies, and the responsibility of platforms to society when doctored content stands to distort public perceptions of truth. Ly and douek are both participants in the Berkman Klein Center’s Assembly program, which currently focuses on topics relating to disinformation.

Watch their interview from the Berkman Klein Center’s The Breakdown.

This interview was lightly edited for clarity.

Oumou Ly (OL): An initial question that was the impetus for this topic: when it comes to doctored videos, images, and other manipulated media, what is so sticky about the question of takedowns, particularly when the media in question is political in nature?

My first question for you, evelyn, is: can you describe the impact of manipulated media both on the online information environment and in the real world? Just generally, what are your thoughts on the harm that this kind of content stands to pose?

evelyn douek (ed): There’s really two categories of harm I think, two buckets of harm when we’re talking about manipulated media. I don’t want to lose sight of the first category, which is the personal harms to privacy or dignitary interests that can be created through the co-optation of someone’s personal image or voice. That’s something that Danielle Citron has written really powerfully about. Upwards of 95% of deepfakes and manipulated media are still porn.

I don’t want to lose sight of that kind of harm, but obviously what we’re talking about today is more the societal impacts. That’s still a really, really important thing. But the question also does come up: is there anything really new here with these new technologies? Disinformation is as old as information; manipulated media is as old as media. Is there something particularly harmful about this new information environment and these new technologies, these hyper-realistic false depictions, that we need to be especially worried about? There’s some suggestion that there is, that we’re particularly predisposed to believe audio or video and that it might be harder to disprove something fake that’s been created from whole cloth rather than something that’s just been manipulated. It’s hard to prove a negative, that something didn’t happen, when you don’t have anything to compare it to.

On the other hand, this concern has been the same with every new technology, from television to radio to computer games: that there’s something particularly pernicious about it. I think [the] jury’s still out on that one. Those are the kinds of things that we need to be thinking about, and the potential societal harms that can come from this kind of manipulated media.

OL: More than that, what responsibility do platforms have to mitigate the real-world harm and not just the harm to the online information environment?

ed: That’s really the big question at the moment and the societal conversation that we’re having. It’s nice and simple. I think that, obviously, we are in a place where there’s sort of a developing consensus that platforms need to take more responsibility for the way they design their products and the effects that that has on society. Now, that’s an easy statement to make. What does that look like? That’s where I think it gets more difficult. I think we need to be a little bit careful in translating or drawing a line between content and real-world effects. Often, the causal links are not as clear as we might think that they are. It’s not necessarily as straightforward as that. Causal effects of speech are famously hard to unpack, and they always have been.

I think we do need to be careful in this moment of techlash that we don’t overreact to the perception of harm and create a cure that’s worse than the disease, because there are important speech interests here. I’m not a free speech absolutist by any means. I am very much up for living in that messy world where we acknowledge that speech can cause harm and we need to engage in that project. But I do think we also need to not lose sight of the free speech interests that are at play and the good that can come from social media platforms as well.

OL: Definitely. What you just said reminds me of something that has emerged over the last couple of years, certainly since the 2016 election: the idea that a platform can be an arbiter of truth. I think it was Mark Zuckerberg himself who coined that term. At the root of it is this idea that making a decision on whether a piece of content, false or not, should stay on a site in a way makes that decision-maker the decider of what’s true or not.

I wonder, first, how would you respond to that? Secondly, what do you think about that as a justification for allowing false content to remain online in some cases?

ed: Yeah. I do have some sympathy with the idea that these platforms shouldn’t be and don’t want to be arbiters of truth. It’s not a fun job and it’s a good line. I think that’s why they trot it out so often. Of course we don’t want Mark Zuckerberg or Jack Dorsey being the arbiters of truth. Come on, right?

OL: You mentioned that there’s a range of other tools that platforms have at their disposal aside from leaving up or taking down. Would you mind just describing what that slate of actions might look like?

ed: Yeah, I really think we need to get out of this leave up/take down paradigm, because platforms have so many more tools available at their disposal. They can do things like label content as having been fact-checked or, in the context of manipulated media, as having been manipulated. They can reduce the amount of circulation that a piece of content is getting, or how easy it is to share it, or downrank it in the newsfeed or the algorithmic feed. They can also make architectural and structural design choices that can have huge impacts on the information ecosystem.

An example here is WhatsApp, which in the context of the pandemic has reduced how easy it is to forward messages. So instead of being able to forward a message to multiple people at a time, you can only forward it once. This has reduced the circulation of certain kinds of content by 70%, which is an absolutely huge impact. That doesn’t involve being the arbiter of truth of the content in question, but it does drastically change the information environment. Those are the kinds of initiatives and tools that platforms have that I think we need to be talking about a lot more.

OL: Do you think there’s any use in platforms developing a unified protocol on takedowns at all?

ed: I think this is one of the most fascinating questions. I love this question and I’m obsessed with it and I don’t know the answer to it. When do we want uniform standards online and when do we want different marketplaces of ideas so to speak? I think you can see arguments for either. On the one hand, if you want standards, you want standards, and you want them uniformly across the information ecosystem.

There are other factors in favor: developing the tools to detect and identify manipulated media, for example, is potentially extremely expensive and might be something that only the largest platforms have the resources to be able to do. If they do do that, why shouldn’t small platforms also have the benefits of that technology and use the same tools?

But on the other hand, free speech scholars get nervous when you start talking about compelled uniformity in speech standards, and maybe if we don’t know where to draw the line, why not have lots of people try drawing it in different places and see what works out best? This is something that I’ve been calling the “laboratories of online governance” approach to this problem.

Ultimately, I actually hope that we can find a middle ground. Like a good lawyer, I’m somewhere in between: we could have shared resources and some sort of common set of standards, but with some flexibility for platforms to adapt those to their unique affordances and their unique environments.

OL: In your opinion, is manipulated media, particularly political media, a form of protected political speech or just advertising?

ed: I think if you ask 10 different free speech scholars that question, you’re going to get 20 different answers. It will also be highly contextual and depend on the particular content in question, which is, I think, what really gets to the nub of the problem here: there’s no blanket answer to that question. It requires highly contextual judgments, which are often really subjective and impossible to make at scale, particularly at the scale of the platforms.

Even if there were a straight-out answer to that question, and I realize I’m trying to avoid letting you nail me down on one, there’s still the question of how you apply it at scale. This is where I think I do have some sympathy with the platforms, because if you were to declare that manipulated media was disinformation in all cases, I think that would cover a surprising amount of political advertising on all sides. I don’t think that’s a partisan statement to make.

It’s also not clear, even if you did that, what the best way of dealing with it is. We’ve been talking about different measures here, and outright censorship may not be the best way of helping voters arrive at better answers and more information.

This conversation was part of the Berkman Klein Center’s new series, The Breakdown. The first season of the series is produced in collaboration with the Berkman Klein Center’s Assembly program, which for the 2019–2020 year is focusing on cybersecurity approaches to tackling disinformation.

For press inquiries, please email press@cyber.harvard.edu
