The Breakdown: Jonathan Zittrain reflects on 2019–2020 Assembly program, disinformation

Special two-part episode delves into BKC’s Assembly program, big challenges with disinformation

Berkman Klein Center
Berkman Klein Center Collection
Jul 30, 2020


Jonathan Zittrain joins Oumou Ly for the latest episode of The Breakdown. Photo: Lydia Rosenberg

This episode of The Breakdown, featuring Assembly Staff Fellow Oumou Ly in conversation with Professor Jonathan Zittrain, is shared in two parts. Part one delves into the Berkman Klein Center’s Assembly program, which focused on disinformation from a cybersecurity perspective for the 2019–2020 year, and some of the big challenges that surfaced from Assembly work this year. Part two explores disinformation in the context of trust and platforms, and looks ahead to the future of Assembly.

Watch Part One of The Breakdown with Jonathan Zittrain

Read the transcript, which has been lightly edited for clarity and brevity.

Oumou Ly (OL): Welcome to The Breakdown. My name is Oumou. I’m a staff fellow on the Berkman Klein Center Assembly: Disinformation Program. Our episode today features our very own Jonathan Zittrain. Jonathan is the George Bemis Professor of International Law at Harvard Law School. He’s also a Professor at the Harvard Kennedy School, a Professor of Computer Science at the School of Engineering and Applied Sciences, Director of the Law School Library, and Co-founder and Director of the Berkman Klein Center for Internet & Society. Thank you for joining us today, Jonathan.

Jonathan Zittrain (JZ): It’s my pleasure. Thank you, Oumou.

OL: So our Assembly Program is wrapping up for the 2019–2020 year, and you’ve joined us in your capacity as faculty advisor for the Assembly Program and also as Co-founder and Director of the Berkman Klein Center, where the Assembly Program is based.

Can you talk a little bit about yourself, a little bit about the Assembly Program and how it came to be?

JZ: At one point we had gotten word of one of our fellow universities receiving, on rather abrupt notice, a $15 million grant to improve the state of cybersecurity. We were certainly thrilled for our peers and then couldn’t help but brainstorm: gosh, if $15 million appeared unasked, which I won’t say has happened yet, what would we do with it, and how would we deploy it in a way that did justice to the confidence of whoever would be entrusting us with that much money?

What emerged from that discussion was a sense that in some ways the reach of academia is limited. But what if people who weren’t dispositionally inclined to sit down and write 250 manuscript pages could be brought together around these really hard problems that transcend traditional disciplinary boundaries?

What would it look like to gather people around hard problems and work on them? Our first efforts were on cybersecurity, and specifically on what we call the “going dark” problem as framed by law enforcement.

Our group, which included government officials, civil libertarians, academics, and human rights folks, had really good discussions about that and put out a report entitled Don’t Panic. In a way, it said to law enforcement, “Don’t panic,” and to the civil libertarians, “Maybe you should panic,” because there are a bunch of fronts on which to worry. That’s just an example of the sorts of things our group came together to do in that instance and in the intervening years. Most recently, as you know, we’ve taken up the problem of disinformation. How big is it, how bad is it, how would we measure it and know if it’s getting better or worse, and who, if anyone, would we trust with an intervention designed to do something about it?

I should say quickly that the Assembly Program, as it’s evolved, now has roughly three pillars, three tracks, one of which involves our students at the university. Graduate students looking for thesis topics across multiple departments, including law students looking for meaningful applied experiential work, lend their talents to projects, come together as a cohort to do independent work, and meet faculty from other departments that they normally wouldn’t have a chance to come across. That’s the Assembly student fellowship. We also have Assembly fellows, who are people from industry, nonprofits, NGOs, and outside academia.

It doesn’t mean they’re running a particular company, but they are the people within the engineering rooms of those companies trying to make a difference. The premise was that calling them together, having them spend some time on campus here full time, and then letting them scatter again can maybe yield something interesting. For several years now, our Assembly fellows have bonded as a group, done multiple projects, and presented those projects.

Then the third pillar is what we call the Assembly Forum. We convene senior officials and senior executives, or their representatives, and get them talking with one another in the kind of setting that they wouldn’t get in their own natural environment. These are people who might well be thinking about this kind of stuff all the time and are trying to see things from a new angle. Those are the three pieces of Assembly.

OL: Were there any issues that we discussed over the course of the year on which you experienced a perspective shift, had your mind changed? Or did you perhaps change someone else’s mind?

JZ: I certainly found my own thinking deepened and changed on some things. First off, you certainly can’t just assume that disinformation is an undifferentiated scourge. Some of the slicing and dicing that academics want to do, and that we found some of the companies are doing too, is an attempt to create measurements so they can counter it where they want to weigh in. It really makes a difference to figure out, “Well, all right, what are we defining as misinformation?” To some listeners this may be a new distinction (everybody’s new at one point): the difference between misinformation and disinformation.

OL: Absolutely.

JZ: Misinformation being, “Oh, you just got it wrong,” and disinformation being, “You know it’s wrong, and you’re trying to get other people to get it wrong,” with the latter being propaganda. Even that isn’t sufficient, because you would think that if some government cooks up a piece of disinformation in a lab and releases it, that is the disinformation. But if somebody repeats it too credulously because they really believe it themselves, they’re engaging in misinformation with the disinformation they got. It might well be that if you’re a platform conveying or amplifying that speech, you would react to it differently if you know the actor intends it versus the actor just being a credulous vehicle for it. So being more careful and precise, so that we can cut to action that more narrowly addresses the worst aspects of the problem, seems to me really useful; otherwise the problem feels so inchoate and overwhelming that it’s hard to even start scooping out the ocean with your spoons.

I think that in the particular instance of political mis- and disinformation, there are interesting questions. Suppose you have a platform like Facebook, or you have a government intelligence agency that’s charged with protecting the nation and looking for threats, and they see another government trying to salt falsehoods and make it appear that those falsehoods are coming from fellow Americans. Now what? You would think, well, at least you should disclose what you see. Certainly if I’m on Facebook and I see something that’s supposedly from a neighbor, and it turns out it’s from somebody thousands of miles away getting paid by their government to pull my leg, I’d want to know about that.

OL: Right.

JZ: But it’s very complicated. One of the hypotheticals we entertained as a group was: all right, suppose the US government can say with absolute certainty, “Here’s disinformation. This is coming from this other country. It’s targeting this political candidate.” Do you tell the candidate? If you tell the candidate, what do you tell them? Just, “By the way, another country has it in for you”? Should that be all? Do you say, “Here are the specific posts”? And then do you tell them, “By the way, it’s classified, so you can’t tell anyone else”?

If not, why did you tell them? What are they supposed to do with it? Those are real questions, and I’m not sure I have answers to them all. But we should be thinking about situations wherein some of us know what’s going on and are prepared to share it, or have an inkling and aren’t certain, and about what we would advise the disclosees to do. What’s the right course of action that advances the cause against disinformation? That seems to me a better articulated question than the one I had going in.

Watch Part Two of The Breakdown from the Berkman Klein Center.

OL: What concerns you most about the current state of play with regard to disinformation? Is it that the problems are so intractable that we find ourselves at a status quo that seems untenable? What really keeps you up at night?

JZ: What keeps me up at night is the absence of trust in any referee. Just to take an example from the foundations of a legal system: Two people have a dispute so intractable and important to them that they are willing to endure litigation… they’re ready to go into a courthouse and spend potentially years and tens of thousands of dollars trying to just get an answer from a jury or a judge and an appellate court.

OL: Yeah.

JZ: It would be nice to know that at the end of that, when somebody wins and somebody loses, although the loser will be disappointed, they don’t feel they were robbed. We don’t want them to wonder, “Why did I even have the faith to go into that courthouse?” It is valuable to have a legal system that can settle disputes without the system itself being rightly called into question in every case as to whether it is the problem.

The fact that we don’t have a significant majority of people trusting anything is a huge problem. There’s a necessity to create more trust and buy-in among us. My thought was to have political ads submitted to a platform and then assigned to an American high school class, which, under the guidance of their teacher and maybe a school librarian, works through whether the ad contains disinformation or misinformation. The class determines whether the ad remains on the platform, writes up its findings, and gets graded on how well it does. Maybe the decision favored by two out of three classes is the final decision. That’s an example of an idea that I acknowledge is clearly crazy. But when I think about it, I’m hard pressed to say why it’s worse than the status quo, which is clearly unacceptable to me.

OL: Do you think that this lack of trust in traditionally trusted institutions is the result of the disinformation situation that we’re in? Or do you think the sentiment preceded it, and this has just exacerbated it? I talked with Renee DiResta for our first episode of the series, and she said something interesting to me, which is that social media had this democratizing effect in terms of who we consider to be a credible source. At the same time, we’re experiencing disinformation that degrades the credibility of traditionally respected sources. Where do you think this has come from?

JZ: It’s likely a sad, mutually reinforcing cycle. An example is this tale about how 5G relates to COVID. Anybody could sit down and write a page of word salad that invokes a bunch of words having to do with physics to explain how the vibrations actually change the vibrations of the… It’s incoherent. But the fact that it could gain purchase raises the question: was all you needed to have your eyes encounter those words?

That’s partly the worry about deepfakes: you see something, you feel like your eyes aren’t lying, and you’re already inclined to accept it for various reasons, including just wanting to rationalize what you may already believe or want to have happen in the world.

OL: Our forum wrapped on May 12th, and our last two sessions were really heavily focused on COVID. Of course, it’s topical, as so much of what we’re seeing online is COVID related or COVID focused. In those last two sessions, platforms, researchers, and others in our group talked about the challenges they’ve encountered as they work to manage the sheer volume of disinformation surrounding this issue. Then just recently, sustained attention has really shifted to issues of racial inequity, injustice, and police brutality.

As you take stock of the challenges that are mounting in the world at large, and maybe within the counter-disinformation community as well, are there particular reforms that you hope to see?

JZ: Well, I think part of the throughline of the examples you’re talking about is disinformation that could contribute to violence or to harm, including self-harm in the health context.

OL: Yeah.

JZ: It makes the stakes real. When it’s disinformation that could lead to violence and conflict, and purveyors of disinformation are putting it out exactly for that purpose, it makes it awfully hard to say this is just too thorny a problem.

So while acknowledging all of the difficulties that come from figuring out who’s supposed to be the truth police here, having no police at all carries stakes that are very real and very immediate. When the denominator of people involved is in the billions and you know that a slight tweak to the platform could greatly change the views of tens of millions of people, there is no neutral position. The only question is whether you’re going to be stirring the pot or whether third parties, including state actors, will be stirring it.

OL: I completely agree with you. What is on tap for next year?

JZ: We’ve taken up other issues like cybersecurity and the ethics and governance of AI. This problem of disinformation calls for more than just one academic year’s worth of focused attention. There’s a lot of momentum, and I think enough collective feeling within the various groups that the status quo really isn’t working, and that it’s worth pressing on to solve it rather than just keeping on with some of the measures already in place. It’s really calling out for new thinking and new experiments.

I’m also mindful that a lot of the action here, both in understanding the dimensions of the problem through access to data and in implementing attempted solutions, is largely in private hands. Figuring out the right way to bridge those private companies that happen to shape speech so much with the public interest is a really important role that our group can play and model in the coming year.

So my sensibility is that we’ll certainly continue through the November US elections, and even beyond, with the relationships that have been forged among us, and we’ll see if we can bring more to the table as we go.

OL: Thanks so much for joining me today, Jonathan.

JZ: It’s my pleasure. Thank you, Oumou.
