Three new ideas for mitigating disinformation
Assembly discussion explores new ways to mitigate challenges of disinformation
From intentionally false information circulating about the current global pandemic to misleading claims about the upcoming U.S. presidential election and the U.S. census, concerns about disinformation are rampant.
The Berkman Klein Center’s Assembly: Disinformation program virtually convened researchers to share their new ideas for addressing disinformation challenges with thirty of the program’s students, fellows, advisors, and experts. The discussion was moderated by Professor Jonathan Zittrain, faculty director of BKC and leader of the Assembly program. Zittrain was joined in conversation by Jill Lepore, the David Woods Kemper ’41 Professor of American History at Harvard University; Marshall Van Alstyne, the Questrom Professor in Management at Boston University; and Joan Donovan, the Director of the Technology and Social Change Research Project at the Shorenstein Center, an affiliate of the Berkman Klein Center, and an advisor to the Assembly program.
Professor Lepore kicked off the expert sessions with an overview of mass communication in the 20th century and a call to learn from previous attempts to regulate misleading political advertising on radio and television. She described early uses of television in political advertising in the 1950s, efforts to warn of the dangers of misleading ads, and attempts to regulate them — to no avail.
“There were efforts to and calls for regulations for a pause, for a de-escalation of an arms race, and they failed to succeed time and time again,” Lepore said. “And I think there are lessons to be learned about why they failed in those moments that might help us think about the better solutions that are offered today, and what the obstacles are in getting them in place.”
Professor Van Alstyne also addressed concerns about the harms of misleading or false political advertising, approaching the problem from an information-economics perspective. He proposed an “honest ads guarantee,” a bond that puts the burden of truth on ad content creators rather than on platforms. In this scheme, political campaigns would put up bonds alongside their ads. If the validity of an ad were challenged by an opposing campaign, independent fact-checkers would step in. Depending on the outcome of the challenge, the bond is either returned to the ad’s author (if the ad is found true) or forfeited to the challenger (if the ad is found false).
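The bond mechanism described above can be sketched in code. The following is a minimal toy model, not an implementation Van Alstyne proposed: the class name, amounts, and method names are all illustrative assumptions, and the `Verdict` stands in for whatever ruling independent fact-checkers would issue.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple


class Verdict(Enum):
    """Hypothetical fact-checker ruling on a challenged ad."""
    TRUE = "true"
    FALSE = "false"


@dataclass
class AdBond:
    """Toy model of an 'honest ads guarantee' bond posted with a political ad."""
    author: str
    amount: float
    challenger: Optional[str] = None

    def challenge(self, challenger: str) -> None:
        """An opposing campaign challenges the ad, triggering fact-checking."""
        self.challenger = challenger

    def settle(self, verdict: Verdict) -> Tuple[str, float]:
        """Return (recipient, amount) once the challenge is resolved.

        Unchallenged bonds, and bonds whose ads are ruled true, go back
        to the author; bonds on ads ruled false go to the challenger.
        """
        if self.challenger is None or verdict is Verdict.TRUE:
            return (self.author, self.amount)
        return (self.challenger, self.amount)
```

For example, a campaign posting a $10,000 bond on an ad later ruled false would forfeit the bond to the challenging campaign: `AdBond("Campaign A", 10_000.0)` challenged by `"Campaign B"` settles as `("Campaign B", 10000.0)` under a `Verdict.FALSE` ruling. This captures Van Alstyne’s point that lying remains legal under the scheme; it just becomes more expensive.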
“The question is, are you willing to stand by the claim that you’re actually making?” Van Alstyne explained. “It’s also the case that if it’s simply a monetary pledge, an honest ad guarantee, the politicians and democracies are perfectly free to lie if they wish. It just becomes more expensive.”
Jayshree Sarathy, a PhD student in computer science at Harvard and an Assembly student fellow, asked whether platform users should bear some of the burden for sharing false information. Van Alstyne explained that engagement is a central component of social media platforms. “One of the ways I would propose to address that is to apply friction to liars and not just to lies,” Van Alstyne said, suggesting, as a hypothetical example, that authors of false content could have their followers taken away. “Your reputation, as in society, can be slowly earned and quickly lost. And I think that’s an effective mechanism in addition to some of these other mechanisms.”
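The asymmetry Van Alstyne describes, reputation that is slowly earned and quickly lost, can be modeled with a small sketch. This is purely illustrative: the additive gain, the multiplicative penalty, and the class itself are assumptions chosen to exhibit the asymmetry, not parameters anyone at the event proposed.

```python
class Reputation:
    """Toy reputation score: small linear gains, sharp multiplicative losses."""

    def __init__(self, score: float = 0.0):
        self.score = score

    def accurate_post(self) -> None:
        """Each post later verified as accurate earns a small, fixed gain."""
        self.score += 1.0

    def false_post(self) -> None:
        """Each post flagged as false halves the accumulated score."""
        self.score *= 0.5
```

Under these assumed parameters, ten accurate posts build a score of 10.0, while a single false post immediately cuts it to 5.0, so recovering from one lie costs as much effort as five truthful posts. The same friction-to-liars idea could instead be applied to follower counts, as Van Alstyne's hypothetical suggests.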
Broadening the scope from advertising, Dr. Donovan emphasized the harm of unintentional strategic amplification of content by journalists and platforms and suggested that an influx of librarians could help to fix the current information infrastructure. Drawing on a recent paper published with danah boyd, Donovan explained that strategic amplification “shifts the burden of responsibility to the entire process of producing the news from reporting to distribution,” including journalists and news articles, “but also… we have to think about the role that platforms have come to play in distribution.”
Donovan provided a brief overview of the ways journalists are trained to report responsibly and highlighted that “platform companies though don’t have this set of ethics; they don’t have a set of best or better practices to rely on.” She suggested that “instead of optimizing for engagement as the metric of quality, platforms can define successful recommendations and healthy newsfeeds as those maximizing respect, dignity, and other productive social values.”
While Facebook has previously touted hiring content moderators as a solution to curb the spread of dis- and misinformation, Donovan suggested instead hiring librarians to help platforms and organizations navigate and restructure their information infrastructure. Lepore added, however, that in the 20th century journalists played a role similar to the one Donovan envisions for librarians, noting that platforms’ intervention in the news unraveled that work.
“We used to have 10,000 editors, members of editorial boards who made those decisions with respect to the idea of 10,000 librarians,” Lepore said. “There is an entire profession of people who have been trained, and whose job is to exercise exactly the kind of judgment that Joan is urging us to think about new ways to engineer. I think there’s a lot of expertise out there. And it’s the disdain on the part of Silicon Valley for kinds of knowledge that are exogenous to that culture that has got us to the place we’re in now.”
By convening students, professionals, and researchers from across disciplines, sectors, and backgrounds, such as through this virtual event, Assembly helps us better understand — and make progress on — the complex issues of disinformation.