We should be tackling disinformation

And there are better ways than criminalizing speech

Will Rinehart
The Benchmark
5 min read · Feb 6, 2020


Last week, presidential hopeful Senator Elizabeth Warren released a new plan meant to tackle voter disinformation campaigns online. The proposal includes laudable goals, like convening a summit of countries to enhance information sharing. It also makes critical missteps and misdiagnoses the problem of political disinformation. Platforms have been active in fighting disinformation, as both anecdotal and empirical evidence show. With this plan, Warren shows a serious disconnect from the reality on the ground and fails to recognize the difficulties in combating disinformation.

On her campaign website, Warren explains why she is making this push: “The safety of our democracy is more important than shareholder dividends and CEO salaries, and we need tech companies to behave accordingly. That’s why I’m calling on them to take real steps right now to fight disinformation.”

Warren wants to paint a simple world where disinformation is the sole determinant of a democracy’s quality. But disinformation is situated within broader trends of social media use, political engagement, political polarization, and politicians’ behavior, to name a few. Disinformation isn’t easily pinned down either. In a recent review of scholarly literature, a group of eight political scientists, sociologists, and economists pointed out that disinformation is a broad category “of information that one could encounter online that could possibly lead to misperceptions about the actual state of the world.” Because it is such an amorphous concept that is connected with other political trends, the authors conclude that “we do not currently fully understand all these factors or their relationships to each other.” Legislating when so little is known about the topic could spell disaster.

Warren’s plan includes eleven steps, but the most important parts of the plan would:

  • “Create civil and criminal penalties for knowingly disseminating false information about when and how to vote in U.S. elections;”
  • “Work with other platforms and government to share information and resources;”
  • “Take meaningful steps to alert users affected by disinformation campaigns;” and
  • “Share information about algorithms and allow users to opt-out of algorithmic amplification.”

Most reporting on her plan has homed in on the first of these points, which would establish civil and criminal penalties for knowingly sharing false information about voting times and places. This idea isn’t new. In fact, it already has a vehicle in the Senate, the Deceptive Practices and Voter Intimidation Prevention Act of 2019, which Warren has not cosponsored.

What’s more, tech companies are already actively working to combat disinformation of this type. Both Google and Facebook ban ads that contain this kind of false information, while Twitter has barred all political ads on its site. In practice, then, it is unclear what such a ban would achieve that isn’t already being done.

Moreover, it is an open question whether these restrictions would even be constitutional, given the precedent set by United States v. Alvarez. In that case, the Supreme Court ruled on the constitutionality of the Stolen Valor Act, which made it a crime to falsely claim to have received military decorations or medals. The Court acknowledged that some categories of speech, like defamation and true threats, may be restricted because they present a grave and imminent threat, but held that false statements alone do not meet that standard. The Court also found that Congress drafted the Stolen Valor Act too broadly and that criminal punishment was a step too far. For these reasons, the Act was struck down. A bill doling out criminal and civil penalties for election disinformation would face similar constitutional hurdles.

Warren also misfires in suggesting that “Tech companies are trying to assure the public they have changed. But their efforts are no more than nibbles around the edges.” Much has changed since the 2016 election. For one, the major platforms are actively engaged in combating disinformation and giving users better tools to flag problems. Ensuring election integrity is a concern that goes all the way up to the C-suite of social media companies. Twitter went so far as to ban political ads altogether, a dramatic policy shift that has riled the industry. For its part, Google no longer allows political ads to be targeted at narrowly defined audiences.

Second, the major platforms now actively work with each other and the broader community to share information and resources about disinformation campaigns. Teams at Google, Facebook, and Twitter regularly communicate about these threats. Google’s Threat Analysis Group, for example, monitors this space and shares its findings with the intelligence community; the team even released a report last year detailing its efforts to thwart disinformation and other spam. Groups within Facebook and Twitter have been just as active in stopping disinformation. Wired magazine even praised Facebook for its work, saying the company has learned from the mistakes of 2016. Social media platforms are also working alongside nonprofits, government agencies, and academia to tackle disinformation.

Empirical research further contradicts Warren’s portrayal of what’s going on in the tech world. Just last year, economists Hunt Allcott, Matthew Gentzkow, and Chuan Yu measured the diffusion of content from 569 fake news websites and 9,540 fake news stories on Facebook and Twitter between January 2015 and July 2018. In the run-up to the 2016 election, users interacted with increasing amounts of false content. After revelations of election meddling became public, the companies clamped down, and interactions with false content fell sharply on Facebook but continued to rise on Twitter. The authors summarized the findings by noting that “the relative magnitude of the misinformation problem on Facebook has declined since its peak [in 2016].”

Research also undercuts Warren’s notion that platforms should constantly alert users to disinformation campaigns. In some cases, adding flags and warning labels had minimal effects on how users perceived the accuracy of an article; in other cases, there was no effect at all. The direction a flag pushes a user depends heavily on their demographics and political ideology. Pennycook and Rand, for example, have documented what they dubbed the “implied truth effect,” where articles without warnings were “seen as more accurate than in the control,” even when they were inaccurate and should have been flagged.

Still, Warren wants to go further by altering the fundamental programs at the heart of the platform economy: “Social media platforms should allow users to understand how algorithms affect their use of the platform, and to opt-out of algorithmic amplification.” Under the broadest interpretation, platforms would have to give users the option of a service without ranking algorithms at all. Granting users the ability to opt out of algorithms would effectively break Google’s search engine, Facebook’s News Feed, and Twitter’s stream. A narrower interpretation would require companies to realign their algorithms to optimize for something other than engagement. While the platforms have admitted that optimizing for engagement creates problems, no consensus has emerged on what should replace it.
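To make that distinction concrete, here is a deliberately simplified sketch of the two feed designs at issue. It assumes hypothetical fields (a timestamp and a model-predicted engagement score) and is not any platform’s actual ranking code, only an illustration of what “opting out of algorithmic amplification” might mean in practice.

```python
# Simplified illustration (not any platform's real code) contrasting an
# engagement-optimized feed with a chronological, "non-amplified" one.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float             # seconds since epoch (hypothetical field)
    predicted_engagement: float  # model score for likes/shares/replies (hypothetical field)

def ranked_feed(posts):
    """Engagement-optimized ordering: the kind of amplification Warren targets."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological_feed(posts):
    """One plausible 'opt-out' alternative: newest first, no engagement weighting."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("a", "calm local news", timestamp=100, predicted_engagement=0.2),
    Post("b", "outrage-bait rumor", timestamp=50, predicted_engagement=0.9),
]
print([p.text for p in ranked_feed(posts)])         # the rumor surfaces first
print([p.text for p in chronological_feed(posts)])  # the newest post surfaces first
```

Even this toy example shows why the proposal is harder than it sounds: removing the ranking step is straightforward, but deciding what a feed should optimize for instead is the unresolved question.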

An additional layer of regulation would force companies to shift their efforts away from finding innovative ways to deal with disinformation and toward compliance. Faced with steep criminal and civil penalties, platforms would respond by taking down more content and shutting down various avenues for political speech. Newspapers, political organizations, nonprofits, and consumers would see their outlets shrink. The end result of Warren’s plan wouldn’t be a reduction in disinformation, but a reduction in all kinds of political speech, including her own.

Published in The Benchmark, a publication by the Center for Growth and Opportunity at Utah State University.

Written by Will Rinehart, Senior Research Fellow at the Center for Growth and Opportunity | @WillRinehart