7 (Real) Ways to Reform Online Misinformation
Platforms, governments, courts and users all need to be held accountable for harmful online lies — and that’s not easy
By Paula Klein
Why is it so tough to correct online misinformation? At a recent MIT IDE seminar, IDE visiting scholar Marshall Van Alstyne, also Professor of Information Economics and department chair at Boston University, unpacked the challenges and offered ways to approach solutions. There is no one-and-done elixir.
Fake news and deliberate lies online are clearly dangerous threats to individuals and to democracy, he said. But attempts to fix the harm have largely failed because they go after the wrong focus. According to Van Alstyne, laws and online design tenets “have to move from the test of truth, to thinking about decision change and externality harms.”
In other words, we need to distinguish between falsehoods that aren’t harmful and those that cause decision errors.
It means that “everyone has the right to influence decisions that affect them up to the point they cause harm to others.”
Van Alstyne said it’s not the “fakeness” that matters as much as the implications and effects claims have on society and individuals. There’s also great peril in the distortion and misuse of true information — as when truth isn’t believed, or is used to mislead and manipulate people and their decisions, he said. For these reasons, he wants to put more onus on content creators to take responsibility for harms they cause.
Refocusing on Harm, Not Truth
If truth isn’t the core issue, then what is? Van Alstyne restates the problem this way: Misinformation on platforms causes external harm and market failures. The normal fix for market failure is government intervention, but that can conflict with the freedoms guaranteed by the First Amendment of the U.S. Constitution. As a result, attempts by courts to turn the problem over to the “market of ideas” don’t work because markets don’t self-correct their failures, he said.
Van Alstyne noted that U.S. courts have largely upheld news organizations’ rights to publish — even false information — as protected speech.
This precedent, coupled with the fact that regulation is generally opposed by platforms and private companies, means that combating these powerful forces requires a combination of efforts that shift the focus and make it unprofitable to amplify harmful speech. It also requires decentralized governance, so that no single party (government, platforms, or powerful individuals) decides the outcomes.
His discussion, Free Speech, Platforms and the Fake News Problem, described the tensions and the economic, social, political, legal and technical barriers that have to come down before real remedies can occur. [Read the full research paper here.]
Van Alstyne presented a seven-step approach to upholding truth that doesn’t interfere with free speech or lean toward over-regulation. But it’s not simple and no single item alone is a remedy. It requires buy-in and actions from platforms, governance from regulators and courts, and better understanding by platform designers as well as more informed decisions by consumers.
- Design platforms to be open and competitive. If platforms build in transparency and public “counter-speech,” it could offset the damage to those harmed.
- Platforms should uncover and reverse the amplification of liars, not just their lies. A liar’s online following shrinks when peers or impartial juries determine their claims to be false.
- Separate original user posts from their amplification, and tax platform amplification of illegal or harmful content that is not protected speech (e.g., medical or electoral misinformation). This reduces the amplification of harms while protecting the original post.
- Relax the total immunity for third-party content on platforms [Section 230]. Use sampling mechanisms to assign accountability and liability for harms. In this way, platforms are not responsible for every message they amplify, but for a sufficiently large and harmful sample.
- Courts should move from a test of truth to a test of decisions. Even though courts have allowed lying as protected speech, judgments should be based on false news’s impact on decisions in areas such as public health and violence.
- Governments should grant citizens “in situ” data rights, allowing them to import algorithms into the infrastructure where their data resides. These data rights give users the power to choose their own curation, creating a true marketplace of ideas.
- Platforms or governments can create an “honest ads” market where anyone has the option of guaranteeing their claims. Only people who genuinely offer high quality will guarantee their claims; liars don’t want to guarantee their lies and truth becomes cheaper to produce than fiction.
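The sampling idea above can be sketched in a few lines: audit a random sample of amplified messages, let juries label them harmful or not, and scale the observed harm rate up to the full stream to set liability. Everything here (the function name, the fine amount, and the 5% harm rate) is an illustrative assumption, not a mechanism specified in the paper.

```python
import random

def estimated_liability(messages, sample_size, fine_per_harmful,
                        rng=random.Random(0)):
    """Estimate platform liability from a random audit sample (sketch).

    `messages` is a list of booleans: True if a jury would judge that
    amplified message harmful. In practice, labels would come from
    impartial juries reviewing the sampled messages, not from flags.
    """
    sample = rng.sample(messages, sample_size)
    harm_rate = sum(sample) / sample_size      # fraction judged harmful
    est_harmful = harm_rate * len(messages)    # scale up to the full stream
    return est_harmful * fine_per_harmful

# Illustrative stream: 5% of 100,000 amplified messages are harmful.
stream = [True] * 5_000 + [False] * 95_000
print(estimated_liability(stream, sample_size=1_000, fine_per_harmful=0.10))
```

The point of the design is that the platform need only be audited on a sample, yet its expected fine still tracks the total volume of harmful amplification.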
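The economics behind the “honest ads” idea can be shown with a toy expected-value calculation: guaranteeing a claim means posting a bond that is forfeited if the claim is later judged false, so a guarantee is rational only when the credibility benefit exceeds the expected forfeit. The function and all parameter values below are hypothetical, chosen only to illustrate the separation between honest posters and liars.

```python
def guarantees_claim(p_false, bond, credibility_value):
    """Decide whether a poster guarantees a claim (illustrative model).

    `p_false` is the poster's own probability that the claim is judged
    false, `bond` is forfeited in that event, and `credibility_value`
    is the benefit of carrying the guarantee. All values are hypothetical.
    """
    expected_forfeit = p_false * bond
    return credibility_value > expected_forfeit

# An honest advertiser (2% chance of being judged false) guarantees:
print(guarantees_claim(p_false=0.02, bond=1000, credibility_value=100))  # True
# A liar (80% chance of being judged false) does not:
print(guarantees_claim(p_false=0.80, bond=1000, credibility_value=100))  # False
```

Because only honest posters find the guarantee worth its expected cost, guaranteed claims become a credible signal, and truth ends up cheaper to produce than fiction.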
Watch the seminar here, and access additional research papers:
- Free Speech, Platforms & The Fake News Problem
- Proposal: A Market for Truth to Address False Ads on Social Media
- Put Friction on Liars Not Just Their Lies (linkedin.com)
- The Price of Lies (Thinkers50)