We Can Fix It: Saving the Truth from the Internet
The shot fired in “Pizzagate” is the latest problem created by a broken internet media system.
Just a few weeks ago, we saw how broken it was.
Eric Tucker must have been ecstatic. His tweet to 42 followers was going viral. He posted a picture of buses near an anti-Trump rally in Austin with a caption suggesting the rally was not as organic as it looked. The post grew into a viral meme, shared hundreds of thousands of times and picked up by media outlets. It was the dream of every social media denizen.
There was only one problem. It was a lie. As his tweet got more attention, journalists tracked down the bus owners and learned they were hired for a technology conference in town. To his credit, Eric did what he could to correct the problem after being confronted by the evidence. He posted a retraction on Twitter and blogged about his decision, but it got hardly any attention — fewer than 30 retweets. Meanwhile the false story was retweeted, posted, and shared over 300,000 times. Think of that massive disparity and what it says about today’s internet media system.
A lie is 10,000 times more powerful than truth on the internet.
While Eric was conscientious, thousands of others are not. Fake news promoters are not interested in retracting a story, and the internet we built enables their success. Social media platforms have little accountability for what is true, so lies spread quickly and infect our public discourse. Retractions are ignored. Fact checkers are ineffectual. Headlines are purposefully misleading to lure us in like gullible fish: “clickbait.”
The problem goes deeper than fake news. The architecture and design of internet media is undermining our ability to know what is fact and what isn’t. We need to confront a more fundamental reality: The internet is killing truth.*
Our broken internet media threatens democracy and freedom. Without agreement on the facts, there is no dialogue or compromise. We lack a common understanding on which we can build community and national consensus. The recent election brought it into sharp focus with internet media feeding partisan divisions to an unprecedented fever pitch. The dangers go further than one election.
Terrorism and radicalism thrive because of our current media system. It is no coincidence that terrorism has grown and ISIS has taken root at the same time that internet media has grown. It is easy to create a cultish cocoon of lies with today’s media system, and terrorists exploit it to the hilt. As we are learning from the Comet Ping Pong situation, that cocoon extends to home-grown conspiracy theories.
Powerful states have found tools of oppression in the internet we designed. The Russian government spread false information in Ukraine and the United States during elections. When there is no accountability for the truth, information is manipulated by the powerful for their own ends. The risk will grow as authoritarians exploit the weaknesses we built into the system.
The state of media today is similar to the spam crisis of the late 1990s. The integrity of email was at risk because of a flood of spam, most of which was fraudulent, malicious, or questionable. We solved the spam problem with a combination of technology, changes in social incentives, and laws. Today spam is only an annoyance because we have anti-spam technology like Brightmail**, informal rules that email senders like MailChimp use to limit abuse, and laws that allow legal action against the worst offenders. Fixing the internet media problem will take a similar full-scale assault of new technology, social incentives, and laws.
We broke it. We can fix it.
We created this mess. We are the ones who architected and built the web and social media. We are the ones who share, post, and tweet stories that seem to be true, or we would really like to be true.
We are the ones who can fix it. We can create the technology, re-work social incentives, and encourage the laws to create the internet we want. Imagine a future where content creators and aggregators are accountable for the validity of what they create and promote. It is a world where the people and the sources that tell the truth are valued and prosper and those who deceive suffer ignominy and failure.
How will we do it? It will take innovations in technology, social incentives, and our laws. Let’s take a look at each one.
We can develop technology that allows content to be tagged with a truthfulness score. There are many ways this could be implemented, and smart people are already working on solutions. I go a layer deeper in “More on the Technology” at the end of this article.
Imagine how Eric’s story might be different if we are successful. As a novice user of Twitter, Eric would have a medium or low truthfulness score — like a first-time ridesharing driver or eBay seller. His tweet would have his score attached to it and so would the image he attached. You can imagine a small logo similar to what Snopes and other fact checking sites already provide. If someone wanted to investigate the story or photo, they could click through to a URL that provided validation, for instance, that it was posted on Nov. 9 by Eric Tucker on Twitter, and that the photo was taken on Nov. 9 in Austin, Texas.
Now, let’s say, despite Eric’s middling truthfulness score, his post and photo go viral. Eric wakes up one morning and realizes he must retract his story. Eric goes to a control panel and labels his story “false,” and this new system updates the truth score logo on the content to be “false” even if it’s been shared, published, and retweeted millions of times on different social media sites.
Let’s say fact-checking sites like Snopes or Politifact have already published their own fact checks of Eric’s post. This new capability would allow you, as a reader, to use Snopes to filter your own feed. In that case, you would have seen that story labeled as false even before Eric retracted it. Someone who cared about truth would be empowered, and the same tools would allow fact-checking sites to dampen the spread of fake news and clickbait headlines.
We have the technology to make this vision a reality, but it won’t be trivial to deploy. This is a change to the infrastructure of the internet and requires support by a critical mass of cameras, social media, messaging, and information sharing systems. Because it is so vital to the future of the internet, this fix shouldn’t be controlled by just one company; it needs to be a de facto internet standard.
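To make the scenario above concrete, here is a minimal sketch of how a truth-score record could travel with content. Everything here is hypothetical: the registry, the field names, and the 0.0–1.0 author score are illustrative assumptions, not an existing standard.

```python
import hashlib

def content_id(content: bytes) -> str:
    """Identify content by a cryptographic hash so that every copy of it,
    on any platform, resolves to the same truth-score record."""
    return hashlib.sha256(content).hexdigest()

class TruthRegistry:
    """Hypothetical shared registry mapping content hashes to truth records."""

    def __init__(self):
        self._records = {}

    def publish(self, content: bytes, author: str, author_score: float) -> str:
        # A novice author carries a middling score, like a first-time
        # rideshare driver or eBay seller.
        cid = content_id(content)
        self._records[cid] = {
            "author": author,
            "author_score": author_score,  # assumed 0.0-1.0 scale
            "label": "unverified",
        }
        return cid

    def retract(self, content: bytes):
        # One update here changes the label everywhere the content's hash
        # is looked up, no matter how many times it has been re-shared.
        self._records[content_id(content)]["label"] = "false"

    def lookup(self, content: bytes):
        return self._records.get(content_id(content))
```

Because the record is keyed by the content’s hash rather than by a URL, any platform that can hash a shared copy can display the current label, which is how a single retraction could propagate to millions of reposts.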
The incentives to publish false information are largely driven by competition for breaking news and for advertising and subscription dollars. Even the most brilliant technology solution won’t succeed unless we hack the social dynamics and economics that encourage false and dubious content. We need to incentivize the creators of ideas and the distributors of ideas to tell the truth and spread the truth. And we need to create penalties for spreading lies.
Change in social behavior starts with a passionate community of people who believe in the cause. We need people to stand up and say they are tired of this deeply flawed system and are ready to take action to create a new one. This community will be the vanguard of this new system, pushing for change in how internet media works.
We need a Truth Prize, an incentive for truthful aggregators and news publications. We can do this with a carrotmob — the inverse of a boycott. While boycotts punish sellers who behave badly, a carrotmob rewards good actors with more business. Let’s create a regular carrotmob award for the most truthful aggregators and news sources. The community can reward them with subscriptions, homepage placements, and follows on social media. It would be a race to the top for news outlets and media platforms that act with integrity — adopt a truthfulness score, eliminate false stories, and minimize clickbait.
Also, there is value in naming and shaming the worst actors, so we also need the Truth Rotten Tomato — awarded to a news outlet or other source that has the worst track record for creating or promoting fake news. It is a way to highlight those who lie and promote falsehoods in our public discourse.
We should consider a Truth Champion award of lots of money (more than $1M) to news creators who set the example for integrity. Like the Nobel Prize and the Academy Awards, this award would recognize the best and most honest individuals who live the highest standards and set the example for others.
Ultimately the power of the state will be required to help improve the integrity of content on the internet. We don’t want laws that limit speech, but we can insist on laws that extend libel to maliciously refusing to retract false stories. All speech is not protected; the classic example is falsely yelling “fire” in a crowded movie theater. It is an apt analogy for false information on the internet. You don’t have time to verify someone yelling “fire,” and today’s internet is like one huge movie theater. If you know a story you posted is false and repeat it or refuse to retract it out of malice, you should be held accountable.
We should, however, tread lightly with legal action. First let’s do what we can with technology and social action. Eventually we will need a legal backstop to fix internet media, but right now we need to prevent lawmakers from acting prematurely.
What happened to Eric Tucker was wrong. He never stood a chance to do the right thing and fix his error. It could happen to any of us. We’ve built a system that does harm to the truth and leaves us helpless to correct the lies.
It is our responsibility to our community and to the future users of the internet to fix this mess.
So it is time to lend a hand; time to raise a barn. We need to figure out how to create new standard methods of authentication and ranking. We need help pushing for changes at large internet companies. We need to think through the details of how the Truth Prize and Truth Rotten Tomato would be awarded. We’ll need a budget and funding to make the prizes a strong incentive. We need to promote these ideas and get more adoption. In short, if you found this message, we need you. If you agree that we need social action to make the internet safe for truth, promote this message and sign up to be part of this project.
When we look back on the history of internet media, this will be a critical phase. Is this just a milestone in the devolution of the internet into a morass of falsehood and balkanization, or is this the moment when we re-create an internet that lives up to the dream of what it could be?
As a first step, if you agree with these ideas and want to help make them reality, comment below and recruit friends and followers. Send me an email at firstname.lastname@example.org and let’s figure out how to do this.
Let’s fix it.
*By “truth” I mean a commonly accepted understanding of what is fact and what is fiction. Sometimes facts and truths change as new evidence comes in, and that is a good thing. With internet media, opinion and false information can feel like facts because of social endorsement — your friends and other trusted sources repeat those feel-like facts. We need a mechanism to arrive at a consensus of what is true and what is false.
**Brightmail was a leader in anti-spam. I started it in 1998 and sold it to Symantec in 2004. I cite it simply as a widespread technology solution I happen to know best. To be clear, I am not looking to start a company, just rally our community to a cause that is bigger than any one company.
More on the Technology
A lot of work has already been done to brainstorm ideas, develop solutions for particular platforms like Facebook and Google, and leverage AI, annotation, and detection algorithms to identify fraudulent information. The platforms need to improve their own systems, and better detection is surely needed. My goal here is simply to set forward draft requirements for technology that solves the problem of storing and distributing a truthfulness score across platforms, regardless of how that score is created. This can’t and shouldn’t be owned by one company. It is a start toward the right set of technology solutions, and we need a broader community to join in.
We need technology that reliably identifies the creator of content and attaches scores that indicate their truthfulness. These scores should be easily visible, the data that validates them available at a click, and the whole record permanently attached to the content (articles, photos, video, audio, and future data types). Two pieces of technology, I believe, are needed to make this a reality. For ease of discussion, I refer to them as “truthchain” and “truthrank.”
We need an authoritative, flexible, expandable, durable way to identify authorship, with voluntary anonymity. It can’t be something owned by one of the current platforms. This needs to be an independent, crowd-sourced, open protocol that isn’t under the control of any one company or government.
- Authoritative: we need a system that is trusted by those that read it and use it — obviously. More subtly, we need a way for the truthfulness of content to be labeled consistently and quickly — faster than humans — so we avoid shit shows like fake news spreading faster and wider than corrections.
- Durable: we need the identifier to survive being copied and pasted, linked, repeated, and retweeted
- Flexible: it needs to adapt to any type of media — web page, text, image, video, audio, VR, AR and whatever we invent next
- Expandable: We need a way to store various metadata that establishes the truthfulness of content or an author
- Voluntary Anonymity: so much important content has been revealed by anonymous sources; we need to preserve the author’s anonymity while still keeping the story trackable and accountable
Blockchain provides the inspiration, and perhaps the technology, for this piece of the solution. We can call this backbone technology, which authenticates content and carries metadata, “Truthchain.” There may be other ways to implement the system, such as a directory approach like DNS. Blockchain has the advantage of allowing chain-of-control tracking, but it’s more complex to implement than a DNS-inspired system. DRM might also be a solution, or an inspiration for one. We need to evaluate the tradeoffs and test them with implementations.
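To show what chain-of-control tracking buys us, here is a toy hash-chain sketch of the Truthchain idea, assuming a blockchain-style design. The entry structure, field names, and events are illustrative assumptions, not a proposed format; a real system would also need signatures and distributed consensus.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

class TruthChain:
    """Toy append-only hash chain: each entry commits to the previous one,
    so the history of a piece of content (publication, retraction) is
    tamper-evident."""

    def __init__(self):
        self.entries = []

    def _digest(self, payload: dict, prev: str) -> str:
        body = {"payload": payload, "prev": prev}
        # Canonical JSON so the same body always hashes the same way.
        return hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()

    def append(self, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        entry = {"payload": payload, "prev": prev,
                 "hash": self._digest(payload, prev)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash; any edit to an earlier entry breaks the chain.
        prev = GENESIS
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != self._digest(e["payload"], prev):
                return False
            prev = e["hash"]
        return True
```

Note that the author field can be a pseudonymous key rather than a real name, which is one way to satisfy the voluntary-anonymity requirement while keeping the record accountable.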
We need a way to rate the truthfulness of content and spread that score everywhere. There are already lots of organizations doing fact checking. The problem is not lack of fact checking, but that corrections don’t propagate as fast or as widely as the lies. We had a similar problem with spam in the 1990s. The big problem wasn’t identifying the spam, it was spreading anti-spam solutions faster than spam. Brightmail beat the problem by speeding up detection and getting anti-spam filtering integrated into ISPs and large email providers.
One entity should not become the arbiter of truth. We can load Truthrank scores as metadata into the Truthchain. Many different scores could be recorded, and consumers or distributors could decide which scores they want to follow. This system would allow consumers to set their own arbiters of valid facts. For example, if InfoWars wants to create its own fact-checking operation and compete with Politifact, it could.
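A minimal sketch of consumer-selected arbiters might look like this. The rater names, story IDs, and labels are hypothetical; the point is only that many raters can score the same content and each reader picks which rater filters their feed.

```python
# Hypothetical score table: content ID -> {rater name -> label}.
# Multiple independent raters can label the same story.
scores = {
    "story-123": {"SnopesLike": "false", "RaterB": "true"},
    "story-456": {"SnopesLike": "true"},
}

def filter_feed(feed, scores, trusted_rater):
    """Hide items the reader's chosen rater has labeled 'false'.

    Items the rater hasn't reviewed are shown, since the absence of a
    score is not a verdict.
    """
    visible = []
    for item in feed:
        label = scores.get(item, {}).get(trusted_rater)
        if label != "false":
            visible.append(item)
    return visible
```

A reader who trusts the hypothetical “SnopesLike” rater would see a different feed than one who trusts “RaterB,” without any central party deciding for everyone.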
References linked above:
“Design Solutions for Fake News.” Google Docs. Accessed November 28, 2016. https://docs.google.com/document/d/1OPghC4ra6QLhaHhW8QvPJRMKGEXT7KaZtG_7s5-UQrw/edit#
“Facebook Faces the Truth Part 1; the Drag on Drones; Machine Employees in IT.” Us13.campaign-archive2.com. Accessed December 05, 2016. http://us13.campaign-archive2.com/?u=3750fa7c1a237029e03b19abe&id=2b198e63c8&e=f853d680eb
“Fake News — An Annotated Perspective — 2016 12 03.” Google Docs. Accessed December 05, 2016. https://docs.google.com/presentation/d/1VzIR9YzK7Pa90x_wW1lMl9Pdc21uQosD1L3jLfOFRdw/edit#slide=id.p
Jackson, Jasper. “Fake News Clampdown: Google Gives €150,000 to Fact-checking Projects.” The Guardian. November 17, 2016. Accessed December 05, 2016. https://www.theguardian.com/media/2016/nov/17/fake-news-google-funding-fact-checking-us-election
Kang, Cecilia and Goldman, Adam. “In Washington Pizzeria Attack, Fake News Brought Real Guns.” The New York Times. December 5, 2016. Accessed December 5, 2016. http://www.nytimes.com/2016/12/05/business/media/comet-ping-pong-pizza-shooting-fake-news-consequences.html
Maheshwari, Sapna. “How Fake News Goes Viral: A Case Study.” The New York Times. November 20, 2016. Accessed November 28, 2016. http://www.nytimes.com/2016/11/20/business/media/how-fake-news-spreads.html?_r=1
Tucker, Eric. “My ‘Fake Protest’ Claims and America’s Angry Division.” November 11, 2016. Accessed December 03, 2016. https://blog.erictucker.com/2016/11/11/my-fake-protest-claims-and-americas-angry-division/
“Russian Propaganda Effort Helped Spread ‘fake News’ during Election, Experts Say.” The Washington Post. Accessed November 29, 2016. https://www.washingtonpost.com/business/economy/russian-propaganda-effort-helped-spread-fake-news-during-election-experts-say/2016/11/24/793903b6-8a40-4ca9-b712-716af66098fe_story.html