Solving the Problem of Fake News
Social media is perfectly suited to spreading attractive lies, but it doesn’t have to be
It’s easy to say that fake news is a problem. But most of what you read about “fixing it” is facile and limited. To really fix it, we have to understand the whole problem, recognize why fake news spreads, and know what “fixing it” would actually mean.
Social networks like Facebook thrive on interaction and sharing. For that to work, anyone has to be able to share anything. People share what catches their eye and reinforces their prejudices: the more outrageous, the better. Truth doesn’t enter into it. And as I’ll show, truth is a complicated thing to define.
Let me take you on a journey through the fake news ecosystem. I’ll show you the players and reveal why they behave the way they do. I’ll explain why the simple, obvious solutions don’t work. And I’ll finally show how to fix the problem, which is going to take some collaboration from the big Silicon Valley companies, turning their own users into part of the solution.
The rainbow of fake news
All of the following stories were popular at some point in the last few months. And they’re all at least partly fake.
Donald Trump chose Mexican drug fugitive “El Chapo” to head the Drug Enforcement Administration. (The New Yorker/Borowitz Report)
Yoko Ono had an affair with Hillary Clinton in the 70s. (World News Daily Report)
Global temperatures dropped, destroying the justification for climate change. (Breitbart.com)
Sarah Palin asked people to boycott Mall of America after it hired a black Santa Claus. (Politicops/Newslo)
Maryland passed a law giving its presidential electoral votes to the winner of the national popular vote, regardless of how the voters in the state voted. (NBC news/AP)
Here’s the problem: The “El Chapo” story is satire. The Yoko Ono story is made up. The global warming story is true, but omits crucial facts that change the narrative. The Mall of America story starts off with a truth (there really was a black Santa), but the Sarah Palin part is made up. And the Maryland law was news in 2007 when it happened, but it’s not news now — and it doesn’t apply unless a bunch of other states agree.
Which of these deserves the “fake” label? They’re all in a grey area. And they all misled masses of people in 2016.
The danger of the troll equation
The First Amendment guarantees us free speech. It protects parody and satire. And while you can theoretically prosecute people who make up fake stories for libel, the burden of proof is high, the costs are steep, and the sites, many of which are out of reach in other parts of the world, multiply like cockroaches.
If misleading and fake news is the match, social media is the gasoline. Social media sites are open to all, and they’re designed to make popular things spread. That’s a feature, not a flaw. And especially on sites like Tumblr, Twitter, and Reddit that don’t insist on people’s real names, trolls abound. Trolls are people who want to make trouble and don’t care about the truth.
That leads us to the fundamental equation of fake news:
Free speech + social media = a troll explosion
What does a troll explosion look like?
It’s sites like OccupyDemocrats.com and Drudge Report posting stories slanted blue or red, succeeding because those stories spread on Facebook.
It’s the flooding of lightly moderated Reddit by a Trump subgroup, who coordinate their efforts to move pro-Trump stories to the site’s home page, r/all.
It’s a proliferation of fake users of Twitter — one former employee described it as “a honeypot for a**holes” — who happily swarm and spread nastiness over anyone whose point of view opposes their own.
The problems have spread even to Google results. If you type “George Soros is” into the Google search box, it automatically suggests the most common search: “George Soros is dead.” (He isn’t.) A recent article in The Guardian explained how networks of misogynist sites, linking to each other, led to Google autocompleting “Are women” as “Are women evil,” with a featured search result implying that they are.
These false stories have real consequences. After reading a viral false story that Hillary Clinton was running a child abuse ring out of a pizza restaurant in Washington, D.C., a 28-year-old named Edgar M. Welch went to check it out. He brought his guns and fired one of them, an AR-15 rifle, before police apprehended him.
Did fake news on Facebook influence the presidential election? Facebook CEO Mark Zuckerberg said that idea was “crazy” because “voters make decisions based on their lived experience.” But this was a close election. In the swing state of Wisconsin, 47.0% of voters voted for Hillary Clinton, while 47.8% picked Donald Trump. Is it that hard to believe that fake anti-Clinton news duped 0.8% of voters into picking Trump, Gary Johnson, or Jill Stein — or just staying home — and flipped the state to Trump? It only takes a few people failing to pay close attention to the sources of what they read to make the difference.
A study of 7,800 middle school, high school, and college students by Stanford’s Graduate School of Education showed a disturbing lack of ability to distinguish between mainstream news sites and fringe, unsupported content. “The kinds of duties that used to be the responsibility of editors, of librarians now fall on the shoulders of anyone who uses a screen to become informed about the world,” one of the study’s authors, Sam Wineburg, told NPR. And based on how fast fake news is spreading, hordes of people using those screens can’t tell the difference.
The cure will take a concerted effort
Fake news succeeds because it’s profitable. As Sam Mallikarjunan, a strategist at Boston-based sales and marketing company HubSpot, explains, “Fake news is a better business model [than real news]. That’s the core problem.”
Any proposed solution has to strike at the heart of that profitability.
That’s why the easy solutions to the fake news problem won’t work. For example, Melissa Zimdars, an assistant professor of communications and media at Merrimack College, tried to help with a list of questionable sites. But the problem is spreading way too fast for one person’s list — and we can’t just depend on one person’s judgment.
A charity in the UK raised 50,000 euros to create a tool to make fact-checking as easy as spell-checking. Snopes.com reliably identifies made-up stories. These efforts help, but they’re outgunned — they can’t keep up with the flood of new falsehoods that the trolls are creating.
Facebook has proposed a solution. You can flag a story as “disputed,” and they’ll use unbiased “fact checkers” like Politifact to determine whether the story is true. This is a good idea, but it designates a set of third parties to audit the truth — and there is far more falsehood than these fact checkers can effectively police. It also ignores slanted news and satire.
The nation of Germany is even proposing to make spreading fake news punishable by a fine. But that’s going to be a problem unless the state defines “fake” — which is a tricky thing to trust a nation to do.
There are no magic bullets. Remember the variety of flavors and gradations of fake — from parody to slanted news to intentional falsehoods. No monolithic list can possibly hold back the multi-headed monster. Any static solution will get outsmarted; like virus creators, the fabricators of fake news will continually test the limits of any fake-news checker.
No, we need to build fake-news detectors into the fabric of the Net. And we need the giant Internet companies to all contribute to solving the problem.
Fake news hurts companies like Google, Facebook, Twitter, and every other site that allows social sharing. While they gain advertising revenue from these sites, they lose in reputation. Any legislative fix will be too clumsy and full of holes to stop the problem (did the CAN-SPAM Act stop email spam?).
These companies share an interest in making the Web more truthful, and fending off ham-handed regulation. So they should fund an ongoing, joint project to build a veracity standard for any link on the Net — and make it a utility that any site can use.
How would it work? Any participating site would include a button on every link. You, the user, could click on the button and mark the site as fake, parody, slanted, or outdated. The tools would then collate the votes and identify what category the link fits into. Sites with a slanted or misleading reputation (like Breitbart or World News Daily Report) would pop out quickly, but this method would also catch individual pages, like a parody blog post by one of the thousands of contributors on Huffington Post or Forbes.
Wouldn’t the troll explosion blow up this system as well? What’s to stop a bunch of trolls from labeling the Washington Post or CNN as fake? Well, if you’re a new user to this system — or you’re reporting mainstream sites like CBS News as fake — the system will treat your vote as worth next to nothing. If, on the other hand, you regularly report fake sites from all sides of the political spectrum, your vote will be worth more. The same types of social algorithms that allow Facebook to show liberal news to liberals and conservative news to conservatives will identify who’s a reliable flagger of fake sites, and who has an agenda.
Eventually, a large cadre of thoughtful voters will exert their influence over the rating system. It’s an automated version of the same system that now keeps Wikipedia relatively balanced — the best, most impartial editors get the most respect. With people rating content and the system rating people, you’ll approach a reliable way to mark content.
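The core of this reputation-weighted voting can be sketched in a few lines of Python. Everything here is an illustrative assumption — the label set, the starting weights, the reward schedule, the consensus threshold — not a description of any system the big companies have actually built:

```python
from collections import defaultdict

LABELS = {"fake", "parody", "slanted", "outdated", "legitimate"}

class VeracityTracker:
    """Hypothetical sketch: weight each user's flags by their track record."""

    def __init__(self):
        # New users start with a near-worthless weight of 0.1.
        self.reputation = defaultdict(lambda: 0.1)
        # url -> {label -> accumulated reputation weight}
        self.votes = defaultdict(lambda: defaultdict(float))

    def flag(self, user_id, url, label):
        """Record one user's label for a link, weighted by reputation."""
        if label not in LABELS:
            raise ValueError(f"unknown label: {label}")
        self.votes[url][label] += self.reputation[user_id]

    def consensus(self, url, threshold=1.0):
        """Return the winning label, or None if no label has enough weight."""
        tally = self.votes[url]
        if not tally:
            return None
        label, weight = max(tally.items(), key=lambda kv: kv[1])
        return label if weight >= threshold else None

    def reward(self, user_id, amount=0.2, cap=5.0):
        """Boost a user whose past flags matched the eventual consensus."""
        self.reputation[user_id] = min(cap, self.reputation[user_id] + amount)
```

With these made-up numbers, a lone new account flagging CNN as fake never clears the threshold, while a user with a long record of accurate flags can establish a consensus single-handedly — which is the whole point: people rate content, and the system rates people.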
This utility will have two great effects.
First, it will help Facebook, Google, Apple, and any other tool to mark content based on how credible it is. You won’t have to instantly distinguish between the real NBC and the knockoff fake news site nbc.com.co. Instead, the system will clue you in through color cues or icons. Eventually, these markers will be as much a part of the web as blue-underlined links are now.
And second, the same utility will be available to ad networks. Advertisers like Kellogg’s and Chrysler have found their ads distributed to sites like Breitbart and fake news sites, and been appalled. Right now any site can host ads from any ad network, and those ads might originate from anywhere. But with a marking system, responsible advertisers could just click a box to tell their ad networks not to show their ads on slanted or fake news sites.
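That advertiser check box amounts to a one-line filter on the ad network’s side. A minimal sketch, assuming the same hypothetical labels as the rating utility and a made-up default blocklist:

```python
# Hypothetical: labels come from the shared veracity utility described above.
BLOCKED_BY_DEFAULT = {"fake", "slanted"}

def should_serve_ad(site_label, advertiser_blocklist=BLOCKED_BY_DEFAULT):
    """Return True if an ad may run on a site carrying the given label."""
    return site_label not in advertiser_blocklist
```

An advertiser “checks the box” simply by choosing its own blocklist: a cereal brand might block only “fake,” while a more cautious one blocks “slanted” and “outdated” too.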
This will finally choke off the cash supply to pernicious liars on the Internet. And it will help support legit sites — newspapers, magazines, TV networks, and even bloggers — on the right side of truth. Because that’s where the ad dollars will end up.
Josh Bernoff is the author of “Writing Without Bullshit: Boost Your Career by Saying What You Mean.” He blogs every weekday at withoutbullshit.com.