Facebook can’t fix fake news

If a tree falls in the forest, and the New York Times runs a story on it, but Donald Trump tweets that there never was a tree, that the failing New York Times has it out for him, and Breitbart accuses the Times of a liberal bias, and fake news sites warn that the tree is a false flag, that the Democrats are running their child sex ring out of the stump, so journalists write a bunch of articles saying no, a tree fell, full stop, and neo-Nazis attack them on Twitter, did the tree actually fall? Sure, but by that point we’re all too scared, confused, angry or stressed out to care.

Welcome to media in 2016. Fake news metastasized across the internet in recent months, influencing the election and causing an armed man to show up at a D.C. pizzeria. Facebook has a cure. From Motherboard:

Facebook announced this [last] week it will make it easier for users to flag stories as fake news, and those stories will be sent to third-party fact-checkers for review… Organizations such as Politifact, the Washington Post’s Fact Checker, the Associated Press, Snopes and a host of fact-checking media outlets internationally will receive the flagged posts. If the posts do not meet their standards of accuracy — for example, if a claim has no sourcing, or if it is based on another organization’s report that lacks sourcing — Facebook will mark it as “fake” and will display a warning message when someone tries to share it.

This measure won’t work in the long term. Maybe a small group of Facebook users will benefit, one that cares enough about facts to only share real news, but not enough to do a quick Google search about what they’re reading. But people will wonder why, if these stories are fake, they keep popping up in news feeds. There’s a point at which, confronted by angry red tags warning them not to share, users throw up their hands at either Facebook or the media.

Facebook won’t stick with this experiment if users leave. It’s a public company that sells ads to generate profit for shareholders, not an editorial body. It deals in supply and demand, not public service. We should also be skeptical of creating a center of power that gets to say what is true and can shut down voices at will. Leaning on big tech companies has hurt media in the past.

Media itself has only limited power to stop fake news, though. Readers of the New York Times, Washington Post and company will have the benefit of thorough fact-checking, but fake news consumers don’t trust those outlets. Fake news has become a political issue. There’s evidence — both from how stories performed and from statements by people who make up news stories — that it helped Donald Trump win the election. Some false stories cast Hillary Clinton in a good light, but those haven’t been the center of attention. Since Clinton won’t be president next month, that makes sense. But seen from another perspective, all this scrutiny looks like an attempt to discredit conservative voices. Sites like Breitbart — a Trump-supporting website that has published false and misleading articles — spin fact-checking as bias, established media dig in deeper to discredit them, and readers on both sides shake their heads at the other. Nobody bridges the gap; institutional news sources can’t convince readers who tune them out.

So yes, at this moment, for users with a real interest in finding out what stories on Facebook are true, the company’s move will make a difference. But it won’t change any minds or stop the spread of lies online. Fake news outlets have an overt agenda, one that lines up with their readers’ viewpoint. As long as people refuse to scrutinize facts over emotion or political agenda, fake news will find an audience. No Facebook tool can stop that.