Facebook Pins a Scarlet Letter to Fake News

New tools and policies take on the News Feed’s worst offenders. But our truth problems are bigger than Facebook.

Steven Levy
Backchannel
11 min read · Dec 15, 2016


(David Ramos / Getty Images)

“We believe in giving people a voice — that’s part of our vision of the company and why we do what we do. But also that we have a responsibility to reduce the threat of fake news on Facebook and our platform.”

That’s how Adam Mosseri, Facebook’s VP of Product Management for News Feed, frames the appearance of new tools and policies that will begin to appear on the platform today. This is the company’s first significant response to the orgy of finger pointing that ensued after the recent election, when Facebook found itself tagged as a key cause of the result: Because its News Feed was a compliant host for phony pro-Trump items posing as legitimate news articles, it’s been accused of helping usher in an administration clinging to a loose association with the truth.

The intensity of the outcry took Facebook by surprise. Two days after the election, Mark Zuckerberg commented in a public forum that the charge that fake news on Facebook influenced the election was “a pretty crazy idea.”

He also said that Facebook’s next move would be guided by what people wanted from the social network: “We really believe in people, and you don’t generally go wrong when you trust that people understand what they care about and what’s important to them — and you build systems that reflect that.”

Judging from the reaction post-election, “the people” — or at least people who express themselves publicly — didn’t think that the concept of fake news helping to elect Trump was crazy at all. At the very least, they considered fake news a blight and blamed Facebook’s News Feed for circulating it so widely. Nuances such as whether seeing those articles actually affected the vote became irrelevant (has anyone come forward to say that, now that they have learned Hillary Clinton did not in fact molest children in a pizzeria, they would otherwise have pulled a different lever?), because it is an obvious embarrassment to the platform when stories concocted in a Macedonian basement charging a candidate with selling weapons to ISIS are more popular than the best efforts of top news organizations.

Facebook’s well-honed skill at encouraging sharing wasn’t intended to wreak havoc on elections, but the News Feed’s design proved an ideal playing field for fake newsters. On News Feed, a shared link from the New York Times looks just as substantial as one from “Ending the Fed” or the “Denver Guardian,” the latter of which is an authentic-sounding publication that doesn’t actually exist. Sometimes fake news items even spoof the domains of real news sites (with a .co instead of .com) to trick people into thinking that an actual newsroom produced these alt-right Black Mirror fantasies.

For the past few weeks, the pressure on Facebook to fix the problem has been tremendous, and its policymakers and engineers have been working feverishly to do something about it — regardless of whether Facebookers think fake news influenced the election. Like it or not, Facebook now owns the problem; in any case, it’s become pretty clear that if fake news keeps proliferating on the News Feed, many people will eventually be turned off by it—undercutting Facebook’s ultimate goal of encouraging people to engage with the valued content populating their News Feeds. According to Mosseri, fake news itself produces very little revenue for Facebook, but wielding a heavy hand over News Feed content is a real threat to both its popularity and, ultimately, its business. Facebook will have to thread the needle carefully, though: it does not want to become the ultimate judge of legitimate news, and it doesn’t want to stifle sharing and opining among its users.

That’s why, as Mosseri explains it, Facebook will be focusing its efforts on the most egregious offenders: fake news stories that are intentionally misleading—the ones that knowingly report falsified events, especially when the publisher is deceptively posing as an actual news organization. Mosseri describes these as “clear black-and-white hoaxes, the bottom of the barrel, worst of the worst part of the fake news.” In no way does the company want to get involved, Mosseri emphasizes, in matters of opinion, or determining what is or is not a legitimate news source.

According to Mosseri, Facebook is taking the following steps to target the “worst of the worst.”

Users can more easily flag potential fake news. Facebook is modifying its reporting tools to enable users to instantly report flagrantly fake news. Hitting a tab on the upper right of a post calls up a menu that, in addition to the previous choices that let users flag items as spam, hate speech or “not interesting,” now includes an option to report the story as fake news. (Previously there was an option to complain about a “false news story” but it took some clicking to get to it.)

A process to tag fake news with a warning label. Facebook has made arrangements with a network of fact-checking organizations. The organizations will vet stories that surface through user reports and through signals that Facebook’s algorithm sniffs out. If the organizations — which themselves are identified by the non-profit Poynter Institute as signatories of its International Fact Checking Code of Principles — dispute the claims in the story, Facebook will label it as “disputed,” and put a “flag” on it that links to the fact checker’s explanation. The fact checkers involved in this initial part of the program are Snopes, Politifact, Factcheck.org, and ABC News’ fact-checking initiative. Mosseri says those organizations are taking on this task as part of their mission, and Facebook isn’t paying them.

Facebook’s new Scarlet Letter: D for disputed.

I am calling this the Scarlet Letter approach, with a nod to Nathaniel Hawthorne. Instead of “A” for adultery as in the novel, Facebook is slapping a virtual “D” for disputed on those stories. It’s kind of ingenious. By outsourcing the actual decision making to third parties that are used to making such calls, Facebook avoids taking on the mantle of ultimate arbiter of truth. Instead of stoning these Hester Prynnes of the News Feed, Facebook lets those stories populate the stream, and even allows people to share them (when they do try to share those stories, a pop-up box will warn that the story is disputed)—but the Scarlet Letter goes with them. Facebook won’t let people make ads or promote those disputed stories.
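To make the mechanics concrete, here is a rough sketch, in Python, of how such a disputed-label flow might work. This is my own illustration, not Facebook’s code; every name in it (Story, apply_fact_checks, the stub standing in for Snopes) is invented.

```python
# Purely illustrative sketch of the "disputed" labeling flow; not Facebook's code.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    is_false: bool
    explanation_url: str = ""

@dataclass
class Story:
    url: str
    publisher: str
    disputed_by: list = field(default_factory=list)  # (checker name, explanation URL) pairs

    @property
    def disputed(self) -> bool:
        return bool(self.disputed_by)

def apply_fact_checks(story, checkers):
    """Have each fact-checking organization vet the story; record any disputes."""
    for name, review in checkers.items():
        verdict = review(story.url)
        if verdict.is_false:
            story.disputed_by.append((name, verdict.explanation_url))

def share_warning(story):
    """Sharing stays allowed, but a disputed story triggers a pop-up warning."""
    if story.disputed:
        names = ", ".join(name for name, _ in story.disputed_by)
        return "This story has been disputed by " + names + "."
    return None

def can_promote(story):
    """Disputed stories can't be turned into ads or promoted posts."""
    return not story.disputed

# Toy usage, with a stub standing in for Snopes, Politifact, et al.
hoax = Story(url="http://denverguardian.co/fake-story", publisher="Denver Guardian")
apply_fact_checks(hoax, {"Snopes": lambda url: Verdict(True, "https://example.org/ruling")})
print(share_warning(hoax))   # warning text shown in the pop-up
print(can_promote(hoax))     # False: no ads or boosts for disputed stories
```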

Draining profits from fake news. “As you look into it more, fake news turns out to be financially motivated, not ideologically motivated,” says Mosseri. So Facebook is taking measures to prevent fakesters from drawing clicks by posting crazy stories that seem like they’re coming from legitimate publications. It’s ending the ability to spoof domains, so you can no longer make it look like the post you cooked up in a Balkan coffee shop came from the Washington Post’s website. (Which, yes, people could do until now.) Also, Facebook will analyze publisher sites for misleading domain names, low follower counts, or other signs that they may not be established news organizations. Presumably those offenders will be banned from Facebook.
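Again purely as illustration, a spoof-domain check along these lines might look something like the snippet below. The outlet list and the matching rule are my own guesses; Facebook hasn’t published how its detection actually works.

```python
# Hypothetical heuristic for the spoof pattern described above: a real-looking
# domain with an extra suffix bolted on, e.g. ".com.co". Sample outlet list only.
KNOWN_OUTLETS = {"washingtonpost.com", "nytimes.com", "abcnews.go.com"}

def looks_like_spoof(domain: str) -> bool:
    domain = domain.lower().strip(".")
    for real in KNOWN_OUTLETS:
        if domain == real:
            return False                # the genuine outlet
        if domain.startswith(real + "."):
            return True                 # e.g. washingtonpost.com.co
    return False

print(looks_like_spoof("washingtonpost.com.co"))  # True
print(looks_like_spoof("washingtonpost.com"))     # False
```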

Showing less fake news on people’s feeds. Though this measure will not be visible like the Scarlet Letters, ultimately this may have the biggest impact on stopping the worst of fake news. When a typical user logs into Facebook, the company has a choice of well over 2,000 “stories” to display. The ones he or she sees are determined by a number of factors, known as signals. These include things like who is sharing news (if it’s a close friend, you’re more likely to see it), when a story is shared (fresh is better than old), and how viral the story is (if lots of people are engaging with the story, Facebook figures you’d like to see it, too). The News Feed algorithm is an epically complicated formula that takes into account all those signals and their weights and ranks each story accordingly.

Mosseri explained to me that Facebook will now rank fake news lower, starting with those “disputed” stories. Facebook will regard a Scarlet Letter as a negative signal. Since the typical user might view only 200 or so stories out of the possible 2,000 in a given day, a very strong negative signal has the potential to bury something shared by a friend, placing it so low in the stack that you’ll never see it. How much effect will this have? Mosseri says that “it’s a big signal,” meaning its weight will put a disputed story in front of far fewer people than would otherwise be exposed to it. But he specified that those stories won’t be totally buried — especially in cases where they go viral, their algorithmic scores will be high enough that plenty of people will still see them. With, of course, that Scarlet Letter sewn on.
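Here is a toy version of that weighted-signal ranking, just to show how one strong negative signal can sink a story without removing it. The signal names and weights are invented; only the idea that “disputed” acts as a big negative signal comes from Mosseri.

```python
# Invented signals and weights; the idea of "disputed" as a strong negative
# signal is from the article, everything else is illustrative.
WEIGHTS = {
    "close_friend_shared": 3.0,   # who shared it
    "freshness": 1.5,             # how recently it was shared
    "virality": 2.0,              # how much engagement it is drawing
    "disputed": -8.0,             # the Scarlet Letter, as a big negative signal
}

def rank_score(story):
    return sum(weight * story.get(signal, 0.0) for signal, weight in WEIGHTS.items())

candidates = [
    {"title": "Real story", "close_friend_shared": 1.0, "freshness": 0.9, "virality": 0.4},
    {"title": "Disputed hoax", "close_friend_shared": 1.0, "freshness": 0.9,
     "virality": 0.9, "disputed": 1.0},
]

# Rank the ~2,000 candidates; a user may only ever scroll past the top ~200,
# so a heavily penalized story can stay in the feed yet rarely be seen.
for story in sorted(candidates, key=rank_score, reverse=True):
    print(round(rank_score(story), 2), story["title"])
```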

Furthermore, Facebook is changing its ranking to stifle potential fake news that doesn’t go through the fact-checking process. It will look for behavior typical of fake news — such as articles that get read a lot, but not shared afterwards. (Facebook is calling this “informed sharing.”) That pattern might indicate that people felt tricked by the article, and thus that it is fake. Ultimately, I suspect that Facebook’s algorithm will include a lot of pattern recognition that will stifle circulation of obvious fake news — and perhaps unintentionally nail some non-fake stories as well. (Facebook contends that its algorithm should be sufficiently robust to prevent false positives.)
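A crude sketch of how such a read-but-rarely-shared signal might be computed follows; the field names and the one-percent cutoff are my assumptions, not Facebook’s actual thresholds.

```python
# Invented version of the read-but-rarely-shared signal; thresholds are made up.
def informed_sharing_penalty(clicks, shares_after_reading):
    """Return a demotion penalty when many people read a story but decline to share it."""
    if clicks < 500:                      # too little data to judge
        return 0.0
    share_rate = shares_after_reading / clicks
    return 1.0 if share_rate < 0.01 else 0.0

print(informed_sharing_penalty(clicks=10_000, shares_after_reading=20))   # 1.0 -> demote
print(informed_sharing_penalty(clicks=10_000, shares_after_reading=900))  # 0.0 -> leave alone
```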

How effective will these measures be? The numbers indicate that they might indeed make a difference in informing people who might otherwise be confusing this stuff with professionally reported stories. According to the terrific reporting of Buzzfeed’s Craig Silverman, a relatively small number of fake news stories rack up astronomical engagement, with people sharing those bogus items more than the most popular stories from the New York Times or Washington Post. Those are exactly the bottom-of-the-barrel offenders that Facebook is targeting here.

On the other hand, I can imagine some unintended results. Trolls will undoubtedly take advantage of Facebook’s tools to flag any stories they disagree with, attempting to create a denial-of-service onslaught for Snopes and the other fact checkers. (Presumably, Facebook’s algorithm can stop those efforts by recognizing that the sites they complain about have real news.) Or maybe the people who love to share those fake news stories will regard the Scarlet Letters as the creations of a liberal tech industry: Instead of shunning those stories, they may share them even more.

Some recipients of Scarlet Letters may object to the decisions of fact checkers. Mosseri says that if that happens, Facebook will have them appeal to the organization that disputed the story. He suggests such disputes will be rare because the bar for labeling will be high: “We are being very, very conservative, especially to start,” he says.

But Facebook may well have trouble limiting this process only to totally bogus fake news factories — the outright “hoaxes” it is targeting in this initial foray. The stories flagged by well-intentioned Facebook users will inevitably range beyond those from made-up publications to include intentionally untruthful posts from places with actual newsrooms, outlets that have not always been faithful adherents of fact-based journalism. (I’m not talking about misguided reporting, and certainly not about opinions, but posts that knowingly disregard or misstate facts to promote a false narrative.) Facebook says that the algorithm that passes stories on to the fact checkers is optimized for those bottom-of-the-barrel offenders. But it says that it’s possible that a post from an actual news organization, or some semblance of one, might get passed on. (Consider the flagrantly deceptive piece on climate change, exposed by the Weather Channel — surely Breitbart’s original story is worth some scarlet?) In any case, if an out-and-out lie from a publication like Breitbart gets flagged by multiple users with the “fake news” menu, wouldn’t it be incumbent on Facebook to pass it on to the fact checkers?

Things could get sticky. Those fact-checking organizations are used to calling out falsehoods with multiple Pinocchios or pants-on-fires. But if such a story gets a Scarlet Letter, perhaps well-funded ultra-conservatives may help the offending news outlet sue the fact-checkers, and Facebook itself. (Good news for Facebook: Peter Thiel is on its board, and probably won’t fund those cases!)

Lies are lies, whether circulated by teenagers in eastern Europe or Fox News or Breitbart. Or Donald Trump. Ultimately, even if it’s successful in scraping the bottom of the barrel, Facebook may wind up still mired in fake news issues, only a little higher up the media food chain. By including outside vetters in its process, Facebook will leave itself open to questions about why all news-based posts aren’t subject to fact checking.

I understand that this is a lot for Facebook to take on, especially since the core problem creating fake news is not Facebook’s News Feed, but a collection of other factors. These include the internet’s tendency to disaggregate news from its once-trusted sources, which allows people to lean into the tendency to read stuff they agree with. But perhaps the most alarming development originates from the executive office: a conscious, self-serving attack on truth itself, culminating in a president who had a surrogate proclaim, “There’s no such thing, unfortunately, anymore, of facts.”

It’s no surprise that the front line of the assault on truth has turned out to be the previously unvetted stream of the Facebook News Feed. Fake news isn’t Mark Zuckerberg’s fault, but it is his problem; as the world’s most popular arena for news, Facebook can’t stand by and let itself become the instrument that allows big lies and propaganda to trample reason and fact. So consider its announcement today the beginning of what will be a long and difficult process to promote an ecology of openly shared information while mitigating toxic untruths.

Let’s see how much the Scarlet Letters help.
