How will Facebook evaluate its measures to tackle fake news?

Adam Smith
The Economist Digital
4 min read · Jul 24, 2017
Credit: Sarah Marshall via Flickr

Is Facebook a media company? That’s not an interesting question.

What is Facebook doing to improve the news ecosystem? That’s a decent question, and Facebook’s answer is here, including measures to tackle fake news.

How will Facebook evaluate these measures? That’s a solid question—but too few people seem to be asking it.

So I’m asking it here. I’m not being facetious; I’m genuinely curious. I want the measures to work. So, just as we saw an open discussion about what measures Facebook could adopt to tackle fake news, I’d love to see a debate about how it could evaluate their effectiveness.

In fact, I’d expect Facebook to want to have this debate. Since January, Facebook has made a huge effort to open itself up to publishers and journalists. This initiative is welcome, smart and, incidentally, involves distributing free snacks to journalists.

As part of this initiative, editors and publishers have been invited to hackathons and talks. Facebook has developed specific features based on editors’ and publishers’ top priorities. The forthcoming launch of a paywall on editorial content within Facebook is just one exciting example (The Economist has been involved in developing this feature). Facebook has partnered with the Poynter Institute to develop a journalist training programme on best practices for reporters in social media. And then there’s the work on tackling fake news.

Facebook’s measures to tackle fake news have been active since December, but we haven’t heard a peep about how they will be evaluated. I have a few theories about why that might be. I’m interested in yours too, but here are mine:

  1. Facebook has some evaluation methods, but they currently show that the measures aren’t working. So it doesn’t want to talk about them until it has iterated and found a set of measures that do work. This would be sensible, but it’s annoying.
  2. Facebook has some evaluation methods, but it’s waiting to apply them until it has enough data to chomp through. This would be a sensible approach, but why not publish the methods themselves now?
  3. Facebook has some evaluation methods, but doesn’t think they’re robust enough yet. That’s fair enough. This is, after all, a complex problem and “fake news” is hard to define.
  4. Facebook doesn’t have any evaluation methods. This would be frustrating, and a big hole in the measures to tackle fake news.

In May the Guardian journalist @SamTLevin ran a superficial analysis of the fake news measures and found that:

articles formally debunked by Facebook’s fact-checking partners — including the Associated Press, Snopes, ABC News and PolitiFact — frequently remain on the site without the “disputed” tag warning users about the content. And when fake news stories do get branded as potentially false, the label often comes after the story has already gone viral and the damage has been done.

The headline writer for Sam’s piece about the programme wrote that “the evidence shows it’s not working”. I would say that Sam’s work, while a contribution to the debate, does not constitute strong enough “evidence” to say it’s “not working”. I don’t think we have enough evidence in the public domain to say anything about the measures yet. That’s why I’d like Facebook, or its fact-checking partners, to outline their evaluation method. Facebook responded to Sam’s findings by saying that “we have seen that a disputed flag does lead to a decrease in traffic and shares”, which suggests the company is doing some evaluating.
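
Facebook hasn’t said how it measures that decrease, but one plausible starting point is a simple before/after comparison: for each story that receives a “disputed” flag, compare its share rate in a window before the flag with the rate after. A minimal sketch of that idea (the function and numbers here are hypothetical, not Facebook’s actual pipeline):

```python
from statistics import mean

def flag_effect(shares_before, shares_after):
    """Percentage change in average daily shares after a "disputed" flag.

    shares_before / shares_after: daily share counts for equal-length
    windows either side of the moment the flag was applied. A negative
    result means sharing fell after the flag.
    """
    before, after = mean(shares_before), mean(shares_after)
    return 100 * (after - before) / before

# Toy numbers, invented for illustration: a story flagged on day 4.
print(flag_effect([9_000, 11_000, 10_000], [4_000, 2_500, 1_500]))  # about -73%
```

Even then, a raw before/after drop would be weak evidence on its own, because viral stories decay naturally; a credible evaluation would compare flagged stories against similar stories that were never flagged.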

So here are my further questions (add yours as a comment!):

  1. A post has to reach a certain number of users before it can be flagged enough times to trigger a check by the fact-checkers. How high is that threshold? Is it declining over time? If so, does that mean the fake news peddlers are being deterred?
  2. What share of the posts debunked by the fact-checkers remains on Facebook? Is this share declining? If so, does that mean the method is working? (There’s a toy sketch of this metric below.)
  3. Facebook’s aim is to dislodge the financial incentive to make fake news. As Craig Silverman has reported, fake news peddlers have admitted that because they rely on Facebook, their next trick will be to build sites that evade Facebook’s defences. Does Facebook have any investigative data on the businesses and people behind fake news, and how they may be moving on to other opportunities?
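
Question 2 is the easiest to make concrete. If Facebook or its fact-checking partners published even a small dataset, the metric would be trivial to compute. A sketch, with records invented purely for illustration:

```python
# Hypothetical monthly records, invented for illustration:
# (month, posts debunked by fact-checkers, posts still live on Facebook)
debunked = [
    ("2017-01", 120, 84),
    ("2017-03", 150, 90),
    ("2017-05", 140, 70),
]

for month, total, live in debunked:
    print(f"{month}: {live / total:.0%} of debunked posts still up")
```

A falling percentage over time would suggest the measures are biting; a flat or rising one would support the Guardian’s critique.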

Campbell Brown, head of news partnerships at Facebook, is the face of this programme. I’ve seen her talk about it at two events, on May 18th and July 18th. Both times I asked her for the evaluation methods and both times she didn’t answer; nor did she explain why she couldn’t. On July 18th she said: “It’s still early days on specific numbers. I can say how we’re approaching it” — and then went on to list the measures.

It’s a sensitive topic. But I think Facebook is being too tight-lipped. In June Facebook’s shareholders voted down a proposal for the company to be more transparent about fake news.

So, in the spirit of wanting Facebook’s measures to tackle fake news to succeed, I’ll ask again: how will Facebook evaluate them?

Adam Smith is deputy community editor at The Economist.
