A Word with Farhad Manjoo: How to Counter Fake News

One of our best technology writers, Farhad Manjoo, has noted the inherent challenges in any effort Facebook might undertake to prevent the spread of fake news. Below I post his observations (originally shared on Facebook) in full. Farhad’s thoughts are worth reading in their entirety. There are several bolded sections, which I will use as cues for my response after Farhad’s text.

Farhad is completely accurate about the Sisyphean task before us, but much too defeatist in his implicit assertion: because a problem cannot be addressed perfectly, it is not worth trying to fix at all. Yes, the Web will continue to be a cesspool of ridiculous and pathetic “news.” That does not mean we should make no effort whatsoever to prevent its spread.


FM: “Talk me through how Facebook could limit the spread of fake news. I don’t see an easy way.

“There are two big issues: 1) Identifying what’s fake news. 2) For each fake story identified, stopping its spread on Facebook.

“Both are problematic. How do you identify fake news at scale? You could either use a whitelist (Facebook trusts news from these sites and treats everything else as less trustworthy) or a blacklist (Facebook treats all stories as not fake unless they come from an ever-shifting list of fake sites).

“If you construct such lists at the domain level, you raise instant, difficult questions. What are the criteria for inclusion or exclusion? Membership in a professional lobbying group? Some empirical measure of accuracy? Who keeps the list? Is it public? Will it have a political or social or geographic bias? Would we be allowed to test that? How will disputes be mediated? What happens to a site on the list when it publishes something that turns out to be fake? (Consider how dozens of legitimate sites published the false report that CNN aired porn last night.)

“Facebook will have instantly inserted itself into a role that it, as a technology company, vehemently does not want to be in (and I’m not sure we, as news consumers, want it to be in either). It would either become every country’s de facto ministry of information, or it would hand that role off to some other governmental or non-governmental group. It would instantly become a political entity almost as much as a business.

“Or maybe you do it at the story level, deciding, based on a multitude of signals from users and from history with the site, the probability that any given story is fake. And then either banning people from posting the story or limiting its spread through News Feed.

“But this system is likely to be gamed. Fake news factories can easily set up lots of domains or use well-known online publishers like Medium or YouTube or Blogger.

“And signals from users are bound to be polarized and consequently untrustworthy, because the very power of fake news (the reason we are trying to limit it) is that we’re worried lots of people might actually believe it. So you’re really asking to build a computer that can detect “truth” better than lots of humans can.

“Then you get into even deeper philosophical questions about accuracy and epistemology. Let’s say you saw this headline: Independent Experts Prove Donald Trump Is A Billionaire. You click it to find a story by a news outlet you’ve never heard of before, by a reporter you’ve never heard of. The story says that Trump provided his tax returns and other financial documents to a group of independent accountants, and one of those unnamed accountants leaked to this reporter that Trump is worth $33 billion.

“Is that a fake news story? Yes, it is, but how do you teach a computer to spot it? Certainly lots of Trump backers would “trust” it; to them he is a billionaire because he says he’s a billionaire, and they’re not going to spend much time questioning the lack of evidence presented by this unknown source.

“You could look at lots of other shady signals (the fact that it’s from an unknown site, etc.) in order to downrank it. But there are also some signals that it could be accurate; after all, a group of independent experts at Forbes has ranked Trump as a billionaire many times running, and pretty much every media organization refers to Trump as a billionaire (even though, to my knowledge, he has not recently produced any very solid public evidence that he is one).

“If a computer says it’s “true” that Donald Trump is a billionaire because lots of places online suggest that he is, and I say it’s actually fake to say Donald Trump is a billionaire because he has not adequately proven he is, who’s right?

“Play out such questions across millions of stories posted by billions of users and you’ve got a real headache. And presumably it will all be happening in semisecret — because the moment Facebook reveals what it looks for to spot fake news, fake newsers would exploit the recipe, so we’d always have to be kept in the dark about what qualifies as factual information on the world’s largest communications platform.

“And that’s not even to touch the question of how Facebook would inhibit discussion of fake news. If FB’s algorithm is not letting me post the story that Trump has been proven to be a billionaire, couldn’t I just post the news myself: “Hey, Facebook is suppressing this story, but independent analysts have proven he’s a billionaire. This shows you how shady these guys are….”

“Maybe there’s an obvious thing I’m missing here. But I can’t see how FB’s policing fake news won’t be a heap of trouble for everyone involved.”


RESPONSE: To begin, I will note that this response borrows from some of the smart discussion that occurred on Facebook in response to Farhad’s post. Also, I bundle bolded comments on the same theme into unified sections below.

  1. Facebook as a technology company/Facebook as the world’s largest communications platform: Of course Facebook wants to position itself as simply a technology company, but the reality is that it has become a media source in its own right. The “trending topics” section, whether curated entirely by algorithms or by a combination of humans and computers, is a news source that guides users toward what to read and watch. I agree with Margaret Sullivan: Facebook needs an executive editor, and it should stop maintaining the fiction that it is not part of the media. The world’s largest communications platform is most definitely a media source, even if it does not currently employ journalists. If we accept this premise, then Facebook’s responsibility to address the spread of fake news becomes apparent. Any reputable media source endeavors to be accurate and issues corrections when it is found not to be. Facebook should not want to find itself counted as part of the garbage Web.
  2. This system is likely to be gamed/Anytime Facebook reveals its fake-news-spotting algorithm, fake news sites will adjust their practices accordingly: Yes. All true. And so what? The same is true of any human-constructed criteria. As soon as something is committed to writing, people will endeavor to skirt the rules and the intent of what is described. You could make the same argument about codifying a society’s laws into legal codes. This is why wealthy people hire expensive attorneys: to gum up the judicial system with spurious lawsuits and farcical interpretations of plain-language statutes. That does not mean we should live in a lawless society. Similarly, we should develop algorithms that spot fake news in full awareness that they will always be imperfect and that we are entering a continuous game of whack-a-mole (see the first sketch after this list).
  3. The difficulty of inclusion and exclusion criteria/Deeper philosophical questions about accuracy and epistemology: I can completely relate to Farhad’s concern here. There will be millions of obvious cases of fake news (such as the assertion that Pope Francis endorsed Donald Trump). These could be addressed by algorithm. There will be millions of other cases in which the truth is in the eye of the beholder. I say A, you say B, our friend down the hall says C. Who is right, assuming there even is a right answer? And who has the power to decide? This is where human judgment will be needed, taking into account context, history, and the sources used to support key claims. True diligence here would involve following supporting links back to their root sources and assessing how well those sources justify the claims made, as Tim O’Reilly points out. This would be the role of the executive editor Margaret Sullivan has proposed, and of the people hired by that editor. It is slow, painstaking work; there is no way such diligence can occur at the same speed as the posting of new “news.” So the best approach, as many people pointed out on Farhad’s post, would be a combination of computer algorithm and human curation (see the second sketch below). Both sides of that equation will be imperfect, as is the case for all human endeavors. Nonetheless, a concerted effort to thwart fake news should yield much better results than the laissez-faire state in which we find ourselves now. We should not let the perfect be the enemy of the better.
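
To make the whack-a-mole point in item 2 concrete, here is a minimal sketch, in Python, of the kind of domain blacklist Farhad describes. Everything in it, the domain names and the helper function alike, is hypothetical and illustrative; it is not Facebook’s system or anyone else’s:

```python
from urllib.parse import urlparse

# Hypothetical, ever-shifting blacklist of known fake-news domains.
# The moment the list (or the logic behind it) is public, a fake news
# factory can evade it by registering a fresh domain or by posting on
# a large shared platform like Medium or Blogger.
FAKE_DOMAINS = {
    "totally-real-news.example",
    "patriot-eagle-wire.example",
}

def is_blacklisted(story_url: str) -> bool:
    """Return True if the story's domain appears on the blacklist."""
    domain = urlparse(story_url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]  # treat www.foo.example the same as foo.example
    return domain in FAKE_DOMAINS

# The evasion is trivial: the same story on a freshly registered domain passes.
print(is_blacklisted("http://totally-real-news.example/trump-worth-33-billion"))    # True
print(is_blacklisted("http://totally-real-news-2.example/trump-worth-33-billion"))  # False
```

As with legal codes, the blacklist is not useless just because it can be evaded; it simply has to be maintained continuously.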
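
And to illustrate the hybrid approach in item 3, here is a second sketch, again with entirely made-up signals, weights, and thresholds, of how an algorithm might handle the obvious cases automatically while queuing the ambiguous middle for the human editors Sullivan envisions:

```python
from dataclasses import dataclass

@dataclass
class Story:
    url: str
    on_known_fake_list: bool    # e.g., the Pope-endorses-Trump class of story
    domain_age_days: int        # a very young domain is a weak fake signal
    source_links_resolve: bool  # do supporting links lead to real root sources?

def fake_score(story: Story) -> float:
    """Combine a few illustrative signals into a 0..1 'probably fake' score."""
    score = 0.0
    if story.on_known_fake_list:
        score += 0.6
    if story.domain_age_days < 30:
        score += 0.2
    if not story.source_links_resolve:
        score += 0.2
    return min(score, 1.0)

def triage(story: Story) -> str:
    """Route the obvious cases automatically; send the middle to humans."""
    s = fake_score(story)
    if s >= 0.8:
        return "downrank"      # obvious fakes: limit spread in News Feed
    if s >= 0.4:
        return "human_review"  # ambiguous: an editor follows links to root sources
    return "allow"

print(triage(Story("http://patriot-eagle-wire.example/pope-endorses-trump",
                   on_known_fake_list=True, domain_age_days=12,
                   source_links_resolve=False)))  # downrank
```

Both halves are imperfect, which is exactly the point of item 3: the thresholds will be gamed and the humans will be slow, but together they should beat doing nothing.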