Aram Zucker-Scharff
5 min read · Nov 2, 2016


It is highly unlikely that Facebook will become some sort of newsstand for ‘trustworthy’ publications. Even if they thought they could, they shouldn’t, and we shouldn’t want them to.

Here are some of the reasons, in my opinion, it wouldn’t happen:

Legal Danger

If Facebook decided to step in and determine which of the sources whose content it hosts are legitimate and which should be excluded from trending (setting aside the question of whether posts count as hosted content — they do, and Instant Articles is FB directly hosting content), it would endanger its standing under Section 230 of the CDA, which protects ISPs and content-hosting services like FB from liability in case the content they host is libelous, treasonous, or otherwise illegal. Facebook is already in a pretty gray area not well defined by law, and further exercising editorial control would endanger its legal standing. I feel fairly confident that these problems are part of the reason it dropped its human editors.

We can talk about how this works here, or what it means for something like Snapchat, but I'd argue — and I'm sure a lawyer would too — that there is a meaningful difference between showing an icon and controlling the actual content of a box seen by all users of the site. And why would Facebook chance taking on additional liability when it doesn't have to?

The web is open

Unless every publisher shut down their open feeds, closed their sites behind paywalls, and aggressively (and likely futilely) prosecuted aggregators under copyright law, the content would remain on the web, freely readable by Facebook's crawlers or via sites that aggregate it. There is no incentive for Facebook to pay for something that is free. In the unlikely event Facebook actually wanted to make these determinations, it could do so and take the content for free, like it does now. Even if those sites closed themselves off in every possible way, they can't really close in a way that makes it impossible for Facebook to get at the content. I know; I make tools that do that very thing. See: Apple News, Google News, even Nieman Lab, WaPo, and NYT's 'trending across the web' modules. We can all complain about how we 'let people get used to free,' but honestly that's how the web works, and it wouldn't work as a service if it didn't.

Machines and humans don’t share the same values; neither do lawyers and humans. Or even humans and humans.

Facebook wouldn't attempt to determine "trustworthy" media outlets because such a thing is impossible to determine in a mechanical way. Or a legal way. Or really even a reasonable way. We can say that NYT is trustworthy and Breitbart isn't, but how do we represent that in a way that (a) a machine making these decisions can understand and (b) a lawyer arguing that their site is trustworthy can be shot down? The answer is that you can't. We can count X stories that were incorrect, or Y sources that were discredited. But you can't teach that to a machine, especially when machine learning learns from humans and humans think otherwise. Even if none of that were true, what happens when you and Facebook disagree?

That's the smallest possible problem with 'determining trustworthiness.' What if two posts go trending: one a true statement on a politician's website, the other a false statement on a competing politician's website? Is preventing one from showing up as trending interfering with an election? How can Facebook justify its actions to supporters who believe a provably false claim to be true? What does it do to keep them from leaving? How deep into additional journalistic duties must it dive in order to justify its actions?

No. Any sense of 'trustworthy' for a publication will have to be determined algorithmically, on a per-article basis, and there is no reason Facebook can't do that using free scrapes of external sites, as it does now. To its credit, Facebook is indeed attempting to do that, according to its own reports.

Facebook is not now, nor should it ever be, in the business of fact-checking.

Deciding how trustworthy a publication might be falls within the domain of media functions. Taking on that function as part of producing content would push Facebook into dangerously direct competition with publishers, threatening the tenuous state of affairs between the company and our industry, and frankly it isn't in publishers' best interests. If Facebook is considering the trust level of publications, why not candidates? That's an Orwellian slope that neither I nor anyone else should have any interest in slipping down.

At best it can attempt to determine with machine learning if individual posts are satire or fraudulent and attempt to adjust accordingly.

Publishers should not become that dangerously dependent on Facebook

Editorial operations are already frighteningly dependent on Facebook's traffic to fund themselves; this would put even more control over the survival of any given publication into Facebook's hands. What if you're dependent on that subscription to survive? What if Facebook gives you a choice: kill a story, or stay on as a 'trustworthy' trending partner? Facebook has far more reach than a physical corner newsstand, and as such has much more power than we would want overseeing a revenue stream in that sense.

It isn’t neutral and few are more dependent on net neutrality than the new digital media

It becomes anti-competitive when Facebook's yea or nay on a publication could mean the difference between whether it lives or dies. Facebook doesn't want to open itself up to that liability, and publications don't want it to try. How does a new publication get its start if it is filled with trustworthy people but has no content yet to make Facebook understand it is trustworthy? If it can't have a chance of going trending, it becomes harder to get advertisers and harder to make a living as a publisher. Facebook determining the journalistic haves and have-nots through direct human choice challenges the principles of net neutrality and would threaten democracy.

What happens if a government decides that a publication isn’t ‘trustworthy’ and tells Facebook to make it ineligible for trending?

Facebook isn’t an editorial organization and shouldn’t be making editorial choices.

This would force Facebook to act like an editorial organization and, despite wild claims to the contrary among journalists, it isn't one. It has no central editorial control, and it doesn't create its own content. No one is sitting at Facebook HQ actually editing individual posts. Facebook isn't an editorial organization, and we don't want it to be. Once again, this would threaten the balance of power between the social network and media companies.


Aram Zucker-Scharff

Director at The Washington Post. Dev w/ @pressfwd. Technology solutions for journalism probs. Opinions & data my own. Prev: Salon.com, UPI, Altac@GMU.