Lies, Algorithms, and Journalists

Ed Bice · Meedan Updates · Nov 23, 2016

Photo by Ed Bice, 11.08.16, 11:25pm EST, CUNY J-School

The President of the United States, the CEO of Facebook, and half the Internet are being asked whether our post-truth-friendly media ecosystem has broken democracy. Craig Silverman and his team have made a pretty convincing case for the prosecution, showing that the top 20 fake news stories related to the US presidential election on Facebook (including the Pope's endorsement of Trump and Hillary's weapon sales to ISIS) outperformed the top 20 legitimate election news stories over the same period.

While we should temper the current hysteria with the observations that this is neither a new problem, nor one that always fits comfortably within the narrative of Macedonian pro-Trump content factories spun through the Facebook newsfeed, we must acknowledge that much of the world's attention has fallen on the fake news problem.

We at Meedan are glad for the attention brought to our corner of the internet media world. Our media fact-checking and verification platform, Check, just said hello to the world with the ProPublica-led Electionland project. As a founding member of the First Draft network, we are developing and delivering media verification and fact-checking trainings to a global set of journalists. We have open-sourced the Check code base to bring in contributions from the global community of journalists, designers, and engineers who are motivated to improve our information ecosystem before our civilization plummets off the flat edge of this post-truth world. So, yes, this is our world, and we do have a few things to say about fake news.

Here is the provocative lede: We will fail if we try to use algorithms alone to solve the fake news problem.

This failure will rest neither in the complexity of the algorithms needed nor in the ever-evolving, always negotiable, often multilingual telling of the thousand truths that surround any single event in the world, though both are daunting. Rather, it is this: an algorithmic approach will only deepen the true cause of the fake news problem, the increasing distance between media providers and their audiences. Put bluntly, an algorithm that censors a feed does not perform a journalistic function and does not contest a lie; it just buries a lie. Removing content from the web is a fool's errand. Censoring feeds is a platform-specific variation on this errand.

This distance is partly geographic and partly ideological, but most importantly it is structural; we need better paths for journalists and their audiences to contribute meaningfully to the web-wide task of verifying and fact-checking media. And we need these human contributions to in turn feed the machinery, the algorithms and interfaces, that composes our search-, feed-, and embed-delivered view of the world. But the internet is a read/write medium, you say. Yes, true, but Facebook's commenting and sharing silos this energy into echo chambers of consensus, while the open and poorly filtered fields of Twitter too often bury meaningful RTs and @replies in the vitriol of trolls and the noise of bots. The current affordances for human flagging and blocking are cumbersome (my colleague Tom Trewinnard has documented Facebook's broken, onerous flagging UX). More significantly, though, they are private acts, sanitizing and narrowing my newsfeed, and likely my friends' as well. While the platforms are invigorating media companies' bottom lines, Craig Silverman's work suggests that they are not serving the larger enterprise of journalism.

And this human solution, Mr. Zuckerberg will be glad to read, can neither scale sufficiently nor function with sufficient impartiality if it is housed inside the platforms. We already have an Orwellian problem with an overly centralized distribution layer; giving the platforms the added responsibility of determining which content is true would only deepen this imbalance. Instead, we need to look outside, to third parties and to the thousands of trained journalists whose essential skill is to sort fact from fiction, and, it might be added, whose profession is threatened by Newsfeed algorithms and Macedonian teenagers alike.

To be clear, I do feel that the social platforms must seriously ramp up their internal editorial capacity to review, adjudicate, and pull content that violates their terms of service, in all the languages they serve. The need for journalistic oversight in this area was put into stark relief in September of this year, when Facebook took down a post that included the iconic, Pulitzer Prize-winning 'napalm girl' photograph from the Vietnam War, backtracking on this decision only after considerable public outrage, including the Norwegian prime minister reposting the photo in protest.

Distributing the work of verifying and fact-checking content across a wide network, though, is only one half of the human solution. Just as challenging is the question of how a negative assessment of content should impact system behavior and affect user experience. If we are viewing this as a Mars Project, then designing the system behaviors and user experience that will influence the feed and the reader's experience is the landing system.

If suggesting that we distribute the work of verifying and fact-checking links and sources to a globally distributed community of trained journalists and contributors is the idea of this post, then suggesting that the platforms open verification and fact-check metadata write privileges to this community is the big, very big, ask.

If the platforms are serious about this challenge, and recognize that fake news is an issue across the internet, they should work to develop common media verification standards and APIs, and a shared visual language to expose the work of media verification and fact-checking professionals. This means developing a new flagging schema for verification, with copy and iconography that will travel with any share of a piece of contested content. While liking, loving, and the associated range of hearts and thumbs have understandably taken on strong branding overtones, the visual language we use to indicate the range of statuses from verified to contested to parody to debunked ought to be recognizable across platforms. We have worked with the smart folks at the First Draft coalition to develop a set of statuses that can be assigned to any link brought into the system. As my colleague Chris Blow notes: "[S]pecific design guidelines seem under-discussed in the academic and industry literature. For new types of verification projects designed for sharing on social media, the visuals of verification are especially important."
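To make the idea concrete, here is a minimal sketch of what shared verification metadata for a link might look like. The field names, status vocabulary, and `VerificationRecord` class are illustrative assumptions on my part, not the actual Check or First Draft schema; the statuses are simply drawn from the range mentioned above (verified, contested, parody, debunked).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shared status vocabulary that could travel with any
# share of a contested link, across platforms.
STATUSES = {"unstarted", "in_progress", "verified", "contested", "parody", "debunked"}

@dataclass
class VerificationRecord:
    """Illustrative metadata record for one fact-checked link."""
    url: str                                          # the link being checked
    status: str = "unstarted"                         # one of STATUSES
    checked_by: list = field(default_factory=list)    # contributing journalists/orgs
    updated_at: str = ""                              # ISO-8601 time of last change

    def set_status(self, status: str, reviewer: str) -> None:
        """Record a status change and who made it."""
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        self.checked_by.append(reviewer)
        self.updated_at = datetime.now(timezone.utc).isoformat()

# A third-party newsroom (hypothetical name) flags a story as contested.
record = VerificationRecord(url="https://example.com/story")
record.set_status("contested", reviewer="newsroom@example.org")
```

The point of the sketch is the write privilege: the status change comes from an outside journalist, not the platform, and the record is plain data that any feed or embed could render with a shared set of icons.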

Having taken this much of your Wednesday morning to assert that the platforms should not think they can solve a journalistic task with beautiful algorithms, but should instead take a leap of faith and empower select third parties, actual trained journalists, to help sort the wheat from the chaff, I will end this post with a teaser: we are releasing Check to select media partners, human rights investigation orgs, and fact-checking collaboratives over the coming months. And we are working with our partners in the First Draft network to refine and implement some of the ideas discussed in this post.

I strongly suspect that many of our current ideas for improving the global digital media ecosystem will seem naive or misguided this time next year. What is important, though, is that we work on these systems; that we bump into walls, break things, and generally employ the fashionable fail-fast, deploy-often approach to addressing media and source verification and fact-checking. And we should expect that this entire project, like a good algorithm, will need to adapt and evolve. The truth about fake news is both that it has been around since the telling of things began and that the internet has been particularly friendly soil for it since its earliest years (see Andy Carvin's real post on fake news from... er... 1997).

I hope that the media consumers and distributors of the world will benefit from this moment in the sun for fake news and media verification. If the philanthropists, journalists, designers, engineers, media titans, and platforms of the world do not rise to the challenge, and lies continue to dress themselves as facts in our new media ecosystem, then our societies, which are currently being asked to address massive human migrations, global climate change, and intractable wars in the Middle East, will be taking actions and making decisions with bad intel. And our journalists will be left writing A/B headlines for the Macedonian media titans of Facebook.

As John Dewey warned me exactly two weeks ago tonight as I left the CUNY newsroom, “We cannot have thinking without the facts.”

To find out more about Check, sign up here or send us an email at check@meedan.com


Ed Bice, Meedan Updates: working every day to make the web a bit wider and more worldly with colleagues @meedan