Check in 2018: What we’ve learned from seven years of “verifi-checking”

Early iterations of Check, then known as Checkdesk, being used by leading newspaper Al-Masry Al-Youm to cover the one-year anniversary of the 2011 #Jan25 protests.

In 2011, when we started working on the challenge of assessing the credibility of information being shared online, there was no agreed-upon nomenclature for the emergent practice we were trying to define, refine and support. Our early writings on the topic leaned on the established idea of the newsroom fact-checker: the person assigned to meticulously check every fact asserted in a newspaper.

Fergus Bell’s Electionland workstation. Image courtesy of Alastair Reid.

We decided to call the tool we were building “Checkdesk”, envisioned as the fact-checking desk needed for the age when news breaks on Twitter. While we toyed with, and are still quite charmed by, the idea of Checkdesk as a physical desk (which we envisioned would look something like Fergus Bell’s Electionland workstation, seen above), what we built remained digital, built around a simple idea described by our then-Director of Product George Weyman in March 2012:

Checkdesk is a desk where you check citizen media before publishing it.

Beautifully concise. George continues (emphasis mine):

It is not an attempt to automate truth. It is rather an attempt to facilitate collaborative verification. We are trying to tackle the very challenging problem of how to sort and verify the ever expanding output from citizen media through collaboration.

Seven years on, three things stand out from this early vision:

  1. Check has always been about collaboration
  2. Check has always been about helping people determine facts, supported by technology
  3. We talk about fact-checking, checking and verification almost interchangeably, though the primary verb we use for citizen media is “verifying”.

If we were redrafting this vision in 2018, there would be things we’d change and things we’d keep. Here is how I propose we’d frame our mission:

Check is a desk where you check claims, links and citizen media. It is not an attempt to automate truth, but rather an attempt to facilitate collaborative digital journalism through open source technology. We are trying to tackle the challenging problem of how to sort and verify the ever-expanding number of claims, links and media shared on social networks through collaboration.

Check is a desk where you check claims, links and citizen media.

The initial scope for Check was deliberately narrow, focusing on visual media and claims made by “citizen journalists”. In practice, however, from the outset Check has been used to investigate a wide range of content from sources spanning citizens, journalists and governments. For our journalist partners in Egypt at the time, checking official and government sources and media was every bit as important as checking citizen sources — something journalists in the US have had to get to grips with in new ways since mid-2016.

While verification and fact-checking have emerged as distinct but mission-aligned fields, they share a fundamental principle: “claims”, whether a citizen video posted to Twitter or a fact in a politician’s speech, require investigation and the addition of context that helps ascertain credibility. The path to that context may differ, and require different techniques and tools, but binding together the claim, its source (i.e. the link) and “credibility context” remains a shared primary goal. With Check, we want to support both the fact-checker reviewing politicians’ stump speeches and the reporter verifying eyewitness media from a breaking news event (and the citizen checking state media in Egypt, and the researchers tracking misinformation in the Philippines, and…).

It is not an attempt to automate truth, but rather an attempt to facilitate collaborative digital journalism through open source technology.

The promise of automated truth-telling — the idea that I can give a machine a link and have it tell me whether it’s “real” or “fake” — is seductive, but deceptively complex and fraught with technical and social challenges. While we are supportive of technologists working on this challenge, our goal remains to provide the tools and training that help people gather the information they need to determine whether a claim is accurate or not. We see the process of investigation as one that is educational, and — if carried out transparently — one which in and of itself helps another person understand the credibility associated with a claim.

Trainees in Alexandria, Egypt, learning how to verify citizen and eyewitness content with partner Mada Masr.

This is not to say, of course, that we don’t see elements of the verification and fact-checking process that can be automated and streamlined. We want to make it as easy as possible for our users to check content, so Check is built to enable integration with third-party tools for both analysis (like reverse image search) and workflow (like Slack and Keep).

CheckBot supports collaboration in Check directly from Slack.
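To make the workflow side of that integration concrete, here is a minimal sketch of how a status update might be pushed from a checking workflow into a Slack channel using a standard Slack incoming webhook. The helper function, field names and `SLACK_WEBHOOK_URL` variable are illustrative assumptions, not Check’s or CheckBot’s actual implementation.

```python
# Minimal sketch: notify a Slack channel when an item's verification status changes.
# Uses a standard Slack incoming webhook; the payload and helper are illustrative,
# not Check's actual integration.
import os

import requests

# Hypothetical configuration: an incoming webhook URL created in Slack.
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]


def notify_status_change(item_title: str, status: str, permalink: str) -> None:
    """Post a short status update (e.g. 'Verified', 'In Progress') to Slack."""
    message = f"*{item_title}* is now marked *{status}*\n{permalink}"
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()


if __name__ == "__main__":
    notify_status_change(
        item_title="Video: crowd at the march",
        status="Verified",
        permalink="https://example.com/check/items/123",  # placeholder link
    )
```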

We also believe that the way we structure the data created through the investigation process is extremely important, and we have co-led the formation of the Credibility Coalition, which is working to develop a shared data schema for credibility indicators. This will help users of tools like Check (and anyone else who wants to adopt the standards) to create training data for machine learning without adding any steps to their existing process, while at the same time enabling platforms to more easily surface “credible” content.
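As a rough illustration of what structured investigation data could look like, here is a minimal sketch of a verification record that binds a claim, its source link and the credibility indicators gathered during checking, serialized as JSON. The field names are hypothetical and simplified; they are not the Credibility Coalition schema itself.

```python
# Sketch of a structured verification record: the claim, its source link, and the
# credibility indicators gathered during the investigation, serialized as JSON so
# the same workflow output can double as machine-readable training data.
# Field names are illustrative, not the actual Credibility Coalition schema.
import json
from dataclasses import asdict, dataclass, field
from typing import List


@dataclass
class CredibilityIndicator:
    name: str    # e.g. "image_previously_published"
    value: str   # the finding itself
    method: str  # how it was established, e.g. "reverse image search"


@dataclass
class VerificationRecord:
    claim: str
    source_url: str
    status: str  # e.g. "verified", "false", "in_progress"
    indicators: List[CredibilityIndicator] = field(default_factory=list)


record = VerificationRecord(
    claim="Video shows flooding in the downtown area on 2018-01-12",
    source_url="https://twitter.com/example/status/123",
    status="in_progress",
    indicators=[
        CredibilityIndicator(
            name="image_previously_published",
            value="no earlier copies found",
            method="reverse image search",
        ),
    ],
)

# Serialize for storage, export, or reuse as machine-learning training data.
print(json.dumps(asdict(record), indent=2))
```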

We are trying to tackle the challenging problem of how to sort and verify the ever-expanding number of claims, links and media shared on social networks through collaboration.

If you start with the assumption that checking needs humans, as we do with Check, then the obvious subsequent challenge is one of scale and resources. Simply put: nobody has the resources to check everything, and even checking a relatively small amount of content can be labor-intensive. Our proposed solution to this problem has been, from the outset, to develop Check as a collaborative space. Our 2012 envisioning captures this ethos well:

Our hypothesis, if you like, is that by building a community of volunteer news hacks capable of jointly providing lots of pieces of small information, the journalist can better sort the accurate and important content from the dangerous, misleading or doctored content.

The Bellingcat community collaborating on Check to identify objects in Europol’s Stop Child Abuse campaign (January 2018).

We’ve seen this model embraced and pioneered by groups like Bellingcat, and over the past 18 months we have worked on large-scale projects that have also demonstrated the efficiencies of collaboration. First Draft’s CrossCheck, for example, demonstrated the impact of collaboration in reducing the duplication of checking work across 34 French newsrooms. Further, we believe that there need to be more structures, and more enabling infrastructure, for international, cross-border and cross-language journalistic collaborations. Time and again, we see misinformation and disinformation that has already been debunked in one country re-emerge in another context.

The process of working transparently, and having to ‘show your work’ to newsrooms that would otherwise be seen as competitors resulted in higher quality journalism, with participants explaining that they were able to hold each other to account. — The Impact of CrossCheck on Journalists & the Audience, First Draft

Our collaborative ethos extends beyond the pragmatic question of scale, however. Our hypothesis is that if we are to meaningfully address the crisis of trust facing global media, then we need to find and build opportunities for a deeper, richer and more journalistic engagement between journalists and their audiences. We live in a world where the only way most audiences can engage with content is click, like, share, comment — forms of engagement that have largely been outsourced to the social networks.

We believe that media’s communities (or “audiences”) are smart, and have value to add to the reporting process, and so we are seeking to facilitate new forms of engagement: check, geolocate, annotate, translate.

A fascinating aspect of working on projects like Electionland and Documenting Hate has been seeing how productive collaboration can extend beyond newsrooms to include platforms, universities, human rights organizations, think-tanks and civil society organizations. If we are to fix the elements that are broken in our information ecosystem, it is going to require a broad range of aligned stakeholders working on checking together, supported by infrastructure that facilitates this collaboration.


Want to try out Check? Sign up now for a free trial!