Jeremy

May 16

This is the first of a series of blog posts that will explore our idea of a system to peer-review the web.

Information crisis, misinformation, disinformation

The internet and its many perks have changed the way information is consumed. Before it, broadcasting content was reserved for a handful of parties around the world, few of which had global reach. Nowadays, it is trivial for anyone to maintain a blog, a YouTube channel, a website, or simply an active social media profile, potentially reaching anyone else in the world.

This new reach generates a potentially unlimited number of impressions, but it also facilitates the spread of misinformation and disinformation.

It is worth distinguishing the two. Misinformation generally refers to any occurrence of inaccurate information, with or without intent. Disinformation can be seen as a special case where there is an intent to deceive; such falsehoods are akin to propaganda, and it is not rare to see organized parties collude to spread a false rumor. In what follows, the former term will be used, as it is all-encompassing.

Misinformation has an adverse effect on democratic societies and the open information field as a whole: it makes it increasingly hard to tell reliable and unreliable sources apart, and it clouds the judgement of citizens when they need to make properly informed decisions.

Fact-checking outlets, filters, and effectiveness

Even setting aside the pervasive propaganda that has existed for a long time, this information crisis isn’t new. Snopes, the first online debunking site, was launched in 1994. Politifact, another well-known website that specializes in fact-checking claims made by U.S. politicians, dates back to 2007.

The interest in fighting misinformation was nonetheless renewed after more recent elections, in which accusations of information manipulation by third parties flourished. A number of initiatives have sprung up around the world, and many of them belong to the International Fact-Checking Network, a unit of the Poynter Institute for Media Studies that supports fact-checking groups worldwide that adhere to the IFCN’s code of principles.

Projects like Factmata aim to provide a space safe from misinformation by leveraging the collaborative work of verified experts and fact-checkers. This also appears to be the eventual goal of Facebook, which has been working on flagging and filtering fabricated claims made on its social platform.

While such steps are definitely in the right direction, it’s unclear whether these initiatives will have the desired effect. The proposed solutions are platform- and case-specific, and, depending on the platform used, they require Internet users to expend extra attention and effort to spot a dubious claim, open a new tab, and research whether the claim is verified. Filters, on the other hand, could do more harm than good by leading to censorship of content.

Annotating the web to fight misinformation

Rather than requiring users to spend extra effort migrating to new platforms or consulting fact-checking websites on a per-claim basis, what if we integrated a pervasive, ubiquitous layer of tags and annotations into all web content? Such a system would enhance the information consumption experience rather than reinvent it. For example, hovering over a specific claim in an article could surface more context, or a refutation corroborated by sources.

This approach isn’t new either. The annotated web dates back to the dawn of the web itself. In his historic proposal, Tim Berners-Lee already envisioned that one must also be able to annotate links, as well as nodes, privately. In 1999, a company called Third Voice set out to turn any word on any Web page into an instant link connecting relevant content from various sources. Users would install a plug-in to annotate the web and could see other users’ annotations anywhere. Many other projects have attempted similar solutions, notable ones being Dispute Finder and Fiskkit.

The quest for an annotated web led to a W3C standard for interoperable annotations on the one hand, and to Hypothes.is on the other. Hypothes.is is a non-profit organization that has developed what is probably the most advanced web annotation system to date, supporting private, group, and public annotations, with the stated goal of being an open, neutral, community-moderated system.

So is this it? Is Hypothes.is the ultimate tool to solve the information crisis? Let’s see if we can go a bit further.

Open collaboration vs. vandalism

Third Voice debuted in 1999, 12 years before Hypothes.is. Why did it not succeed at the time? While no single reason accounts for that, a major contributing factor was probably the enormous backlash it received for defacing websites, like a kind of “Internet graffiti.” Indeed, during its short-lived popularity, Third Voice was already caught in spammers’ crosshairs.

Could this very fate await Hypothes.is? Today, while it is an impressive piece of technology, it is still far from ubiquitous. What happens when it is? How many public annotations and comments will a single web page carry? Will a naive model of community moderation succeed in fighting spam? The question is even more topical now that Gab has launched Dissenter, another web extension that lets users add comments to any URL. The attention it received has been so great that some web pages have already gathered more than 4,000 comments.

This is a common issue across permissionless collaborative systems: allowing anyone to voice their opinion is one thing, but how can users sift through the noise without being subject to bias, censorship, or manipulation? As noted earlier, it’s not uncommon to see groups of Internet users tamper with community-based moderation systems by colluding to promote intentionally misleading content. At the same time, it is desirable to keep the collaborative aspect, and even to remove central controls, in order to avoid censorship and make the solution more likely to be universally accepted.

The appeal of peer-to-peer and distributed ledger technologies then becomes obvious. What if we could create a peer-to-peer, autonomous, self-moderated network for information verification?

If you are interested in discussing the idea, you can leave a comment below or join our Discord.

DeFacto Blog

DeFacto is a concept for a collaborative peer-review system for the web.

Written by Jeremy, distributed ledgers enthusiast.
