Ending the threat of misinformation online could be as easy as listing responding articles alongside the faulty claims.
Macaulay Culkin is dead. Beijing residents watch fake sunrises on giant screens. A ship full of diseased rats is floating towards the UK. Look at this giant squid! This fruit is 10,000 times stronger than chemotherapy!
The web is full of bullshit.
So much bullshit, in fact, that we have weekly debunking columns in major newspapers, like What was fake on the Internet this week, and whole websites like Snopes, Hoax-Slayer and TruthorFiction dedicated to debunking urban legends and misinformation. Buzzfeed dubbed 2014 the year of the viral debunk.
Despite all of these efforts, misinformation only appears to be growing in quantity and speed of dissemination.
This is a very serious problem.
When beliefs hurt us
It is hard to overstate how problematic misinformation is. It has the ability to cause quite extreme harm to both individuals and society in general. Information influences people. People believe what they see. And people act on what they believe.
So when 4chan made fake adverts which told people to microwave their iPhones, people actually microwaved their iPhones. That is real money being lost by real people.
Worse than losing a bit of money though, what about the prospect of losing your life?
Information like this image claiming that the Sour Sop fruit “can kill cancer” could lead people with treatable forms of cancer to avoid treatment until it is too late, preferring instead to eat a fruit which will actually do nothing to help them.
Misleading information costs lives.
It also influences lawmakers…
When The Daily Currant published a deadpan satirical piece about 37 people dying of marijuana overdoses in Colorado, a police chief actually cited the story in official testimony about the dangers of marijuana legalisation.
Or consider the numerous forms of misinformation online about vaccination. This article has been duplicated hundreds of times all over the web, and is simply a press release from another anti-vaccination website which polled its users about the health of their children, then compared the results of that self-selected biased poll against CDC health data. Completely unscientific. Completely unreliable. Completely wrong.
But thanks to the hundreds of articles like that one, and thanks in large part to the fraudulent and retracted “MMR causes autism” paper, we now have outbreaks of diseases like measles in countries where those diseases were supposed to have been eliminated.
Outbreaks of preventable diseases also cost us millions in medical expenses, lost work, lost productivity and the implementation of other precautionary measures to stop the diseases from spreading.
Misinformation harms all of us.
These are but a few simple demonstrations. The reality is far worse than any quick survey can convey. The harms of misinformation pervade every aspect of society: from small wastes of time in our day-to-day lives, to the risks of being scammed, to the costs of constant ‘debate’ in the media and political halls over well-established science. And so on...
If it were possible to wave a magic wand and remove all misinformation and associated false beliefs, the progress immediately made would be immense. While we don’t have the answer to everything, clearing out all of the bullshit would at least help us better understand where we actually need to work to improve our understanding.
We don’t have a magic wand — but we do have a centralised information source, and that system is programmable.
At the moment, all of the information we access through the Internet is unvetted. The information age is a double-edged sword, placing harmful, dangerous misinformation on an equal platform with factual, educational information, with no clear, easy way to tell which is which.
What if we could turn the Internet into a realistic version of that magic wand? What if we could drastically inhibit the power and spread of misinformation?
Find a source you trust. And other bad ideas…
The standard advice when dealing with potentially false information is to check a source you trust.
This might make sense if everyone were vigilant enough to always check information. But we aren't.
It might also make sense if everyone were equally able to identify which sources are trustworthy by some means other than “that which agrees with what I already believe”. We can’t.
In other words, it is terrible advice.
Misinformation usually works because it looks legitimate. It sounds reasonable. It often fits our preconceived notions or plays on a bias we already have. So we don’t see any reason to ‘check’ it.
And then, when we do check information, we go to ‘reliable’ websites which we have previously found to agree with us. That is, we buy right into the confirmation bias that drives nearly everything we do. This doesn't help anyone avoid being misinformed; it simply reassures people of what they already believe.
Or to put it another way, you are actually more likely to reinforce existing false beliefs than you are to correct them by checking your ‘reliable’ websites.
The standard advice for dealing with misinformation is actually part of the problem.
Maybe we can remove dangerous information?
No. No we can’t.
Not only is such a concept completely out of touch with everything the web stands for, it is also essentially impossible.
There are a few other serious problems too, but this option is really just a non-starter, so let's look at the next option instead, whose problems overlap with this one anyway.
Can we build a central authoritative source of reliable factual information and use that to correct all misinformation?
This option is at least possible, technically speaking. In reality, though, it falls down in the trust department. However it is decided what is and is not reliable factual information, people will disagree with it, and will conclude that it is the system which is broken, not their beliefs.
We can verify this phenomenon by simply looking at successful websites.
Wikipedia is the most successful attempt ever made at being the world’s fact-repository. Despite requiring references for all statements of fact, and attempting to avoid bias by being editable by anyone, Wikipedia still attracts critical analyses like this entry on Conservapedia, and this entire website.
Similarly, Snopes is probably the world’s best-known debunking website, attempting to provide factual assessments of urban legends, myths and hoaxes. It, too, is regularly dismissed as biased by people who disagree with its conclusions.
Regardless of the website, and regardless of the position it takes, you can be certain that any popular website which takes a position will be dismissed as biased by people who disagree with one or more of its conclusions. And those people will then tend to ignore everything else it says.
Knowing that, imagine if there was an attempt to systematically apply a correction system to the web. A system which works on every website, all the time, and corrects all errors…
No matter how perfect the corrections were (which is your first problem), no matter how cautiously the corrections were stated, the system would upset people who are being told they are wrong, and they would revolt.
They would find ways to uninstall or block it. They would complain about the “authoritarian dictator” system. They would demand the system be taken down.
‘They’ would be all of us, because no one likes a system which claims to know the truth of everything.
A centralised authoritarian provider of truth and corrections will never solve the problem of misinformation.
So what can we do?
Looking at the problem in summary:
- Misinformation is abundant on the Internet,
- it can be incredibly harmful to us as individuals and as a society, and
- the most common approaches to dealing with misinformation simply don’t work.
What can we do differently to fix this problem?
The key is to avoid the pitfalls of the above approaches. We need to challenge the misinformation without invoking authority or trying to censor the claim.
Interestingly, most of the webpages I have linked to throughout this article have been corrections, critiques, rebuttals and debunkings of the misinformation I am talking about. The web already has a sort of self-organised system of correcting misinformation; it is just failing to deliver the corrections at the most crucial moment: the exact place and time a real human encounters the misinformation.
So why don’t we make those critical responses accessible from the content being critiqued?
Take for example that Apple Wave hoax. There is no shortage of articles warning people about it. But what good are they when they aren't accessible from the source of the misinformation itself?
People don’t always think to stop and Google an awesome new idea before trying it out. Of course “they should”, but they don’t. So why not provide a little non-intrusive alert in the browser which can be clicked to show a list of critical articles, like the one on our left here?
Each of these articles is written by independent people, on a range of different websites, and is not controlled by any one entity or influence. Yet listing them alongside the hoax advert, one click away, makes them effective at stopping the acceptance and spread of misinformation like this.
With this approach we can take advantage of all of the hoax-busting websites and fact-checking services out there, all of which are still largely under-utilised because they require people to know about them and actively ‘check’ them. We can take their critical analyses to the sources of the falsified information and tell people about them right when they see the misinformation.
This system empowers the correction services significantly, delivering their findings to the public much more effectively.
How It Would Work
The list of rebuttals for a given page would need to be constructed by real humans. People are still required to identify whether or not one page is critical of another, and will continue to be required until we can create artificial intelligence which understands the intention of a human author.
Each list of rebuttals would be tied to a specific piece of content, usually a specific webpage. But webpage content is frequently cloned, and ideally all such clones would be collapsed into a single entity within the system, so that whenever a rebuttal is added to any one clone, all of the other clones reflect that addition.
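As a rough sketch of that data model (a toy in Python; the class and field names here are hypothetical, not rbutr's actual schema), clone-collapsing can be done by mapping every URL to a canonical entity id and attaching rebuttals to the entity rather than to the URL:

```python
class RebuttalIndex:
    """Toy claim-rebuttal store in which clones of a page share one entity."""

    def __init__(self):
        self.canonical = {}   # page URL -> canonical entity id
        self.rebuttals = {}   # canonical entity id -> list of rebuttal URLs

    def register_page(self, url):
        # A new URL starts out as its own canonical entity.
        self.canonical.setdefault(url, url)
        self.rebuttals.setdefault(self.canonical[url], [])

    def mark_clone(self, clone_url, original_url):
        # Collapse the clone into the original's entity, merging any
        # rebuttals that were already attached to the clone.
        self.register_page(original_url)
        self.register_page(clone_url)
        target = self.canonical[original_url]
        old = self.canonical[clone_url]
        if old != target:
            self.rebuttals[target].extend(self.rebuttals.pop(old, []))
            for url, entity in self.canonical.items():
                if entity == old:
                    self.canonical[url] = target

    def add_rebuttal(self, page_url, rebuttal_url):
        self.register_page(page_url)
        self.rebuttals[self.canonical[page_url]].append(rebuttal_url)

    def rebuttals_for(self, page_url):
        # A rebuttal added to any clone is visible from every clone.
        entity = self.canonical.get(page_url)
        return self.rebuttals.get(entity, [])
```

With this shape, adding a rebuttal to a mirror of a hoax page makes it visible from the original URL as well, which is exactly the behaviour described above.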
The system must also remain completely neutral. Rebutted, corrected or debunked content is only in the system because ‘someone’, ‘somewhere’ on the web has created a critical response to it. Just because a page has been critiqued does not necessarily mean it is wrong; it just means someone disagrees with it.
This does mean that valid content will find itself with rebuttals and contradictions attached to it. This is not a weakness in the system, but actually a strength, as it allows greater participation in the wider public discourse on the subject. Instead of pretending that dissent is non-existent and hoping that people won’t find out about it, allow the dissenters to identify their strongest critique, and then rebut in turn. Allow genuine open discourse, in this most structured of fashions, in a permanent system, to finally settle the issue.
The most common concern I have encountered with this concept is that it could take people from factually correct information to factually incorrect information. Rather than get into the details of exactly why this feature is a strength of the system, I will simply ask anyone concerned to go and read all of this article.
Ultimately we end up with a system which can be present in all browsers, always warning when specific content has been disputed elsewhere on the web, and providing access to the most compelling reason to not accept the rebutted content.
This would immediately inhibit the spread and acceptance of hoax material. It would also reduce acceptance of other false beliefs, but not necessarily by changing people’s minds.
Changing minds is hard…
When it comes to misinformation like the iPhone Wave hoax, corrections work great, because the original claim is so clearly a hoax. But what happens when the claims are not so clear-cut? What happens when people still want to believe the lie?
Most of the misinformation on the web probably falls into this category: claims which fit into an ideology or chosen world view, and which are not so easy to dislodge just by providing a ‘correction’. Numerous scientific studies have shown that simply showing people that their beliefs are incorrect doesn't change their minds. Worse yet, it often reinforces the false belief; this is called the backfire effect.
So what is the point of providing critical responses, counter arguments, rebuttals, debunkings and refutations to every claim on the web, if people are just going to ignore them anyway?
Well, the reality is that people have strong beliefs about very few things. In fact, most of us have no opinion about most things. We develop interests in a tiny set of activities and ideas, and slowly develop beliefs about those things over the course of our lives. On the periphery of those central beliefs we hold many opinions, and the further you move out from those few areas of knowledge, the less attached we are to our beliefs and opinions, until we are left with vague notions that things exist but hold basically no opinion on them at all.
For example, I really have no opinion at all on whether inkjet printers are better than laser printers. I have no belief on the matter of powerlifting squats vs Olympic squats. I really don’t know whether programmers are better off using tabs or spaces to organise their code. I have no idea about fashion design, materials manufacture, ecological management of natural environments, cigar culture, and so on.
Beyond the sphere of stuff we hold little to no opinion on, there is the infinite span of unknown unknowns. The stuff we don’t even know that we don’t know, and thus have absolutely zero opinion on at all.
It is impossible for me to list even a sample of those things for myself, by definition!
Between this enormous expanse of non-opinion and the narrow area of strong belief, the war against misinformation is won in the wide open spaces.
You don’t try to change minds. It may happen, but that isn’t where you fight. Instead, work to ensure that neutral minds are exposed to good information, good arguments and wide perspectives when they first encounter a concept, and you help prevent the formation of false beliefs in the first place.
You destroy misinformation by preventing it from finding acceptance in minds and forming false beliefs. When the false beliefs stop being formed, people will stop spreading misinformation.
The Memetic Immune System of the Internet
Memetics is the concept that ideas are analogous to genes, and can replicate by being spread to other minds, competing with other conflicting ideas, evolving in the process as the most ‘viable’ ideas spread more effectively and are thus ‘selected’.
In this vision, the Internet is the ultimate memetic gladiatorial arena, where competing ideas fight for acceptance in our minds. Unfortunately, truth isn’t necessarily a primary concern when it comes to acceptance of an idea, and thus some ideas can be very ‘successful’ even though they are harmful to the ‘host’.
As we have already seen, misinformation is abundant online, and regularly harmful to us individually and as a society. By providing easy access to corrections, nuanced discussion, and the wider context of every idea encountered, at or near the point of first encounter, false beliefs will be less able to take root in new minds. The spread of misinformation will be inhibited as more and more minds find themselves well informed from the outset — immunised against bullshit.
With this memetic immune system in place, the web will stop just being a source of information, and start being a source of good information. Of reliable information. A source of context, nuance and understanding.
The misinformation will still be there, but it will be effectively transparent. Or perhaps more accurately, it will be used as a tool to train minds in critical thinking methodology. It will be a useful tool for showing how fallacious arguments work and deceive. It will be the fodder upon which critical analysis can be applied.
It will keep the web alive with vibrant living truths, instead of pallid lifeless ones, learned by rote:
John Stuart Mill argued that silencing an opinion is “a peculiar evil.” If the opinion is right, we are robbed of the “opportunity of exchanging error for truth”; and if it’s wrong, we are deprived of a deeper understanding of the truth in its “collision with error.” If we know only our own side of the argument, we hardly know even that: it becomes stale, soon learned by rote, untested, a pallid and lifeless truth.
Carl Sagan (1934–1996)
The End of Dogma
It is hard to overstate the impact that widespread adoption of this memetic immune system will have.
Sure, people won’t all start changing their minds on firmly held beliefs overnight, but an incremental influence on belief formation and on the spread of misinformation, at the global scale of the web, multiplied by time, is massive.
Shifting public opinion by just a few percentage points can change the outcome of an election. Compound that sort of change over years of people educating themselves online, in every country on the planet.
Imagine a world where every piece of misinformation and manipulative propaganda always has its virtual hand held by a systematic correction or exposé.
Imagine a world where every contentious issue is forced into a centralised, meaningful and progressive debate, rather than forever treading water as every individual, in each successive generation, has the same conversation over and over again, forever repeating the same arguments with every new person met.
Imagine a world where no idea can hide from criticism, and no criticism can be stated without potential for contest.
Imagine a world where ideas actually have to be defensible, before they will be believed.
The generation that grows up knowing only this world will be free from dogma, and will apply critical thinking as casually as we form opinions about pop stars.
We’re already on the way…
A prototype of this concept has been running for over 3 years now. It has collected over 30,000 rebuttal connections and attracted over 20,000 users.
You can see the prototype in action by installing the rbutr browser extension in Chrome or Firefox. This prototype most closely resembles what I have described here: a sentinel which constantly provides access to correcting/rebutting content whenever it is available. But that is just one method of interacting with the claim-rebuttal database.
There is also a simple URL hack which can be used to check whether a page has known rebuttals. Simply type “rbutr.com/” at the beginning of any URL, and the same page will be reloaded in the rbutr frame. You can do this with any web page, on any device.
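Since the hack is just string concatenation, it can be sketched in a couple of lines (assuming, as described above, that the original URL is appended verbatim after the rbutr domain; the function name is mine, purely illustrative):

```python
def rbutr_frame_url(page_url: str) -> str:
    # Prepend "rbutr.com/" so the same page reloads inside the rbutr frame.
    return "http://rbutr.com/" + page_url

# rbutr_frame_url("http://example.com/article")
#   -> "http://rbutr.com/http://example.com/article"
```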
Other uses already developed include a Twitter reply widget, which searches Twitter for people sharing the rebutted content and then lets you reply to them with the rebutting material (seen in the lower left hand corner of the image here), and a website plugin which allows webmasters to specify regions of their content to be checked for rebutted links and tagged with a rbutr logo. This is mostly useful for forum owners and comment sections, for example.
We’ve also considered making the plugin check links on specific pages, like Facebook, Google and Twitter, in order to alert people to responses before they even click through to the page. It has become commonplace for people to read just the headline and the page preview snippet and reach conclusions from that alone, so the alert really needs to be provided alongside those previews. Ideally Facebook, Google, Twitter and other similar websites would access the database themselves and provide these alerts to all users, rather than just the tiny subset who have heard of rbutr and installed the plugin.
The ways in which such a database could be used are limitless, and we want to see it used extensively to fight the scourge of misinformation.
How we proceed from here though, is still being determined.
After 3 years of operating this project almost entirely out of our own pockets (no revenue, no institutional support since Startup Chile in 2012), and with development now coming to a halt, we are going to need help continuing, or finding a new way to deliver this concept to the web.
I don’t think that there is any one right way to proceed, but I am confident that this principle must be developed in an open and transparent manner so that its reach can be as wide as possible.
Whether we continue to expand and improve the rbutr system, or start again from scratch with an open source working group dedicated to making this happen, I don’t know. Whatever path we go down though, I know that we can’t do it alone.
In this article I have tried to convey how important I think the problem of misinformation is, and how I believe that this is the only approach which will work. I think the web needs this system in place in order to ensure a more positive future for everyone. If you agree at all with that premise, then we need your help.
Tell people about the project. Show them this article, or rbutr. We need programmers. We need technical people to help oversee and organise the standardisation which will allow it to be used by numerous other websites and services. We need contacts who can fund the project, and contacts who can bring this technology into the systems which need it (Facebook, Google, Firefox, Chrome, etc.).
Help us deal a significant blow against the biggest problem facing the world: Misinformation.
You can reach me at shane at rbutr, through our Facebook page, or, if you are able to help us continue developing rbutr as an open source system, visit our new subreddit and help us organise the effort.
Answer our questionnaire too, to help us understand what people want to do and are able to do to help. Thanks!