Google recently announced that it had demonstrated the ability to alter its search rankings based on how factual the content of webpages is. Amongst all of the fanfare around what so obviously seems like a great idea, I’ve not yet seen anyone else explain why this idea is destined to fail to have any significant impact in the war on misinformation, so I will: No one likes being dictated the truth by an impersonal authority figure.
First of all, this feels like a fantastic initiative. Google has become one of the primary sources of information in the world, and any effort to clean up the quality of information it presents must be a good thing. Especially when you consider the harm that factually incorrect information can inflict.
Misinformation can cost people their life savings. Misinformation can cause countries to go to war when they shouldn't. Misinformation can collapse global economies. Misinformation can cause outbreaks of disease once thought eradicated. Misinformation can cause the destruction of the environment…
I believe that misinformation is the biggest problem facing the world today, and we should be devoting a lot more resources to fighting it. But this isn't how that battle will be won.
Anyone with even the most casual interest in human beings will understand that individual humans think that their beliefs are true. If they didn't believe they were true, then why would they believe them?
So we are all walking around, quietly confident that everyone else, as nice as they are, and as hard as they try, is sadly struggling to figure out the nice collection of facts and answers we personally have managed to figure out over the course of our clever little lives.
Now imagine that Google implements this system in a way which actually alters their search results enough to push false information down and out of view.
We search for information one day on a subject we know something about, and suddenly all of the results are wrong. Every single one of them. Every page in the search results provides information which is factually incorrect and misleading to the public.
That wasn’t the case in the past when we searched on this subject. Sure, some of the results were wrong… but that’s because there are stupid people out there who haven’t figured this stuff out like we have… but now, Google seems to only have misinformation in it! What on earth is happening…?
There is only one answer. Google has gone downhill. It is unreliable now. Either incompetent, or simply part of the problem: attempting to control information for its own benefit. It is probably being paid by lobbyists. Or manipulated by secret government forces. Or lizardmen. Name your poison.
Whatever the true reason for this failure, Bing suddenly looks really good…
This is how this change will go down if Google actually implements this concept to any extent that alters search results. People won’t be updating their false beliefs just because they can’t find information in agreement with them on Google. Confirmation bias all but prevents them from doing that.
Instead, people will see Google as failing, and go elsewhere. And what is Bing going to do? Decline the new increase in market share, and implement a similar system? Unlikely. And if they did, then the displaced masses will find a new search engine (DuckDuckGo?). Or they will build their own search engine! Whatever it takes to feel like they are free to believe whatever they want to believe, because that is what humans do.
No one likes being told what the truth is by some impersonal, unchallengeable authority figure. And you can’t make misinformation go away just by hiding it out of sight. Censorship never goes down well on the web, and I can guarantee you that if content about the ‘moon landing hoax’, ‘9/11 truth’ and the ‘vaccine-autism link’ starts disappearing from Google’s results, there will be lots of complaints of censorship. And the illuminati new world order will probably be invoked to explain it too.
So basically Google won’t do it, not because it won’t have a positive impact on large portions of the population, but because it will cost them a large portion of their market share. And being a business, that doesn’t make sense.
It has already begun…
I don’t even need to predict this happening anymore. I delayed publishing this article, and in the time it has taken me to publish it, numerous articles have come out already complaining about the system. We have this Fox News regular saying that we need to “let the public decide what’s the truth”. We have the National Review with a headline of “Google to bias search engine based on ‘facts’” — not only do they refer to it as a bias, but also put the word facts in inverted commas, just in case you didn’t already doubt their ability to accurately determine what is and isn’t true.
And this is just the few I can be bothered posting...
A real solution
If you want to fight back against misinformation, you must embrace the fact that misinformation exists, and that people believe it. You have to actually bring it out into the sunlight, and let the debate take place in full public view in a structured, permanent manner.
The internet already automatically fights back against misinformation. Try writing an article which holds an opinion on anything. If that article has any significant number of viewers, then you can pretty much guarantee someone will take the time to tell you you are wrong. Hell, I’m here telling Google they are wrong right now! And I hope I can reach enough people with this article that someone will take the time to tell me how wrong I am too.
This all exists, already, all over the web. The problem is just that: it is ‘all over the web’. There is no structure to it. No organisation. No relationship map. No way of finding the corrections from the content they correct.
Until three years ago.
Three years ago, rbutr launched a beta version of its browser plugin. It is a prototype of a concept which could actually end the impact of misinformation online. It does this by providing access to the best counter-arguments, corrections, and critical analyses known for a given webpage.
This concept, unlike the approach Google has taken, does not rely on hiding misinformation. Instead, misinformation acts as a launchpad into an idealised debate.
“John Stuart Mill argued that silencing an opinion is ‘a peculiar evil.’ If the opinion is right, we are robbed of the ‘opportunity of exchanging error for truth’; and if it’s wrong, we are deprived of a deeper understanding of the truth in its ‘collision with error.’ If we know only our own side of the argument, we hardly know even that: it becomes stale, soon learned by rote, untested, a pallid and lifeless truth.” - Carl Sagan (1934–1996)
The system behind rbutr does not attempt to know what is right or wrong, true or false. Instead, it allows everyone and anyone to participate in a global debate, where the best counter-arguments are presented against any claim, and it is up to the reader to follow the debate and decide what position they ultimately take.
The method to achieve this is incredibly simple. It records connections between rebuttal/correction/critical webpages and the webpages they are disputing. Once a connection between two pages is made, that connection can then be accessed by a browser extension which tells you about the dispute when you view the disputed content (example).
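To make the mechanism concrete, here is a minimal sketch of that kind of connection store. This is not rbutr's actual code or API; the function names, the URL normalisation rules, and the in-memory store are all hypothetical simplifications of the idea described above: record links from disputed pages to the pages that rebut them, and look those links up whenever a page is viewed.

```python
# Hypothetical sketch of a rebuttal connection store (not rbutr's real code).
# Disputed page URL -> list of rebuttal/correction URLs.

from urllib.parse import urlparse

def normalise(url: str) -> str:
    """Reduce a URL to a canonical form so trivial variants still match."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")  # drop scheme and 'www.'
    path = parsed.path.rstrip("/")                     # ignore trailing slash
    return f"{host}{path}"

# The connection map at the heart of the concept.
rebuttals: dict[str, list[str]] = {}

def add_rebuttal(disputed_url: str, rebuttal_url: str) -> None:
    """Record that rebuttal_url disputes disputed_url."""
    rebuttals.setdefault(normalise(disputed_url), []).append(rebuttal_url)

def lookup(url: str) -> list[str]:
    """What a browser extension would call when you visit a page."""
    return rebuttals.get(normalise(url), [])

# Register a correction, then retrieve it via a slightly different URL form.
add_rebuttal("http://www.example.com/hoax/", "https://example.org/debunk")
print(lookup("https://example.com/hoax"))  # -> ['https://example.org/debunk']
```

The only real design decision here is URL normalisation: the same article can be reached through many URL variants, so some canonicalisation is needed before the connection can be found again.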
With widespread support, this central database of mapped connections could be easily standardised and accessed by a whole range of third parties. Google could indicate that rebuttals exist to content listed in their search results. Facebook and Twitter could indicate when a tweet or shared link has been disputed. Chrome, Firefox, Safari and IE could include alerts about disputed content natively in the browser.
It is worth noting that none of those alerts would mean that the disputed content is wrong — maybe the critique is wrong. But with widespread adoption of such a system, clearly false hoaxes (like the iPhone wave hoax) would be stopped dead in their tracks. And people investigating new concepts for the first time would have access to the full story, rather than potentially getting only one narrow perspective and a faulty, skewed view of the issue.
A generation growing up with constant exposure to critical analysis of complex ideas, and a constant reminder to doubt unsupported claims — sounds like the critical thinking heaven we have all been waiting for. And a world full of critical thinkers is the only way to stop misinformation.