We won’t solve the misinformation problem with systems which adjudicate on truth or on who to trust

Shane Greenup
15 min read · Jun 18, 2018


As you can see in the list of active efforts to solve the misinformation problem, nearly all of them attempt to address it by, in one way or another, adjudicating on truth or trust.

It is the obvious solution after all. There is too much misinformation out there, and we do need to figure out what is and isn’t true, in order to get rid of the untrue stuff. Or at least warn people that it is untrue. Or prioritise the true stuff.

Or maybe “truth” is too hard, so we need to figure out who is trustworthy? Can we compile a list of all of the people with solid, reliable reputations for accuracy and attention to detail? Or maybe distinguish between the websites which are renowned for telling the truth and the ones which are renowned for telling lies and manipulating facts to fool people? If we can just make a system or process or community structure which exposes these people, or organisations, or websites, or whatever, then we can let people know what is and isn’t trustworthy.

This is the obvious answer to the problem.

And the wrong one.

http://chainsawsuit.com/comic/2014/09/16/on-research/

Whether we are trying to identify what is true, what is definitely not true, who is reliable or who is definitely unreliable — all of these solutions have one central flaw in their execution: they reach a conclusion upon which they themselves can be judged.

Now obviously, reaching a conclusion is exactly the point, so why is that a flaw?

It is a flaw because if you are trying to solve the problem of misinformation, then you need to provide a solution which is accessible to everyone, across the board, regardless of their beliefs. And when a system reaches conclusions that disagree with the beliefs of the individuals using it, no matter how much tact is used in delivering those conclusions, those individuals will judge the reliability of the system itself as poor rather than re-evaluate their own beliefs.

The more fringe an individual’s beliefs are, the more quickly they will likely find the system to be ‘worthless’ or ‘propaganda’, but don’t think for a second that this is only an issue for the fringe dwellers, because false belief is a mainstream problem! Need I remind you that Donald Trump was actually voted in? Do I need to pull up stats about how many American citizens believe the Earth is less than 10,000 years old? Or how drastically public opinion tends to deviate from the opinion of scientific experts in their fields? Do I need to point out that at most only one of the religions on the planet might be correct, and thus everyone who doesn’t believe in the correct world view is holding onto a false belief?

Holding false beliefs is our natural state, and when systems designed to correct misinformation repeatedly tell us that our beliefs are in fact wrong, or that the people we trust are not trustworthy, then we are far more likely to simply dismiss the system itself as a failure rather than change our beliefs. (Do I need to expand on this point? I think everyone is pretty up to date these days with the long list of cognitive biases and how central confirmation bias is to how we deal with the world, right?)

Changing beliefs is hard. It takes time, work, and a desire to even consider changing them, and no amount of being told we are wrong by an impersonal system, organisation, or community is going to make it any more likely.

But people want a quick answer!

The argument I always hear is that most people don’t have time to figure everything out for themselves. They barely read the original article, much less the fact-check articles, and they are even less likely to do the legwork and fundamental research themselves.

This is absolutely true of course, and people do want the quick and easy answer most of the time, but that doesn’t mean that giving it to them is the right move. Most people also don’t like regular exercise or eating healthy food, but solving the obesity epidemic is still going to involve more exercise and healthier eating. Some things can’t be solved with quick, easy fixes, and pretending the quick, easy fix is a solution is counter-productive.

Of course, the fact that many people barely read past the headline, and that very few people investigate content at all, definitely needs to be kept in mind when creating a solution. But to use intellectual laziness and/or time-poverty as a reason for giving people a quick and easy answer, one which fails to solve the problem and may actually make it worse, is simply dangerous.

What is “Solving the problem” exactly anyway?

Before I continue, it seems worth quickly clarifying what I think the real problem is that we are trying to solve here.

Misinformation isn’t an information problem, it is a false belief problem.

The problem we are trying to solve is that people believe things which are not true. And they share those beliefs with other people, and attempt to persuade them into agreeing with their untrue beliefs. This is where misinformation comes from. Yes, misinformation causes false beliefs, but not before false beliefs cause misinformation (with a few rare exceptions).

To solve the problem of misinformation, we either need to end all false beliefs (ie: change everyone’s minds to the ‘truth’ (or to be ‘less wrong’ at least)), or stop people from beginning to believe them in the first place (ie: improve skepticism and critical thinking skills).

In either case, the solution lies in working on the people, not on the information.

“It isn’t about changing minds…”

When I make the above point that changing minds is hard, I always get pushback that these systems aren’t necessarily about changing the minds of true believers, but about stopping people from falling for obvious misinformation, or helping people get the right information to begin with.

I agree completely with this sentiment and take the same approach with rbutr and the Socratic Web. The difference, though, is that if you are adjudicating on any issue, then the people who have made up their minds on that issue are judging your results. Sure, you’re not trying to change the mind of a climate change denier when you indicate that news on the dangers of climate change is real and accurate, but you are trying to keep that person as an engaged user. You do want that person to respect your conclusions and listen to them in other arenas where they are less certain. But you just insulted them and their beliefs, and now they trust you less. Do it a few more times, and they might well write off your entire system.

What good is a perfectly executed system which no one trusts?

“Let the fringe idiots exclude themselves…”

I’ve heard this response too many times to ignore it. The idea that this is just a problem with a few people who can’t be helped, and who should be pushed to the side so the rest of us can get on with our rational, well-informed existences.

No.

Let’s get something straight — there is no such thing as “fringe idiots” versus “everyone else who is reasonable and open-minded.” We’re all ‘true believers’ about the things we believe. And since we’re all mostly wrong about most things (see politics and religion for a start; consider several thousand years of philosophy and gain an understanding of cognitive biases as a second step; then realise that there are an infinite number of ways to be wrong about every single idea, and only one way to be right), most people will repeatedly encounter conclusions which clash with their beliefs.

This will happen less frequently to some people, and more to others. And the people it happens to less frequently might even relish the encounter, seeing it as a challenge to find out what reasons are given for their belief being so contradicted! But those people are a tiny minority of the population and, in fact, generally the wrong audience to be working with, because they are already the people who clearly seek out contrary evidence to correct false beliefs. Meanwhile, all the people who do resent being contradicted, and do hold a lot of false beliefs… they’re the ones who are going to quickly and ruthlessly dismiss your system and proceed to reinforce the new belief that ‘your system sucks.’

Now how useful is your system for helping those people? Those normal, everyday, non-fringe people, when they encounter neutral information which they are not emotionally invested in? Depending on how thoroughly they have ingrained the idea that your system sucks, it could actually drive them to double down on the misinformation. I know it sounds stupid, but… seriously… if you don’t believe what I am saying right now is realistic, please just spend some more time arguing with strangers on the Internet. If someone sees an article which they were otherwise neutral on, and sees that your system (which they think “sux”) says that the article is unreliable or untrue, there is a real chance that they will see the article as more reliable, rather than less.

Your shorthand solution to help the world quickly sort true and false, reliable and unreliable, all within the bounds of the time-poor, over-informed, attention-saturated world, is now backfiring on the people who most need to improve how they consume information, and not really doing a lot to help the people who already have good information consumption habits.

Taking a position is the problem

Any solution to misinformation which reaches a conclusion, formulates a ruling or otherwise adjudicates on any issue which people hold varying beliefs about will invariably upset people and put them offside. The more conclusions reached by a system, and the more people who interact with the system (increasing the opportunity for personal beliefs to clash with system conclusions), the more likely the users are to become disillusioned with the effectiveness of the system.

Any decent solution to the misinformation problem — one which remains accessible to all people, all of the time, acting to address the problem — must be one which doesn’t disillusion an ever-growing number of its users, pushing them away from its applied solution and undermining its own purpose.

I believe that the only way to avoid this problem is to build systems which never, ever take a position on any issue. They must remain neutral on all issues of truth and trust.

(PS: outsourcing the conclusions to third parties, or presenting conclusions in clever, deferential language, isn’t enough. The system itself must not form conclusions or ratings or evaluations at all.)

“But our system doesn’t tell people what to think…”

Maybe not overtly, but by reaching any conclusion, on any aspect of quality or trust, or even categorisation of information, your system is communicating a preference for a belief which may be at odds with what the user thinks.

You may not be telling people what to think, but you are trying to act as a proxy for what people should believe, trust, or… well, think. Users are trying to figure out whether your system is a reliable proxy for their own judgement or not, and every instance they encounter which shows them that you are not (by contradicting their existing beliefs) is a clear signal to them not to trust your system as that proxy. Undermining the very purpose of your system…

And the tricky bit is, this applies to all judgements, not just the final judgement on the article or author at hand. That is, if your system creates a list of signals for the user to make up their own mind, how you determine those signals, and what you say about them, is also expressing a series of judgements that the user may see as weak or unreliable. If you rely on Wikipedia as an indicator, you will put off all the people who think Wikipedia is unreliable. If you use large media companies as reliable sources, then you will put off people who distrust mainstream media. If you throw up a lot of negative signals on an article which strongly appeals to the user, then that could put them off.

That said, the softer the approach, the more resistant the system is likely to be to public judgement. However, the point remains the same: any judgement is going to be judged in turn. Even the small ones.
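As a rough illustration of how even “neutral signals” embed judgements, here is a small hypothetical sketch in Python. The signal names, reference sources, and article fields are all invented for illustration, not taken from any real system; the point is simply that each line of the configuration is itself an editorial decision that some users will reject.

```python
from typing import Callable, Dict

# Hypothetical signal extractors for an article record. Every entry embeds a
# judgement: choosing Wikipedia, or "major outlets", as the reference point is
# itself a position that some users will reject outright.
SIGNALS: Dict[str, Callable[[dict], str]] = {
    # All field names below are invented for illustration.
    "cited_by_wikipedia": lambda a: "yes" if a.get("wikipedia_citations", 0) > 0 else "no",
    "mainstream_corroboration": lambda a: f"{a.get('matching_major_outlets', 0)} major outlets report the same claim",
    "author_track_record": lambda a: f"{a.get('prior_corrections', 0)} prior corrections on record",
}


def signal_panel(article: dict) -> Dict[str, str]:
    """Build the 'make up your own mind' panel shown alongside an article."""
    return {name: extract(article) for name, extract in SIGNALS.items()}


# A reader who distrusts Wikipedia or 'major outlets' will read this panel as
# the system taking sides, not as neutral information.
print(signal_panel({"wikipedia_citations": 3, "matching_major_outlets": 0, "prior_corrections": 2}))
```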

“Nothing is perfect. Stop nitpicking.”

The reason I am focusing on details which may seem to be minor annoyances to the average user, and therefore largely inconsequential to the overall success of the system, is that this fundamental belief is the problem.

This isn’t a minor annoyance to the average user; this is their core experience and the final determinant of whether they will want to use your system or not. And because my focus (at large, in this article, and in the list I published in the previous article) is on systemic, scalable solutions to “the problem”, anything which makes people unwilling to engage meaningfully with the system basically excludes it from being a solution.

Every attempt to solve this problem which resigns itself to the fact that “we have to decide something somewhere…” inevitably turns itself into another media company which people will accept or reject based on how well its conclusions align with their beliefs. It might be the best source of information ever created… but that doesn’t mean you won’t have half of the global population mocking it, calling it biased, and choosing to exist in a competing information ecosystem which uses different metrics and systems for reaching a competing set of conclusions. Just like we already have in the current media environment. ie: not a solution.

Google and Facebook were able to gain their monopolies because their systems let everyone in, and adjudicated nothing. Their methods for deciding what content you encountered were fundamentally based on popularity, not truth or accuracy or reliability, and those methods were generally neutral in the way they interacted with what people believe.

Any solution to the problem of misinformation must equally be neutral to what people believe, and must be at least as approachable as Google and Facebook have been to the general public.

The simple thought experiment is: if Facebook, Google, Twitter, et al implemented your solution for misinformation tomorrow, in a way which no one could opt out of, would they lose users?

And I am telling you with absolute confidence, that if your system makes judgements about facts, credibility, biases, or reliability of sources, then the answer will be yes.

Evidence

OK, I know this is all, like, just my opinion… so let’s look at the evidence we already have.

For a start, Snopes has this fantastic page about the biases they have been accused of having:

Any system which attempts to give factually objective answers will invariably upset just about everyone, of every belief.

Please, read as many of the comments in their list as you can stomach. It will really help you understand the point of this article.

By attempting to be an objective source of factual information, Snopes has inevitably upset people from many different political, religious, and ideological perspectives, and, as you can see from the published comments, made a lot of people decide that Snopes is unreliable.

Does this mean Snopes is unreliable? Of course not. Does this mean Snopes is useless? Of course not. What it does mean is that there is no possible way that Snopes can be ‘the solution’ to the problem of misinformation, because too many people don’t trust it. Not that I am implying that Snopes is trying to be that; they aren’t. But that is what we are all doing with our plugins, AIs and blockchain solutions, right? So why would our efforts to adjudicate on the accuracy and reliability of news give us any different results?

Oh, because your system doesn’t rely on biased individuals, but uses the crowd to decide things? If everyone is deciding, then no one can be held responsible and judged?

Tell that to Wikipedia.

Wikipedia is built on one of the most beautifully simple and transparent systems there is, which allows ‘the crowd’ to work together to establish ‘truth’ in a way which is evidence-based and which links to references. It is a fantastic model, and it works really well. There are limitations and flaws, no doubt, but show me a system which doesn’t have them. Surely the world over loves and defers to Wikipedia for adjudication on truth!?!?!

“Wikipedia is …financially supported by grants from left-leaning foundations [and] …most of Wikipedia’s articles can be edited publicly by both registered and anonymous editors, mostly consisting of teenagers and the unemployed. As such, it tends to project a liberal — and, in some cases, even socialist, Communist, and Nazi-sympathising — worldview, which is totally at odds with conservative reality and rationality.” — How can you argue with that?

Conservapedia doesn’t seem to take a favourable view of Wikipedia, and I don’t think it is just because they are competitors! You can see in their description of Wikipedia why it is that they hate it — it has atheists and philosophers running it who have a ‘liberal bias’ and believe in things like Evolution!

I think many people ignore this example because it is easy to mentally categorise Conservapedia as fringe, as not representative of normalcy, and to assume that most people don’t think that way. Sorry, but this is normal. This is how people react to disconfirming evidence. All you need to do to prove this is to go and pick a fight with someone on social media about climate change, or GMOs, or vaccination. Then show them evidence from a decent, respectable source, and I bet that their reaction, more than 50% of the time, will be to dismiss the source itself as biased.

I have seen this play out hundreds of times in my own personal arguments online. Unfortunately, because these interactions have taken place on Twitter, in Facebook comments, and on Reddit over the course of years, I can’t easily find and screenshot them to show you examples. However, you can easily browse Reddit for how people respond to resources which fact check for them. For example, this post:

I’ll keep an eye out for more examples of this over the coming weeks and perhaps add more as I find them. It shouldn’t take long…

One more time for those at the back

OK, I’m basically done now, but I need to restate it one more time, because my point here is crucial. The argument I am making is utterly central to the problem of misinformation: people can pick and choose their information sources so liberally now that any belief can be maintained, and any source of information can be easily ignored. If your solution to the misinformation problem essentially becomes one more source of claims/beliefs/information/positions, then you haven’t solved the problem; you are contributing to it. You are just one more “biased source” to be ignored by people who disagree.

I’m not raising some minor point which will be dealt with. I’m raising a fundamental philosophical failure of virtually every attempt to address “the problem” of misinformation. Unfortunately, so far, my attempts to raise this point have been met with a mixture of silence and quiet agreement, before everyone resumes work on the very thing we just established literally can’t solve the problem.

So, to try once again to put it as simply as possible:

Any attempt to build a solution to “the problem” of misinformation is necessarily attempting to address an issue affecting the global population. A solution for everyone. And in order to work for everyone, everyone must be able to comfortably engage with that system in a trusting way which doesn’t create cognitive dissonance. They have to be able to use the system without simultaneously thinking “I don’t trust anything this system says”. Or at least, their lack of trust in the system must not interfere with its effectiveness. So any system which requires people to trust its conclusions/judgements/categorisations/etc cannot risk creating distrust.

Unfortunately, by making judgements on truth, trust, bias, etc, you will be judged for it. There is no avoiding it. And when you are judged for it, just as the comments about Snopes above show, those people will throw out the baby with the bathwater and completely discount everything you have to say from then on. Your entire platform becomes ‘a propaganda machine’ of no value to that person.

The only option to avoid this, is to create a system which doesn’t make any judgement of any sort on the factuality, reliability, trustworthiness or accuracy of the content users engage with.

It is perhaps worth noting that many people don’t trust Google or Facebook — but that lack of trust doesn’t generally translate into a lack of trust in the content they get through those platforms. Google, Facebook et al present content from all ideologies, political persuasions, and religious beliefs equally and without judgement. So even though Facebook might be a “lefty liberal commie website”, you still know that you can go on Facebook and get all of your alt-right and nazi-sympathising content reliably and without overt judgement from the platform itself.

And for all of the people pointing out that Google and Facebook do make judgements in their algorithms, and that it is impossible not to, the difference is what is being judged. They are both judging, at their core, popularity. And that works. At least, it works for keeping most people happy. It doesn’t work so well at getting to the truth or the facts. So as long as your system similarly avoids making any sort of judgement on the content itself, but instead evaluates how popular that content is and how engaged people are with it — then great! Bring that on. Figure out how to use that to help people challenge their beliefs and reconsider evidence and arguments without feeling judged by the system, and you’re onto a winner!
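To make that structural difference concrete, here is a minimal sketch in Python. It is not how Facebook or Google actually rank content; the weights, field names, and the accuracy_verdicts input are all invented for illustration. The point is that the first function only measures behaviour around the content, while the second folds in a verdict about the content itself, and it is that verdict which users will judge the system by.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ContentItem:
    url: str
    clicks: int
    shares: int
    comments: int


def engagement(item: ContentItem) -> float:
    # Arbitrary illustrative weights; a real feed would use far more signals.
    return item.clicks + 2 * item.shares + 3 * item.comments


def rank_by_engagement(items: List[ContentItem]) -> List[ContentItem]:
    """Belief-neutral ranking: scores only how people interact with content.

    The system never expresses an opinion about whether the content is true,
    so there is no verdict for a user to disagree with."""
    return sorted(items, key=engagement, reverse=True)


def rank_by_adjudication(items: List[ContentItem],
                         accuracy_verdicts: Dict[str, float]) -> List[ContentItem]:
    """Adjudicating ranking: demotes content the system has judged unreliable.

    `accuracy_verdicts` maps url -> a score in [0, 1] from some (hypothetical)
    fact-checking process. Every entry is a position the system has taken, and
    therefore a point on which users can, and will, judge the system itself."""
    return sorted(items,
                  key=lambda i: engagement(i) * accuracy_verdicts.get(i.url, 0.5),
                  reverse=True)
```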

For now though, I think we need fewer attempts to solve truth and trust online, and more people working to build something like the Socratic Web and provide solutions which don’t judge content, but provide frameworks and scaffolding of skepticism and critical thinking skills around all content equally. That way we might start to create a global cultural change in how we consume information, and form our beliefs, and ultimately, address the core problem behind the misinformation problem — us and what we believe.
