Fake News: Teasing out the problems with all of the solutions offered so far…

Shane Greenup
Jul 10, 2017 · 12 min read

Summary: Editorialising (filtering) the platforms we use to find and discover content on the web (Google, Facebook, etc.) will end up dividing the world along ideological lines, and leave us with a population of passive, uncritical believers unable to evaluate the information they receive. This is the exact opposite of what everyone wants, yet we are pouring immense amounts of money and manpower into solving nearly impossible problems in order to get there. We have to re-think our approach.

Since the Workshop on Digital Misinformation back in April I have been trying to write about the fight against misinformation and Fake News online. I have been struggling to write it as clearly as I would like, so this article is a way to help move me along. Instead of a well-edited, concise article, I present to you a brain dump of my thoughts on this topic… Enjoy!

The Obvious Problem: Categorising true vs false, and reliable vs unreliable, is hard to do

First, it is pretty clear that the problems people are working on are hard to solve. Everyone is working on ways to identify “Fake News”, or to identify who is trustworthy or unreliable. These are difficult problems. It is not at all obvious what makes something definitively true or definitively false, or what makes someone reliable or not. Reliable, honest people and media companies make mistakes. Dishonest people tell the truth. Fake news taps into real beliefs. Real news can be presented in manipulative ways to tell an incorrect story. All media outlets have their biases.

Fully solving these issues is virtually impossible in my opinion, but that is what everyone is working on regardless. Maybe most aren’t trying to fully “solve” them, but are instead trying to identify at least the extremes of each issue, and conclusively flag, remove, or highlight those extreme cases. Which of course raises its own questions about where that line will be drawn, and how it will be enforced and maintained. History shows conclusively that these sorts of lines continually move to cover more and more circumstances: more and more information gets filtered, and the filtering is enforced more and more strictly.
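To make the line-drawing problem concrete, here is a minimal sketch (in Python) of the “flag only the extremes” approach described above. The scores and threshold are entirely hypothetical; real systems would derive scores from classifiers or fact-checker ratings. The point is that the hard part is not the code, it is choosing and defending the cutoff:

```python
# Hypothetical sketch: flag only the extreme cases of a reliability score.
# The scores and threshold here are invented for illustration.

FLAG_THRESHOLD = 0.1  # the "conclusively unreliable" cutoff; where should it sit?

def flag_extremes(articles, threshold=FLAG_THRESHOLD):
    """Return only the articles whose reliability score falls below the cutoff."""
    return [a for a in articles if a["reliability"] < threshold]

articles = [
    {"title": "Fabricated story", "reliability": 0.05},
    {"title": "Misleading headline", "reliability": 0.35},
    {"title": "Biased but factual report", "reliability": 0.6},
]

flagged = flag_extremes(articles)
# Only the clearest case is flagged, but every nudge to the threshold
# sweeps in (or spares) the borderline cases.
```

Notice that moving the threshold from 0.1 to 0.5 doubles what gets flagged: exactly the line-creep the paragraph above warns about.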

The Overlooked Problem: What happens if we actually succeed? Is it actually good?

Here is the argument no one else seems to be making: if we successfully implement systems on Internet platforms like Facebook and Google which accurately discern between true and false, I believe we will end up in a much worse situation than we currently have.

There are two main reasons for this.

The first is that it promotes intellectual laziness rather than critical thinking.

That is, if your source of information is reliably able to give you true information, then the need to be skeptical and questioning of the information you receive declines. People raised in an environment where all information is controlled in this way will be trained to be passive receivers of information rather than the critical thinkers we need.

The second is that it will create massive, global, ideological division rather than unity.

This has a few parts to it, so let me explain…

Facebook is not a Newspaper…

First of all, it helps to consider the difference between a platform (‘The Internet’ at one extreme, with Facebook, Google, and Reddit as perfect examples of what I mean by a platform here) and an editorial (the beliefs of an individual person at the other extreme, with newspapers, media channels and blogs as perfect examples of what I mean by an editorial).

A very rough visual approximation of what I am trying to describe. Platforms are on the left, Editorials on the right. All efforts to have Google, Facebook and Twitter start to flag misinformation moves them all further to the right, into the realm of editorials.

All sources of information are editorialised one way or another. Fox News has its spin. The Huffington Post has its spin. Individuals have their cognitive biases and personal experiences which colour the way they interpret things. Within this ecosystem of competing ideologies, belief systems and interpretations of facts, we, as individuals, pick and choose the editorialised sources we wish to receive our information from.

D.I.Y. Echo Chambers

We have always lived in echo chambers. The key difference is that in the past we were so firmly trapped in our echo chamber that we didn’t even get to see that other echo chambers existed. Now we have this global connectivity to every other mind and belief system on the planet, and it is thanks to things like Facebook and Google that for the first time ever, we can so easily ‘look over the fence’ out of our own echo chamber, and see what “they” are all doing over in their echo chamber.

Human history is one of isolated meme-pools, where conflicting ideas rarely had to compete with one another. The Internet has changed that, and now all meme-pools have merged into one global ocean of memetic competition.

So yes, we still have echo chambers, but now we construct them from the friends we choose and the blogs and newspapers we choose to read. We build our own echo chamber around us (of “True” information no less. Guaranteed. No one would ever build an echo chamber of lies. Who would do such a thing?) and then we use social media to whinge about all of those other people in their social media echo chambers.

The walls of our echo chambers have never been so low. Never been so fragile.

Things were probably much less confusing when most people lived in small villages and got 99% of their news by post. The further back in time you go, the easier it would have been to feel confident in your beliefs, backed up by everyone around you. Uncertainty, confusion, and frustration at people constantly disagreeing with you, saying or writing things which were “simply not true”, probably weren’t problems you ever had to deal with in the past.

But seriously think about this for a minute: How much did that long history of unity and agreement amongst the local population help us actually hold beliefs which were true? Even if an entire nation of people agree that something is true, it does not mean it is true.

Conflict between ideas is how we progress. It is how we, as a global population, become less wrong over time.

Divide and Stupefy

Facebook knows this. This is why Facebook has been so tentative with implementing any methods to deal with Fake news. They don’t want to lose market share.

The top comment on Mark Zuckerberg’s letter about Fake News.

Facebook is meant to be a platform: a universal, neutral platform for everyone to share their thoughts, beliefs, experiences and lives. As soon as they start controlling what information is shared in the newsfeed (even if that control isn’t absolute, like deleting content, but only ‘flagging’ it, or applying a negative pressure to its edge score so fewer people see it), they will be exerting an editorial pressure, and they will be one step closer to being another Fox News rather than a platform synonymous with the Internet itself.
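The “negative pressure on its edge score” mentioned above can be sketched as a simple ranking penalty. This is a hypothetical illustration only; the field names, penalty factor, and logic are invented, and Facebook’s real ranking system is far more complex and not public:

```python
# Hypothetical sketch of downranking flagged content in a feed.
# "edge_score" and the penalty factor are invented for illustration.

FLAG_PENALTY = 0.2  # flagged stories keep only 20% of their ranking score

def rank_feed(stories, penalty=FLAG_PENALTY):
    """Sort stories by score, multiplying flagged stories' scores by a penalty.

    Nothing is deleted; flagged items simply sink — which is still an
    editorial act, just a quieter one.
    """
    def effective_score(story):
        score = story["edge_score"]
        return score * penalty if story.get("flagged") else score
    return sorted(stories, key=effective_score, reverse=True)

feed = [
    {"title": "Viral fake story", "edge_score": 0.9, "flagged": True},
    {"title": "Friend's holiday photos", "edge_score": 0.5},
    {"title": "Reliable news item", "edge_score": 0.4},
]

ranked = rank_feed(feed)
# The flagged story's effective score drops from 0.9 to 0.18, so it falls
# to the bottom of the feed even though nothing was removed.
```

Even this toy version makes the point: the moment a penalty multiplier exists, someone is deciding what people see less of.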

It seems like it should be easy. “Fake News” is just a stupid thing which liars create to make money, right? Facebook should be able to just get rid of that (and nothing else) without offending anyone, right? No, not really. First, remember that fake news is incredibly popular! That is why we have this problem! People aren’t being made to share the stuff. They are choosing to share it. They love it. You have to remember that people believe CRAZY shit. And the second you start trying to remove anything, even if it is only the most crazy shit, you immediately start upsetting the people who believe that, or something similar to that. You effectively tell them: Your beliefs, and therefore you, are not welcome here.

And worse than that: try as hard as you want to keep your flagging/filtering system to only “Fake News”, and you will quickly find that there is no simple definition of fake news! There is no clear line separating fake news from manipulated headlines, and no clear line separating manipulated headlines from biased reporting. People accuse genuine, high-quality reporting of bias all the time, so we quickly reach a point where many people will view filtering of fake news as no different from censoring accurate reporting.

Don’t believe me? Just read the comments on every single article about efforts to fight fake news. You will see a train of people complaining that the solution doesn’t censor the real fake news outlets like “CNN” and “The Guardian” or whoever they dislike. You will also see comments that claim that these efforts are just there to silence “independent journalism”. These comments are always followed by some well intentioned individual attempting to explain why these beliefs are wrong, but it never helps. The people expressing these concerns, as wrong as they may be in their evaluations of true and false, reliable and unreliable, are still fundamentally right in their fear. Their beliefs are absolutely under threat. The things they believe to be true are exactly the sorts of things which Facebook and Google are being pressured to remove from their news feeds and search results.

Just one example from thousands of how people react when you announce that you are going to flag Fake news.

So what choice do those people have left, other than to find other “genuinely neutral platforms” to get their full, open-minded, and independent news from?

Sound crazy? Well, look at Wikipedia and Conservapedia. Exactly the same thing. Wikipedia wants to be seen as a neutral platform for facts — but the second you assert a fact, someone can disagree, and then you have division and competition over ‘facts’.

This website is not satire.

How to Reinforce Your Echo Chamber…

So what happens if we actually succeed at filtering our platforms? Is it actually good? To me the answer is clearly: no.

Success in those endeavours simply gives us another layer of editorialisation, one that forces populations into new (larger, reinforced) echo chambers where all of our news sources are fundamentally the same, where all of our extended social interactions are fed the same sources and the same social opinions, and where the fences we currently peer over on social media become giant walls separating entire communities.

If the tools we use to find information in the endless expanses of the Internet become editorialised, then all information discovery will necessarily be done through editorialised systems. How will we find and discover new information which hasn’t been editorialised for us? How will we know we aren’t being manipulated (even more than we already are) by how regulated our ability to access information is?

The especially ironic part of all of this is that the community which stays in the truth-focused, misinformation-free platform will be the group of people most harmed in the end. I know it is counter-intuitive, but stay with me here. The smart people, the ones who know that science is the best method we have for understanding the universe and that evidence matters, are exactly the ones who will be trapped in an echo chamber which removes the need for critical thinking.

I know, it sounds far-fetched. “I won’t stop thinking critically just because fake news has been removed from my news feed!” I can psychically sense you thinking. Sure, we won’t ‘stop’ thinking critically. We tend not to think critically already, so in that sense nothing will change.

The reality is that critical thinking and skepticism are things we have to actively practice. There must be an impetus pushing us to engage our critical faculties. We have to be reminded constantly to choose to think critically. We have to be rigorously trained to make a habit of it, and then constantly reminded. There is nothing natural about doubting what is immediately in front of you.

And this world that we are working our way towards, with fake news removed and validated, trusted journalist services given priority, will never, ever do that. We are all unconsciously working towards a world where we never have to doubt what we see.

It isn’t us that I worry about. It is our children.

You Don’t Learn Critical Thinking In School

Everyone who says “we need to teach critical thinking!” is missing the point entirely. We have been teaching critical thinking as best we can for decades now. This is the best we have been able to do!

Saying “We need to teach critical thinking!” is about as helpful as saying “We need peace in the Middle East!” It is both obvious and achieves nothing. Provide a decent mechanism to ensure critical thinking is taught to the population at large, or stfu.

Well, here it is. This is how you teach critical thinking to the population at large: Provide them with an ever present environment where all ideas are constantly exposed to critical reflection. Make that the norm in their world, and they will incorporate that reality into their perception of the world.

If every idea is open to critique, then they will believe that critical reflection of all ideas is normal. If they regularly see critical analysis of ideas, then they will naturally learn processes and methods of critical analysis. They will become critical thinkers.

Want to achieve the opposite? Just build a world where all ideas they encounter are “True” and not challenging in any way. Train them to passively accept everything they read by only ever showing them how to critique information which exists outside of the filter — the untrue stuff — and reinforce the fact that this is a skill they don’t really need for day to day life.

That is the world we are currently working towards.

Solving Impossible Problems to Make the World Worse

I suppose I have failed to mention so far that discerning true from false is invaluable in the editorial process. Journalists absolutely need to get better at it, and we do want that work done within the editorial world. It is only in the platform world that this approach won’t work, and will make things worse.

The Internet is neutral and unfiltered, which is a big part of what makes it so revolutionary and valuable, but without methods of discovering the content within its endless bounds, it is useless. The tools which help us find and discover that information must also be neutral and unfiltered, otherwise the Internet isn’t.

OK, I’m going to publish this as is now, because I need to get back to trying to write this out much better than I have here. Hopefully this article has simply helped me get the ideas a little more straight in my head. I still think I need to write several smaller more focussed articles on specific issues within this larger story, but then I need to pull them all together in the end.

Hopefully some of you will stick with me as I work through it all, because this is the most important problem of our time. Nearly every problem can be traced back to people believing things which aren’t true. It is about time we solved this at its core.

I look forward to any comments and feedback on the ideas I have clumsily vomited into this document. Hopefully my next few articles will be much more concise, better organised and much more readable than this one.

Shane Greenup

Written by

Founder of rbutr and dedicated to solving the problem of misinformation. Father, entrepreneur, generalist, futurist, philosopher, scientist, traveller, etc.