Thoughts on the False News Problem

I asked myself: what would it take to create a “misinformation index” that ranks websites and people by how much false news they spread?

The fake news problem sounds easy: it’s provably made-up content. But how does a machine determine that? If it’s based on user feedback, you’d end up with Republican sources being flagged by Democrats and vice versa, Labour sources being flagged by Tories, and so on. Account for that bias and you might end up with a decent ‘fake news’ index. An automated Snopes, in a way.
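One crude way to account for that bias, purely as a sketch of my own (the post doesn’t specify a mechanism, and the site names and the left/right labels below are made up for illustration): only let a flag count toward a source’s score when users from more than one political side flag it, so one-sided partisan flagging contributes nothing.

```python
from collections import defaultdict

def bias_adjusted_score(flags):
    """flags: list of (source, flagger_leaning) pairs, where
    flagger_leaning is a coarse label like 'left' or 'right'.

    A source's score is the minimum of its per-side flag counts,
    so flags coming overwhelmingly from one side are discounted:
    a source flagged 100 times by one side and never by the other
    scores 0."""
    counts = defaultdict(lambda: defaultdict(int))
    for source, leaning in flags:
        counts[source][leaning] += 1
    return {
        src: min(by_side.values()) if len(by_side) > 1 else 0
        for src, by_side in counts.items()
    }

# Hypothetical feedback data: site-a is flagged by both sides,
# site-b only by one.
flags = [
    ("site-a.example", "left"), ("site-a.example", "right"),
    ("site-a.example", "right"), ("site-b.example", "left"),
    ("site-b.example", "left"),
]
print(bias_adjusted_score(flags))
# → {'site-a.example': 1, 'site-b.example': 0}
```

A real system would need far more than this (user reputation, topic weighting, manipulation resistance), but even this toy version shows the shape of the bias problem: the signal you want is cross-partisan agreement, not raw flag volume.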

But that’s just the tip of the iceberg. Fake news is only part of the problem. Even bigger problems are half-truths, speculation, ambiguous interpretations, unfalsifiable or unprovable claims, and even conspiracy theories.

Sometimes speculation ends up being true, sometimes “known facts” end up being only partly correct, and sometimes half-truths are more damaging than outright lies. Even humans are not good at grasping all these nuances, which makes me doubt that machine learning, even at its current advanced stage, can be good at it. There isn’t a good, unbiased corpus to begin with.

So I think we’ll have to approach this problem the hard way — through critical thinking.
