Fighting Fake News with the Free Market

Everyone knows that ‘fake news’ is a problem, but ask them to point to it, and everyone points in a different direction. The right says CNN is ‘fake news’; the left says Fox News is ‘fake news’. The truth is that both are well-established, legitimate news sources, and many of the stories they cover describe the same facts and events. It’s almost as if ‘fake news’ has become less a description and more a pejorative, as if it doesn’t really describe anything at all. Yet everyone still sees it, everywhere they look.

The real ‘fake news’ charge isn’t about the facts. It’s about narrative, conclusion, assumption, and the framing of the story. Run the headline “Two Men Rob Bank in Broad Daylight” and no one calls it fake news. Tell the same story under the headline “Daylight Bank Robbery Proves Brexit a Disaster” and many people will. Many, but not everyone. And since no two people see the world exactly the same way, no one can agree on what exactly is or isn’t ‘fake news’.

So how can the social media platforms fight a problem when no one can agree on what the problem actually is?

The answer is that they can’t. Twitter, Google, and Facebook have all taken the issue very seriously, but their attempts to address it have been ham-fisted at best and downright discriminatory at worst, and all three companies have suffered serious criticism and brand damage as a result. They are lambasted in the press daily, both Google and Twitter are fighting high-profile lawsuits to defend their internal practices, and there is much talk in Washington and elsewhere of ‘breaking up big tech’ and regulating the platforms as public utilities. But it wasn’t always this way.

Initially the social media platforms described themselves as ‘fair and open platforms for user-contributed media’. That’s their business model and what they’re designed to do. But is there anyone in America who believes that’s how they are viewed now? For the social media providers, ‘fake news’ has been nothing but a PR disaster. The simple fact is, they can either be ‘fair and open’ as their business models demand, or they can be the world’s online ‘speech policeman’, but they clearly cannot be both. So how do we solve the ‘fake news’ problem?

Fact checkers have tried to address it, but no sooner did they appear on the scene than their work was co-opted by political activists as another means of advancing an agenda. These days even the fact checkers can’t agree on what’s fake and what isn’t, and they have become as big a part of the problem as the news sources themselves. And since the issue was never really about the facts anyway, none of them ever addressed it as it needed to be addressed.

But the ‘fake news’ problem has a simple and straightforward solution. And we can achieve it by turning the ‘marketplace of ideas’ into an actual marketplace.

Our company, Clairety, is designed to do this very thing. We are an AI-driven rating agency that rates social media content for credibility. We read all the public content on all the social media platforms and integrate our ratings into social media with a browser module that inserts our content right into theirs. When you go to Twitter, Facebook, or the comment sections, you’ll see right away who is speaking credibly and who isn’t, and you can decide for yourself how ‘fake’ is fake to you by using our system to filter away as many of the low scorers as you prefer.
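To make that concrete, here is a minimal sketch of how such a browser module could overlay scores on a page. It is an illustration only, not our actual implementation: the API endpoint, the [data-author] selector, and the 0–100 scale are all invented for the example.

```typescript
// Hypothetical content-script sketch: annotate each post with a credibility badge.
// The endpoint, selector, and score scale are illustrative assumptions.

type Score = { handle: string; credibility: number }; // assumed 0-100 scale

async function fetchScore(handle: string): Promise<Score> {
  // Invented REST endpoint standing in for a real rating service.
  const res = await fetch(`https://api.example.com/score/${encodeURIComponent(handle)}`);
  return res.json();
}

async function annotatePosts(): Promise<void> {
  // "[data-author]" is a stand-in for however the host page marks authorship.
  const posts = document.querySelectorAll<HTMLElement>("[data-author]");
  for (const post of posts) {
    const { credibility } = await fetchScore(post.dataset.author!);
    const badge = document.createElement("span");
    badge.textContent = `[credibility: ${credibility}] `;
    badge.style.fontWeight = "bold";
    post.prepend(badge); // insert the rating directly into the platform's content
  }
}

void annotatePosts();
```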

What do we mean by credible?

Some information is more valuable to the reader than other information. Facts are worth more than opinions; well-supported opinions are worth more than unsupported ones; and our AI can tell them all apart. It can tell how much of the content is assumption or presupposition and how much is fact. It can tell how far the truth is being stretched and how much of the content is simply unsupported statement or bias.

It’s obviously much more complicated than this simplified description makes it sound, but in the end our AI produces a statistic which, broadly stated, measures how far the facts are being stretched in one direction or the other. It can tell how hard the author is trying to convince you of something that the facts on the ground may or may not support. In short, it forms its credibility assessments in exactly the same way that we all do with each other in face-to-face dealings, every single day: the harder the facts are stretched, the less credible the stretcher seems.
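The real model is far richer than anything we can show here, but a deliberately crude toy can illustrate the shape of the idea: count how often claims are attributed versus overreached, and turn the balance into a score. The keyword lists, weights, and scale below are invented for illustration and bear no relation to our actual model.

```typescript
// Toy proxy for "how far the facts are being stretched". Illustration only:
// the real system is an AI model, not a keyword counter.

const ATTRIBUTED = ["reportedly", "according to", "sources say", "data show"];
const STRETCHED = ["proves", "obviously", "everyone knows", "disaster"];

function credibilityProxy(text: string): number {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  let supported = 0;
  let overreach = 0;
  for (const s of sentences) {
    const lower = s.toLowerCase();
    if (ATTRIBUTED.some((w) => lower.includes(w))) supported++;
    if (STRETCHED.some((w) => lower.includes(w))) overreach++;
  }
  // More attribution and less overreach -> higher score, clamped to 0-100.
  const raw = (supported - overreach) / sentences.length;
  return Math.round(50 + 50 * Math.max(-1, Math.min(1, raw)));
}

console.log(credibilityProxy("Two men robbed a bank, according to police."));        // high
console.log(credibilityProxy("Daylight bank robbery proves Brexit is a disaster.")); // low
```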

But we go further than that, to give you information you’ll find useful and practical. The big error made by most people trying to solve ‘fake news’ is that they set themselves up as the arbiters of ‘true’ vs. ‘false’ and tell you to accept their expert judgement. Even the solutions offered by the social media platforms used this ‘top-down’ strategy. “Just trust us,” they say, “and we’ll tell you what the truth is.”

We don’t do that. We never set our own view above the views offered by contributors. What we do is generate a ‘relative credibility’ score, where the only thing a contributor’s content is ever compared to is the content of everyone else. Our views never enter into it in any way and are never considered by our AI. In effect, our AI has no views of its own and has no way to agree or disagree with anyone’s conclusions or narrative. All it cares about is how well supported your position is compared to everyone else’s. We don’t claim to know ‘the objective truth’. We claim that the truth can only be truly objective if it takes everyone’s view into account.
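One simple way to picture a relative score is a percentile rank: a contributor’s raw score means nothing on its own and only becomes a credibility score once it is placed in the distribution of everyone else’s. The sketch below (with made-up handles and raw numbers) shows that framing; it is not our scoring algorithm.

```typescript
// Sketch of 'relative credibility': a contributor is rated only by where their
// raw score sits in the population of all scores. Handles and numbers are made up.

function percentileRank(raw: number, population: number[]): number {
  const below = population.filter((x) => x < raw).length;
  return Math.round((100 * below) / population.length);
}

const rawScores = new Map<string, number>([
  ["@wireService", 0.82],
  ["@opinionBlog", 0.55],
  ["@botFarm001", 0.12],
]);

// Comparing against the full pool (self included) is a simplification.
const all = [...rawScores.values()];
for (const [handle, raw] of rawScores) {
  console.log(`${handle}: ${percentileRank(raw, all)}th percentile`);
}
```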

We have a patent-pending working prototype that’s been running for nearly a year, and we’ve seen some stunning results. For instance, we were surprised to discover that virtually all bots fell into the bottom 20% of our scoring, and they were ALL easily identified. We’ve seen that the NYTimes, the WSJ, Fox News, CNN, the BBC, and a large number of other mainstream news sources all scored very close to one another on a persistent basis, and that score was consistently higher than those of ‘opinion’ outlets like National Review or the Huffington Post. We’ve seen the views that most people would probably call ‘extreme’ score very low, be they on the left or on the right. And we’ve seen the highest scores come from ‘public information’ feeds like the Long Island Railroad, where about 90% of the tweets are about trains that are delayed or cancelled, and no one has ever called that ‘fake news’. In short, the results we’ve seen match the intuitive human concept of credibility very, very closely. And we’re improving our process every day.

But we don’t just assume you’ll agree with our assessment. Instead, we show all this information to our subscribers and put them in control. Our browser module comes with a filter that lets users remove from their social media any contributor who scores below a number that they choose. And when they decide where to set that filter, they are deciding for themselves what they believe and what they don’t. When our subscribers say “I don’t buy it” and filter someone from their viewed content, it means just that: they are literally no longer buying it.
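As a sketch of the mechanism (again with an invented selector and score table, not our real module), the filter can be as simple as hiding every post whose author falls below the user’s chosen cutoff:

```typescript
// Hypothetical filter sketch: posts by authors below the user's threshold are
// never rendered, so they earn no view and no click. Selector and scores invented.

function applyFilter(threshold: number, scores: Map<string, number>): void {
  const posts = document.querySelectorAll<HTMLElement>("[data-author]");
  for (const post of posts) {
    const score = scores.get(post.dataset.author!) ?? 0;
    post.style.display = score < threshold ? "none" : "";
  }
}

// A tolerant reader might set 20; a stricter one, 60.
applyFilter(40, new Map([["@botFarm001", 12], ["@wireService", 82]]));
```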

By choosing where to set their filter, users deny clicks and views to the lowest-quality content, and what’s left is the highest-quality, most responsibly reported news. That decision, where to filter low-scoring content, is what completes the ‘market feedback’ mechanism. It puts the user in control of deciding which news is fake and which is real, based on their own individual viewpoint. It rewards the best, most credible content providers with the most active online engagement, while denying engagement to the clickbait, the trolls, the harassers, the bots, and the kind of ‘extreme speech’ that everyone would rather avoid. We put the reader in direct control of who gets rewarded and who doesn’t. That’s how we make the marketplace of ideas into a real marketplace.

We aim to make the consumers of media the ones who control the direction of the narrative, not the government, some powerful minority, or a bunch of Russian bots. We can all agree on the facts. And by treating the marketplace of ideas as a real marketplace, we can all have our say in what the facts mean for each of us as well. We believe there are good ideas and bad ones across the whole political spectrum. The only question people need to answer to end ‘fake news’ is who is making the best of them, and who isn’t.

There is a lot more to it, of course, and it’s explained more fully on our website: http://clairetyAI.com. Please take a look. We’re planning to go to beta sometime after Q3 2018, and we’re eager to hear from you. We want you to be the one who helps decide which news is ‘fake news’.

Tom Costello — CEO — Clairety Inc.