Redefining News for the Digital Format

The 2016 US election cycle exposed, in glaring light, the real consequences of our failure to find a way for the average reader to reasonably distinguish between news, opinion, and blatant misinformation intended to keep people from educating themselves with the facts needed to form their own decisions. Here are two well-written articles on Medium which highlight some of the problem.

All sides of this election cycle were guilty, to some extent, of capitalizing on or hiding the truth, or at least of dressing opinion up to look like genuine, objective news rather than mere conjecture. But one side clearly embraced and mastered this technique far better than the other, and proceeded to pump such content in sheer volume into our social media feeds. It was the Big Lie: repeated often enough, and from what feel like different sources, until many (or enough) can no longer distinguish fact from fiction.

How could anyone reasonably imagine fact-checking the volume of information crossing our news feeds today? Or ever trust any single organization to do unbiased fact-checking for them? Yes, there were some laudable, conscientious, unbiased attempts at this by reputable sources, but just as many biased ones appeared and sowed even more confusion. Left unchecked, the problem will only grow as all sides master this dark art of misdirection and misinformation, intent on altering our perception of the truth.

What today’s reader needs is a means of quickly evaluating news. Does it come from a credible source for which there is an actual consequence, a cost, of being wrong? Is it actually credible news, or mere conjecture and opinion? What weight do the people I trust put on this source of information? So why can’t news organizations be issued what amounts to a FICO score, the same way people are rated on their credit risk? It could be governed in a way that is equal parts peer reviewed and, possibly, publicly reviewed, where both the industry and readers collectively provide input on the credibility of material presented to us as news.

To even get a score, an outlet would first have to prove it was a legitimate news agency and not merely a think tank posing as one, or worse, a government or political party. Its score would then be driven by its actions: the more often it got the story right, the higher the score, while disinformation or non-fact-based opinion would lower it. Any agency or entity wishing to release information as news to any form of social media would have to attach its score, and if that didn’t match or exceed the score posted in a central, peer-reviewed, FICO-like online database, the item could be bounced, or passed along without any score attached, for us to view at our own risk. Certainly a reputable agency with an earned high score could still pass off any one piece of information incorrectly or misleadingly, but only at a cost: it would later impact their score when peer and publicly reviewed.
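The scoring mechanic described above can be sketched in a few lines. This is only a toy illustration of the idea that accuracy raises a score and getting it wrong costs more than getting it right earns; the function name, the 0–100 scale, and the weights are all invented for the example, not part of any real system.

```python
# Hypothetical sketch: a source's credibility score rises with verified
# stories and falls harder with debunked ones, so the cost of getting
# it wrong outweighs the reward for getting it right.

def update_score(score, verified, reward=1.0, penalty=3.0):
    """Nudge a 0-100 credibility score after one peer-reviewed fact check."""
    delta = reward if verified else -penalty
    # Clamp to 0-100 so the score behaves like a bounded rating.
    return max(0.0, min(100.0, score + delta))

score = 80.0
for outcome in [True, True, False, True]:  # three verified stories, one debunked
    score = update_score(score, outcome)
print(score)  # 80 + 1 + 1 - 3 + 1 = 80.0
```

The asymmetric weights are the point: a source can only hold a high score by being right far more often than it is wrong.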

Imagine opening an article and knowing at a glance whether it came from an accredited news source, familiar to you or not, and, more importantly, how much weight you could put on the reliability of the information it contained. Better yet, imagine setting thresholds in your social media app to screen the information that even reached you, or at least to flag anything not from a rated news source, so that information floated or manufactured by PACs, special interest groups, or even governments couldn’t be slipped to you as factual news.
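The reader-side filtering could look something like this. Again a toy sketch: the threshold, field names, and sample feed are all made up to show how an app might label or screen items by the score attached to their source.

```python
# Hypothetical sketch: a reader-configurable threshold labels feed items
# by the credibility score attached to their source. Unscored items are
# not hidden, just flagged so the reader proceeds at their own risk.

THRESHOLD = 70  # the reader's own cutoff, adjustable in the app

def label_item(item, threshold=THRESHOLD):
    """Tag a feed item as rated news, below threshold, or unrated."""
    score = item.get("score")
    if score is None:
        return "unrated, read at your own risk"
    if score >= threshold:
        return f"rated news ({score})"
    return f"below your threshold ({score})"

feed = [
    {"source": "Example Wire", "score": 88},
    {"source": "Partisan Posts", "score": 35},
    {"source": "Unknown Blog"},  # no score attached
]
for item in feed:
    print(item["source"], "->", label_item(item))
```

Note the design choice: the unrated item is labeled rather than blocked, which keeps the tool on the side of informing the reader instead of censoring the feed.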

Make no mistake: any system can be manipulated, and no one gets to own the truth. But those that work hardest to earn our continued trust, just as with any FICO score, should in fact be rewarded for getting it right, and those that don’t should be punished, because it does and should matter. This isn’t about censorship, as in excluding content, nor does it make any one piece of information true; it speaks solely and simply to the credibility of whoever is claiming this to be news and not merely entertainment or opinion.

Ultimately we each hold an individual duty and responsibility to judge for ourselves whether something presented to us reasonably represents the truth, and whether it should be believed, but a tool or model that at least aids us in evaluating the source has both value and merit.

The details of how such a system could be fairly implemented, regulated, and maintained I leave to minds far brighter and more gifted than mine; they are beyond the scope of what I am trying to suggest. I simply feel the cost of not attempting to solve this problem has serious and real implications. We need to set the bar high, to create even basic barriers to entry that curtail the proliferation of sources, but above all to establish a real cost for getting it wrong.

The resources are out there, both in the talent required and in the capital, to make a serious dent in this issue before the next time it really matters. If we can’t learn that there should be rules, and real consequences for breaking them, we will be forever condemned to an ever-increasing flood of misinformation, a near denial-of-service attack on our quest for the truth. So I would challenge others: how might such a system be built to better help us navigate between fact and fiction?

This is a start, but doesn’t it just hand Google and Facebook even greater power to decide what we see, rather than empowering us with a tool by which to better judge what we see?

Google and Facebook both have a role to play in the solution, but they knew this problem existed and failed to address it before it really mattered, and allowing them to simply censor content, rather than merely labelling it for us to decide for ourselves, seems almost scarier than the problem itself.

The debate over the damage is being waged and brought to light, but the solutions discussed thus far don’t empower the masses with the knowledge to distinguish between fact and fiction; rather, they assume that others, or worse, algorithms, can make better judgments on our behalf. That, to me, sounds like censorship by any other name, and will only condemn us to believe what we are told. No good comes from this path.

Update: The dialog is getting better as more and more people become educated about the risks, and the real consequences, of losing credible news (or at least of losing it in a sea of misrepresentation). A solution must be found, and a consequence, a cost of getting it wrong, needs to be incorporated into that solution.

Update: The story continues to gain traction and genuine dialog, but censorship also continues to rear its ugly head; nobody should be given the power to determine what we see. The solution should not be to shroud ourselves in darkness, but rather to enlighten us with the means to better see and determine the truth for ourselves.