How to fix our news ecosystem
Step 1: Acquire more data! (attention metrics & credibility scores)
The news industry is hemorrhaging money and attention. Specifically, the credible part of the news industry is failing—the part that sometimes pays people to do actual reporting, and publishes with the intent to inform readers (the part that aims to outrage or mislead is doing fine).
Everyone who has been following the news industry knows this — traditional news organizations have been squeezed by massive layoffs. Important reporting often doesn’t get read unless it is also paired with a controversial or even misleading headline. So these traditional publishers are ignored, are supplanted by content farms, or just give in and start churning out less credible work. More recently, “fake news” and hyper-partisan sites have entered the picture as particularly pernicious forms of content farms, taking public discourse from bad to catastrophic.
But in all of this handwringing, we don’t even know the basics about what people actually read, or how credible that information is. How can we adequately consider solutions if we can’t even measure success or failure?
Imagine you work at Facebook and are experimenting with changes to the news feed. You know that currently it rewards sensationalist misinformation over sensible accuracy, so one of your goals is to reward quality and accuracy over clickbait and misinformation.
Sure, there are some things to try that might help a bit. But how do you know if those changes are actually having an impact?
Here are three key questions that we have to answer if we want to resuscitate our media ecosystem.
- How unhealthy is the news ecosystem really? (and how can we meaningfully measure that?)
- How quickly is this getting worse? (if at all?)
- What publishers, platforms, and other stakeholders are the worst actors? (and how can they be held accountable and incentivized to change?)
Without answers to these questions, we cannot seriously consider long-term, meaningful action to improve things. We can’t just trust our gut that “things are getting better” in the age of the filter bubble.
We need real metrics and real data. The tricky part is that this data may not yet exist.
The missing data
If we want to get the pulse of the media ecosystem — if we want to answer those three questions—we need to know how much of the content being consumed accurately informs the public. Breaking that down, we need data on two crucial things:
- What content people are consuming.
- The accuracy of that content.
By tracking those two things, we will know whether the health of the media ecosystem is taking a nosedive—or whether a particular intervention or experiment is working. This is a bare minimum. There is plenty more that it would be valuable to monitor, such as the direct impact of news on society, but the accuracy of what is consumed is a fundamental prerequisite.
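To make the combination of those two data streams concrete, here is a minimal sketch of what a single "ecosystem health" number could look like: the credibility of content, weighted by how much it is actually consumed. The data schema and the 0-to-1 credibility scores are illustrative assumptions, not an existing dataset or standard.

```python
# Hypothetical sketch: a consumption-weighted average credibility score.
# A viral low-credibility story should drag the number down far more
# than its share of published stories would suggest.

def ecosystem_health(stories):
    """Consumption-weighted average credibility, in [0, 1].

    `stories` is a list of (views, credibility) pairs, where credibility
    runs from 0 (misinformation) to 1 (accurate).
    """
    total_views = sum(views for views, _ in stories)
    if total_views == 0:
        return None  # no consumption data to weight by
    return sum(views * cred for views, cred in stories) / total_views

# Toy snapshot: one viral misleading story, two smaller credible ones.
snapshot = [
    (1_000_000, 0.2),  # viral misleading story
    (50_000, 0.9),     # well-reported piece
    (20_000, 0.8),
]
health = ecosystem_health(snapshot)
```

Tracked over time, a number like this is what would let us say "the ecosystem got worse this quarter" with evidence rather than a gut feeling.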
The first part isn’t too difficult: there is a decent amount of public or semi-public data on what content is being consumed (for example Facebook shares, YouTube subscriptions, and Alexa rankings; though this is much harder for closed platforms like Snapchat which don’t release data).
The harder part is determining the accuracy of content (or even some level of confidence about the likely accuracy of content). At this point, most people throw up their hands, declare that this is impossible to measure, and go back to complaining about how news is dying (sometimes adding that consumers need to be smarter). But that’s a cop-out. A deteriorating media ecosystem also deteriorates our civic institutions — our way of life. We need this data.
The status quo
The Observatory on Social Media at Indiana University relies on a list of flagged domains to figure out which websites are suspicious enough to track.
Clicking through to the “sources,” few if any have more than a sentence or two of justification. There is little to no methodology behind any of these. The “best” right now is opensources.co, which still has no evidence trail, no way to ensure comprehensiveness, etc.
This isn’t good enough. An ideal solution would also be democratized—in the sense that no news organization gets a free pass, even if it happens to be old or prestigious. This means using a more nuanced classification and scoring system, in order to represent degrees of accuracy. And it requires scoring not just sites which occasionally publish explicitly “fake news,” but all news sites (with enough traffic).
We especially need a methodology that is reproducible—precise enough that no matter who follows the process, they get a similar categorization or rating. While we can use automation and algorithms to get rid of much of the grunt work, some interpretation will need to be done by a human for the foreseeable future (e.g. calling sources to verify a sketchy account), and we need to be diligent about eliminating biases in those tasks.
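One way to make "reproducible" concrete before trusting any ratings: have multiple raters independently score the same sites, then quantify how often they agree. A minimal sketch, where the 0-to-4 accuracy scale, the rater names, and the scores are all illustrative assumptions:

```python
# Hypothetical sketch of a reproducibility check: independent raters score
# the same sites on a coarse 0-4 accuracy scale, and we measure the
# fraction of pairwise comparisons that agree exactly.

from itertools import combinations

def pairwise_agreement(ratings_by_rater):
    """Fraction of (rater pair, shared site) comparisons agreeing exactly.

    `ratings_by_rater` maps rater name -> {site: score}.
    """
    matches = total = 0
    for a, b in combinations(ratings_by_rater, 2):
        shared = ratings_by_rater[a].keys() & ratings_by_rater[b].keys()
        for site in shared:
            total += 1
            matches += ratings_by_rater[a][site] == ratings_by_rater[b][site]
    return matches / total if total else None

# Toy example: three raters, two sites.
ratings = {
    "rater_1": {"site-a.example": 3, "site-b.example": 1},
    "rater_2": {"site-a.example": 3, "site-b.example": 2},
    "rater_3": {"site-a.example": 2, "site-b.example": 1},
}
agreement = pairwise_agreement(ratings)
```

In practice a chance-corrected statistic such as Cohen's kappa would be a better choice than raw agreement, since raters can agree by luck on a coarse scale; the point of the sketch is only that reproducibility can be measured, not asserted.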
We can do better
With defensible credibility scores for news content, we can answer those three questions. We will be able to monitor the health of the news ecosystem. We will be able to name and shame delinquent publishers. Most importantly, platforms like Facebook will have the data they need in order to improve their products and algorithms—for they also want to stop incentivizing sloppy journalism and misinformation.