The Pew Research Center describes itself as a “. . . nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research.” In a world where our current president has gone after the world’s most used search engine, accusing Google of political bias, a nonpartisan site can be an important resource when looking for the facts. When looking to form opinions, we look to science to give us facts on a situation. We may also rely on authority, someone we recognize as qualified after years of study and practice. We have also learned to look at news headlines. These news headlines, however, are not giving just facts; they are giving a story. A story can sound very, very different depending on who’s telling it.
One chart that has resurfaced amid the rising talk of political bias is the “Media Bias Chart” (see here at Ad Fontes Media). This chart places different news sources along two axes: the political leaning of their articles and how factually grounded they are. The blog post accompanying the chart discusses its creation and the thinking behind it. The company goes sentence by sentence through articles, coding each one in three different ways. The “veracity metric” sorts each statement by how factual it is, scaling sentences into five groups from “True and Complete” to “False”. The company relies on pre-existing fact-checking, stating that, “Since there are many reputable organizations that do this type of fact-checking work, according to well-established industry standards, (see, e.g., Poynter International Fact Checking Network), I do not replicate this work myself but rather rely on these sources for fact checking”. Statements are then sorted into one of five categories ranging from fact to opinion, and then judged as either fair or unfair. The post lists the criteria that qualify a sentence as “unfair”, including things like ad hominem attacks or facts not relevant to the story at hand.
One thing that intrigues me is that Ad Fontes Media does not release the raw data or the weighted data from its rubric grading, stating at the end of its “How Ad Fontes Ranks News Sources” page that “the data set is currently not large enough or in a uniform enough format for public production”. It seems unusual to me for a data set to be too small for public release yet large enough to produce the Media Bias Chart. I myself could not find any of the “sample sets” the site says it releases for transparency, though that could be my own error. The site also says it will not release its algorithms, as they are proprietary, which makes it hard to see how the rankings are weighted.
One issue that arises in even the most rigorous scientific experiments is coding reliability, and it should be accounted for in such experiments to keep results as honest and real as possible. Ad Fontes Media does not appear to report any coding reliability coefficient, instead stating that, “you can get to good results as long as you have standards on how to judge many granular details, and have experts that are trained on such standards implementing them. We’ve begun to create that process here”. People are only human, and the team working on the project “currently consists of *just me* for content analysis, plus a handful of awesome helpers and advisors”, the owner of Ad Fontes states, which makes the lack of a coding reliability coefficient concerning.
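To make concrete what a coding reliability coefficient measures, here is a minimal sketch of Cohen’s kappa, one standard intercoder-reliability statistic. It compares how often two coders agree against how often they would agree by chance alone. The coders, labels, and ratings below are entirely hypothetical and are not drawn from Ad Fontes Media’s data; the point is only to show the kind of check that is missing.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two coders, corrected for chance agreement.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement)
    1.0 means perfect agreement; 0.0 means no better than chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both coders labeled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical coders rating ten sentences on a simplified veracity scale.
coder_1 = ["True", "True", "Mixed", "False", "True",
           "Mixed", "True", "False", "Mixed", "True"]
coder_2 = ["True", "Mixed", "Mixed", "False", "True",
           "True", "True", "False", "Mixed", "Mixed"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # prints 0.53
```

A kappa this low would flag that the two coders’ judgments diverge substantially, which is exactly the kind of information a single-coder process cannot surface.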
Now, this chart is not marketed as a study, and Ad Fontes Media isn’t marketed as the “nonpartisan fact tank” that the Pew Research Center is. However, the Media Bias Chart is out there and, according to an article by MarketWatch, has “gone viral, with thousands of educators at both the high school and college levels using the compelling visual”.