FVSO: A More Neutral Approach to Media


FVSO is a web extension that promotes news literacy and critical thinking by quantifying the volume of objective versus subjective language in the media. It allows the user to view headlines from a range of sources side by side, placed on a spectrum that runs from totally objective to totally subjective. The goal is to showcase the differences in objectivity and subjectivity across the headlines and articles of popular news media sources, giving readers full awareness of any potential bias in how they choose their news sources. We also hope users will think more critically about the sources they select and become more attuned to authorial bias in the news they read.

The Problem: Media Bias and Assumptions of Credibility

According to The New York Times, trust in mass media has fallen to alarmingly low levels. The advent of social media and its confluence with news reporting has had a significant impact, since the overlap tends to blur the line between objective and subjective language. Because social media algorithms are designed to display the content most relevant to a user's preferences, they make it increasingly easy to surround oneself only with news that reflects what one already believes.

When a reader's original political leaning becomes magnified and entrenched by social media algorithms that "know" what the reader likes to hear, it is very easy for the reader to start viewing opposing news sources as "false" or lacking in credibility. Hence, we seek to address mass mistrust in the media, which stems from readers' perception of low credibility in news that tells a narrative opposite to the one they prefer, or are accustomed to, reading.

Initial Research: What Makes a Critical Reader?

We discovered data from the Pew Research Center showing that a key step toward readers' "internalization" of the news they consume is whether they can immediately identify a given statement as fact or opinion. Naturally, statements that can be proved or disproved with objective evidence are considered factual; statements that reflect the beliefs and values of the person expressing them constitute opinion. The study found this basic step to be more challenging than initially expected: a majority of Americans correctly labeled three of the five statements given to them, but few labeled all five correctly, and approximately a quarter got them all wrong. This difficulty in simply discerning fact from opinion proved a great starting point for our team as we thought about how to improve readers' critical thinking skills.


This soon became our driving question: how might we increase the critical reading ability of those who read the news? We know that a reader's willingness to accept a piece of news relies heavily on their ability to discern fact from opinion. So, returning to the criteria above that characterize a factual statement versus an opinionated one, we decided to create a news literacy tool that could differentiate objective from subjective language.

First Attempt

Our team faced a couple of issues in creating our media analysis tool. At a basic level, we knew we wanted to establish a language spectrum of objectivity and subjectivity and to provide a visual ranking of sources along it. We also knew we wanted to display these sources side by side, giving the user an array of options they would not normally see when perusing their usual sources. To do so, we considered flagging headlines containing political buzzwords and "emotion-heavy" language, then separating these headlines into objective and subjective groups. However, we were unsure how to define loaded language universally. Ultimately, we addressed this issue through sentiment analysis, a family of algorithms that can distinguish factive (verifiable) language from polarized, opinionated speech. An example of a factive headline might be "Iran calls for solidarity against pandemic, condemns U.S. sanctions", while a more opinionated counterpart might read something like "Trump Should Forget Iran. America Has a Pandemic To Handle".
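
As a rough illustration of what subjectivity scoring looks like, here is a sketch using the off-the-shelf TextBlob library to score the two example headlines above. This is not the model FVSO ships with (described under "How it Works"), just a way to see the contrast in numbers.

```python
# Rough illustration of subjectivity scoring with the off-the-shelf TextBlob
# library (not the production model described later in this post).
from textblob import TextBlob

headlines = [
    "Iran calls for solidarity against pandemic, condemns U.S. sanctions",
    "Trump Should Forget Iran. America Has a Pandemic To Handle",
]

for headline in headlines:
    # TextBlob's subjectivity score ranges from 0.0 (objective) to 1.0 (subjective).
    score = TextBlob(headline).sentiment.subjectivity
    print(f"{score:.2f}  {headline}")
```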

We also wanted to clarify to users that our tool aims to distinguish not between "fact" and "opinion", but rather between "objective" and "subjective" language. The qualifiers "objective" and "subjective" classify individual words or phrases that lead the reader's brain to deem an entire statement either "fact" or "opinion." The distinction means that our tool looks for subtle differences in a sentence's language rather than attempting to verify whole statements as facts. We also struggled with the color scheme: we did not want to use red and green, which could create unintentional bias toward the "objective" or "subjective" categories. Consequently, we settled on an orange and yellow color scheme, with orange for subjective and yellow for objective.

The Solution: Headline Indexing

Our solution centers on headline indexing: the tool takes a single headline entered by the user and color-codes its words according to their subjectivity on a scale from 1 to 5, with 1 being the least subjective and 5 the most.
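
As a minimal sketch of how per-word scores could drive the color coding, assuming the yellow-for-objective, orange-for-subjective palette described earlier; the hex values, the helper name, and the example scores are placeholders, not our actual implementation.

```python
# Minimal sketch: map per-word subjectivity scores (1 = least subjective,
# 5 = most subjective) onto a yellow-to-orange palette.
# The hex values and example scores below are placeholders.

PALETTE = {
    1: "#FFF3B0",  # yellow: most objective
    2: "#FFE08A",
    3: "#FFC864",
    4: "#FFA94D",
    5: "#FF8C42",  # orange: most subjective
}

def color_code(words_with_scores):
    """Pair each word with the color for its 1-5 subjectivity score."""
    return [(word, PALETTE[score]) for word, score in words_with_scores]

# Example with hypothetical scores:
print(color_code([("Trump", 2), ("Should", 5), ("Forget", 4), ("Iran", 1)]))
```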

The scale is designed to summarize and express the nature of the headline at a glance.


The tool provides a breakdown of the article and its origins.

The tool then gives a detailed analysis of why the headline scored as it did and highlights the language that supports the score, so that the way our processing tool works is completely transparent to the user.


The tool then suggests alternative headlines, and their scores, from other sources covering the same topic. The user can then compare them and see how language can change the meaning of the content.


The goal is to encourage users to think more critically about the news they read.

The solution therefore consists of four main parts that must be built (a minimal route sketch follows the list):

1. A search engine where users can input an article or a topic.

2. An analytical algorithm and scoring system with its own page.

3. A comparison article generator that can suggest other news sources.

4. An informative page outlining our methods and our dedication to transparency.
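
A minimal Flask route skeleton for these four parts might look like the following; the route names and stub responses are hypothetical placeholders rather than our actual implementation.

```python
# Hypothetical Flask route skeleton for the four parts above; the route names
# and stub responses are placeholders, not our actual implementation.
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

@app.route("/search")
def search():
    # 1. Search engine: accept an article URL or a topic keyword.
    query = request.args.get("q", "")
    return jsonify({"query": query, "articles": []})            # stub result

@app.route("/analyze")
def analyze():
    # 2. Scoring system: run the headline through the analysis algorithm.
    headline = request.args.get("headline", "")
    return jsonify({"headline": headline, "score": None})       # stub result

@app.route("/compare")
def compare():
    # 3. Comparison generator: suggest headlines from other sources.
    headline = request.args.get("headline", "")
    return jsonify({"headline": headline, "alternatives": []})  # stub result

@app.route("/about")
def about():
    # 4. Transparency page outlining our methods.
    return render_template("about.html")

if __name__ == "__main__":
    app.run(debug=True)
```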

How it Works

The FVSO prototype is hosted on a Python Flask server and is powered by artificial intelligence. The web server is connected to an artificial intelligence extension that uses the article or keywords provided by the user to first return a list of articles with similar content. Internally, a scraping program fetches related news headlines from the web and ranks them according to their similarity to the provided keywords.
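
Here is a rough sketch of the ranking step, assuming the headlines have already been scraped. The TF-IDF cosine similarity used here (via scikit-learn) is an assumption made for illustration, not necessarily the similarity measure our scraper uses.

```python
# Sketch of ranking already-scraped headlines by similarity to the user's
# keywords, using TF-IDF cosine similarity (an illustrative choice).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_similarity(keywords, headlines):
    """Return (headline, score) pairs sorted by similarity to the keywords."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([keywords] + headlines)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(scores, headlines), reverse=True)
    return [(headline, float(score)) for score, headline in ranked]

# Example with two of the headlines mentioned earlier:
print(rank_by_similarity(
    "iran pandemic sanctions",
    ["Iran calls for solidarity against pandemic, condemns U.S. sanctions",
     "Trump Should Forget Iran. America Has a Pandemic To Handle"],
))
```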

Next, the user's chosen headline, along with the headlines of the other selected articles, is analyzed to determine whether it skews more toward subjective or objective information. The sentences are analyzed with PyTorch, an artificial intelligence library created by Facebook that can be used to build a network of virtual neurons, in our case a Recurrent Neural Network (RNN), and more specifically a Long Short-Term Memory (LSTM) network. This model of artificial intelligence is widely used for text analysis tasks such as speech recognition, translation, and sentiment analysis. This type of RNN stores and leverages knowledge about past words: based on the previous words in a sentence or in the article corpus (held in its memory), our LSTM model can first predict which words come next, and second, assign an opinion rank to each word based on its context. These decisions about network architecture and algorithm structure are critical to producing accurate objective/subjective language analysis, since these distinctions are impossible to make on a word-by-word basis without the context of the sentence as a whole.
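
To make this concrete, here is a simplified sketch of the kind of LSTM classifier described above; the vocabulary size, dimensions, and two-class output are illustrative choices, not our exact architecture.

```python
# Simplified sketch of an LSTM headline classifier; sizes are illustrative.
import torch
import torch.nn as nn

class SubjectivityLSTM(nn.Module):
    def __init__(self, vocab_size=20_000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # objective vs. subjective

    def forward(self, token_ids):
        # token_ids: (batch, sequence_length) integer word indices.
        embedded = self.embedding(token_ids)
        # The LSTM's hidden state carries context from earlier words forward.
        _, (hidden, _) = self.lstm(embedded)
        # Classify the headline from the final hidden state.
        return self.classifier(hidden[-1])

# Example with a batch of one headline of ten (random) token ids:
model = SubjectivityLSTM()
logits = model(torch.randint(0, 20_000, (1, 10)))
print(logits.softmax(dim=-1))  # probabilities for objective vs. subjective
```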

Once we obtain the results from this algorithm, the server transmits them to the user-facing application and React displays the information in an interactive user interface.
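
To give a sense of that hand-off, here is a hypothetical example of the kind of JSON payload the Flask server could send to the React front end; the field names and the scores shown are made up for illustration.

```python
# Hypothetical payload handed to the React front end; field names and scores
# are illustrative placeholders, not our actual schema.
import json

analysis_result = {
    "headline": "Trump Should Forget Iran. America Has a Pandemic To Handle",
    "overall_score": 4,  # 1 = least subjective, 5 = most subjective
    "words": [
        {"text": "Should", "subjectivity": 5},
        {"text": "Forget", "subjectivity": 4},
    ],
    "alternatives": [
        {"headline": "Iran calls for solidarity against pandemic, condemns U.S. sanctions",
         "overall_score": 2},
    ],
}

# The Flask server returns this as JSON; the React client fetches it and
# renders the interactive, color-coded view.
print(json.dumps(analysis_result, indent=2))
```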

Looking Ahead

By considering the language criteria that lead a statement to be internalized as fact or opinion, building our algorithm around sentiment analysis and news type, and envisioning a design that carries no implicit bias toward either objective or subjective language, we arrived at FVSO. In the near future, we aim to take this project further by developing an API to share with a range of news platforms that also champion the importance of critical thinking.

Through our product, we hope to heighten users’ critical literacy by emphasizing the difference between objective and subjective language in news media. We also hope to broaden the scope of media consumed, to encourage readers to explore different outlets and embrace a diverse range of views that are dissimilar to their own.
