Algorithmic Media Bias

Flip through your television news channels; what do you see? Fluent yet brief news reports. As a viewer, you can easily tune in and out of these superficially reported stories because of their limited duration and lack of depth. Journalism’s code of ethics is “built upon the two pillars of truth and objectivity” and claims to provide the public with impartial reporting (Ward, 2009, p. 71); measured against that standard, the scenario just described is problematic, arguably even a violation of the code.

In an “interconnected” world where news organizations would rather give people what they want than what they need, there is little room for objectivity. Profit comes first. Television reports are cut down because airtime costs money. Online news reports are asked to follow a structure that is “appealing” to the tech-driven generation. Stories are “editorialized” and reported on the surface, bypassing the facts and leaving out important details that would allow readers or viewers to form their own standpoint on a topic. It is even argued that objectivity is almost impossible in the news. Take the first presidential debate, held on September 26th, 2016, as an example: more news outlets reported Trump’s false claims than Clinton’s. “Journalists should simply do their job, which is to report [all] the facts” (Krugman, 2016), but do they?

This is where algorithmic media comes in, with its promise to strengthen impartiality in news gathering. Algorithms are formulas that transfer and re-order data (Gillespie, 2012, p. 1). Certain outcomes of these algorithmic equations are more likely than others because some factors are weighted more heavily than others; that is a form of algorithmic bias. How bad could that be? It may not sound so bad. After all, we think of machines as incapable of love, hate, or any emotion, so whatever bias they might have sounds more like a glitch to fix or a line of code gone wrong. However, we are exposed to more and more algorithms every day as we continue, intentionally or unintentionally, to provide information online, be it on social media platforms or search engines. Through algorithms, this information is used to model our potential interests and hence tailor the information we see.
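
To see how weighting creates bias, consider a minimal sketch of a feed-ranking score built as a weighted sum of engagement signals. Everything here is hypothetical (the signals, weights, and posts are invented for illustration); the point is only that the choice of weights, made before any content arrives, already decides what rises to the top.

```python
# Minimal sketch: rank posts by a weighted sum of engagement signals.
# Signal names, weights, and numbers are hypothetical, for illustration only.

def score(post, weights):
    """Weighted sum of a post's engagement signals."""
    return sum(weights[signal] * value for signal, value in post.items())

posts = {
    "in-depth policy analysis": {"likes": 40, "shares": 5, "recency": 0.2},
    "viral campaign meme": {"likes": 900, "shares": 300, "recency": 0.9},
}

# Weighting shares and recency heavily favors whatever spreads fastest,
# not whatever is most accurate -- a bias baked into the formula itself.
weights = {"likes": 1.0, "shares": 5.0, "recency": 100.0}

for title, post in sorted(posts.items(), key=lambda item: -score(item[1], weights)):
    print(f"{score(post, weights):8.1f}  {title}")
```

No one “chose” the meme over the analysis; the weights did, and nobody outside the company gets to see or question them.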

Following up on the presidential election example, our social media feeds have become flooded with news reports and ads curated by algorithms to fit our ‘interests’, and I use quotation marks here to highlight the irony of the following situation. All I had to do was like a meme dismissing Trump as a legitimate presidential candidate. One innocent like turned my Facebook timeline red and blue (the colors used in Trump’s election campaign). The meme I liked was anti-Trump; nevertheless, my newsfeed was re-ordered and designed by an algorithm that supposed I was a Republican or a Trump advocate. Was that an innocent mistake?

One of the most influential photographs ever taken is the black-and-white image of a young girl fleeing naked from a napalm attack in the Vietnam War.

Recently, the algorithms that Facebook uses censored and deleted this image. “Any photographs of people displaying fully nude genitalia or buttocks, or fully nude female breast, will be removed,” Facebook said in a warning sent prior to the deletion.
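
Mechanically, such a takedown needs nothing more than a rule that fires on a single attribute and never consults context. The sketch below is hypothetical (the fields and logic are invented, not Facebook’s actual system), but it shows how a context-blind rule inevitably removes even a historically significant photograph.

```python
# Hypothetical context-blind moderation rule: the decision depends on one
# attribute, so news value and historical significance never enter into it.

def should_remove(photo):
    return photo["contains_nudity"]  # no exception for newsworthiness

napalm_girl = {
    "title": "The Terror of War (1972)",
    "contains_nudity": True,
    "newsworthy": True,  # present in the data, but never consulted
}

if should_remove(napalm_girl):
    print(f"Removed: {napalm_girl['title']}")
```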

After releasing a public apology and conceding that the photo should not have been taken down, Facebook took a lot of heat for its algorithmic systems; it was also criticized for falling prey to partisan publications that spin the news into shareable yet inaccurate stories. This shows how technology and algorithms have become editorial decision makers that dictate how we regard news in our modern times, even though they are inconsistent in their accuracy (Gillespie, 2014).

Think about all the stories that have been, and continue to be, shared on Facebook or Twitter concerning the Syrian war, the refugee crisis, the Saudi attack on Yemen, and so on. Will these posts be pushed back once the second presidential debate takes place tonight, October 9th, 2016? How do social media platforms use algorithms to decide what is most important to their users? The short answer is: no one knows. None of the social networking companies has released details on how its algorithms work. What we do know is that these algorithms limit both the quantity and the quality of the information we consume, because they draw attention to the most talked-about topics.

[Tweet screenshots: users turning the debate into a drinking game. Caption: “Priorities?”]

Twitter users tweeting about how they had turned the debate into a drinking game, or making fun of the entire situation, pushed #debate to number one on the Trends list, overshadowing American football (#NFL) and the Saudi/Yemeni intervention (#الطايف_الان). “The slightest algorithmic tweak can shift the scale given to one topic over another” (Lotan, 2013), regardless of the topics’ real-world importance. Search engines like Google or Bing are another example. Unless directly programmed to do so, the Google News algorithm won’t play favorites when picking representative articles for your search, but one of its criteria for choosing articles is “frequency of appearance.” That criterion alone could make Google News’ output appear more partisan than neutral.
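
A toy version of a “frequency of appearance” ranking makes this concrete. The tweet volumes below are invented; notice how one small adjustment to the formula reshuffles which topic leads, even though nothing in the world has changed.

```python
from collections import Counter

# Hypothetical trends list: rank topics purely by mention volume.
mentions = Counter({"#NFL": 130_000, "#debate": 120_000, "#الطايف_الان": 4_000})
print("Ranked by raw frequency:", mentions.most_common())

# "The slightest algorithmic tweak": damp topics that trend every single
# week (e.g., weekly sports), and the leader changes with no change in the news.
recurrence_penalty = {"#NFL": 0.5, "#debate": 1.0, "#الطايف_الان": 1.0}
adjusted = {tag: count * recurrence_penalty[tag] for tag, count in mentions.items()}
print("After one tweak:", sorted(adjusted.items(), key=lambda kv: -kv[1]))
```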

It may be easy to believe that, because algorithms are calculative, they are somehow more “objective”; however, such calculative systems are the most insidious precisely because they go unnoticed and unquestioned. This is how algorithmic biases silently dictate what we see. One suggestion for minimizing the harm is to regulate algorithms through routine, annual auditing by an expert agency, much as the IRS audits tax returns. Hypothetical or real-world scenarios could be run to assess whether algorithmic results come out biased towards a certain race, political party, gender, religion, and so on. That would cut down the chance that algorithms infringe on objectivity, impartiality, or civil rights. Until then, if you ever have trouble finding the information you need on the first page of Google’s search results, go through the other pages. You never know when an algorithm might fail you!
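
What would one such audit check actually look like? Below is a minimal sketch of a demographic-parity test, one of the simplest fairness checks an oversight agency could run. The groups, outcomes, and the 0.2 threshold are all hypothetical.

```python
# Hypothetical audit: do favorable outcomes (1 = story shown, 0 = suppressed)
# occur at similar rates across groups? A large gap is a red flag.

def parity_gap(outcomes_by_group):
    """Return per-group positive rates and the spread between extremes."""
    rates = {group: sum(o) / len(o) for group, o in outcomes_by_group.items()}
    return rates, max(rates.values()) - min(rates.values())

outcomes_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # positive rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # positive rate 0.375
}

rates, gap = parity_gap(outcomes_by_group)
print("positive rates:", rates)
print(f"parity gap: {gap:.2f}", "<- flag for review" if gap > 0.2 else "")
```

Run annually over many such scenarios, checks like this would give regulators a concrete, repeatable way to ask whether an algorithm infringes on impartiality.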

References:

Ward, S. (2009). Truth and objectivity. In Wilkins, L. & Christians, C. (Eds.), The Handbook of Mass Media Ethics. London & New York: Routledge.

Lotan, G. (2013). Networked audiences. In McBride, K. & Rosenstiel, T. (Eds.), The New Ethics of Journalism. London: Sage.

Gillespie, T. (2012). The relevance of algorithms. In Boczkowski, P. & Foot, K. (Eds.), Media Technologies. Cambridge, MA: MIT Press.

Gillespie, T. (2014). Facebook’s algorithm — why our assumptions are wrong, and our concerns are right. Culture Digitally. Retrieved from http://culturedigitally.org/2014/07/facebooks-algorithm-why-our-assumptions-are-wrong-and-our-concerns-are-right/

Krugman, P. (2016). The lying game. The New York Times, The Opinion Pages. Retrieved from http://www.nytimes.com/2016/09/23/opinion/the-lying-game.html?ref=international
