Algorithmic Media

Collaborative filtering is "a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating)". Media outlets apply this collaborative technique through algorithms that are programmed to decide the news we read, the advertisements we watch and the other information we receive.
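As a rough illustration only, the sketch below shows the basic idea behind collaborative filtering. It is a hypothetical, simplified example, not any outlet's actual system: a reader's interest in an unseen article is predicted from the ratings given by other readers with similar tastes, so the "prediction" is simply an echo of existing preferences.

```python
# Minimal user-based collaborative filtering sketch (hypothetical toy data).
# Each reader has rated some articles; we predict an unseen article's appeal
# for a target reader from the ratings of readers with similar tastes.
import math

ratings = {  # reader -> {article: rating}
    "reader_a": {"politics_1": 5, "sports_1": 1, "economy_1": 4},
    "reader_b": {"politics_1": 4, "sports_1": 2, "economy_1": 5, "politics_2": 5},
    "reader_c": {"politics_1": 1, "sports_1": 5, "politics_2": 1},
}

def cosine_similarity(u, v):
    """Similarity between two readers, based on the articles both have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[a] * v[a] for a in common)
    norm_u = math.sqrt(sum(u[a] ** 2 for a in common))
    norm_v = math.sqrt(sum(v[a] ** 2 for a in common))
    return dot / (norm_u * norm_v)

def predict(target, article):
    """Predict the target reader's rating as a similarity-weighted average."""
    num = den = 0.0
    for other, their_ratings in ratings.items():
        if other == target or article not in their_ratings:
            continue
        sim = cosine_similarity(ratings[target], their_ratings)
        num += sim * their_ratings[article]
        den += sim
    return num / den if den else None

print(predict("reader_a", "politics_2"))  # a high score means the article gets shown
```

In this toy version, articles that similar readers liked are pushed forward and everything else fades into the background, which is precisely the filtering effect the rest of this paper discusses.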

It is widely assumed that algorithms are objective since they are computer-based and free from human bias. But there is growing evidence that this is not the case: algorithms eventually catch on to the biases we as humans have and give us (in the case of this paper) the news that we want. At the end of the day, it is humans who create, write and follow up on the algorithms, which are ultimately adjusted according to viewers' reactions and feedback.

Any process that requires a decision, whether made by a human or an algorithm, most likely contains a bias. The mere fact that some media outlets, using algorithmic techniques, remove or eliminate news that the algorithm considers irrelevant or unimportant suggests that the news is not completely objective.

An example to begin with is the London-based startup Summly. Summly chooses the most "important" parts of articles and summarizes them on its platform. The definition of "important" is highly relative, which makes its algorithmic system somewhat biased. What is "important" in a story about ISIS attacks in Iraq may differ from one person to another.
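Summly's actual algorithm is proprietary and not described here; the sketch below is only a hypothetical illustration of how an extractive summarizer might define "importance". The scoring rule used (favouring sentences built from frequently repeated words) is one arbitrary choice among many a programmer could make.

```python
# Hypothetical extractive summarizer: "importance" is whatever the scoring
# rule says it is. Here a sentence scores higher when its words appear often
# in the article -- one arbitrary, human-chosen definition among many possible.
import re
from collections import Counter

def summarize(text, max_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Keep the top-scoring sentences, preserving their original order.
    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in ranked)

article = ("Protesters gathered in Akkar over power cuts. "
           "The power cuts have lasted for weeks. "
           "A local festival also opened nearby.")
print(summarize(article, max_sentences=1))
```

Swapping in a different scoring rule changes which sentences survive the summary, which is exactly why "important" cannot be treated as an objective property of the text.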

News outlets that use algorithmic techniques sometimes disregard certain news and thus fail to give a voice to everyone in society. If we take Lebanon as a case example, a news medium with a larger readership in the South might give less importance to news from the North, sometimes not covering a protest in Akkar, for example. This would obscure the protestors' attempt to call attention to their problem.

When print was more dominant, media bias was easier to identify, allowing readers to choose which bias they wanted to read. Nowadays, viewers and readers do not necessarily know how the algorithmic system works and cannot see what filters are being applied to the news they are looking at. They receive news as it is given to them and cannot choose content on the basis of intellectual and critical choices; rather, the content has already been shaped by previously influenced choices turned into algorithms.

A reader can play a role in decreasing the control algorithms have over the content he or she receives by searching for news in an incognito browser window, where the browsing information will not be stored.

Other solutions that can lessen the media bias resulting from algorithmic personalization include "NewsCube, a Web service which automatically provides readers with multiple viewpoints on a given news item, and Balance, a research project at the University of Michigan that seeks to diversify the result sets provided by news aggregators (such as Google News)".

Media outlets that use collaborative techniques and algorithms should develop systems that make those algorithms transparent, so that readers and viewers understand how the content presented to them is selected. Institutions should create policies that ban algorithms that rank news according to relative criteria, or that judge whether a news piece is important by its number of views. The criteria built into algorithms should be studied more deeply to allow all voices to be heard and provide absolute objectivity.