Objectivity and Algorithms

Traditional objectivity emerged in the early 1900s. It was first articulated by American print journalists and later adopted by their Canadian colleagues (Wilkins & Christians, 2009, p. 73).

Objectivity and reliability are widely acknowledged as the first principles of journalism and media ethics. Objectivity in news stories means, above all, avoiding the expression of opinion. Words such as "claims" and "accuses" convey value judgments and therefore invite disbelief, subjectivity, and bias. Only what can be shown to be true should be reported, and all sides must be presented impartially (Brooks, Pinson, & Wilson, 2010, p. 232).

The concepts of truth and objectivity have nonetheless been criticized as outdated, and three factors have produced a deep conflict over the principles of journalism. The first is a corrosive post-modern cynicism about objective truth. The second is doubt about the ethics of profit-seeking news organizations. The third is the rise of non-objective journalism, chiefly citizen journalists and bloggers, which proved better suited to interactive media (Wilkins & Christians, 2009, p. 71).

Traditional objectivity has failed to ensure neutrality, balance, and a proper representation of all voices in society, partly because it belonged to the "mainstream of broadsheet newspapers." As early as 1956, the press theorist Theodore Peterson called objectivity a "fetish." New forms of journalism, new technologies, and changing social conditions have also contributed to its decline. Three main objections explain the failure of traditional objectivity: first, it demands a great deal of work and is therefore difficult; second, it is considered undesirable because it limits the journalist's freedom; third, it restrains the free press and thus harms democracy (Wilkins & Christians, 2009, p. 73).

In digital media, computer algorithms rather than human editors are now responsible for many editorial decisions. An algorithm is a finite list of instructions that a machine performs in order to compute a function; it is a string of commands programmed to execute defined tasks. Put simply, an algorithm is a small piece of computer code (McBride & Rosenstiel, 2014, p. 4).
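
To make this definition concrete, the sketch below is a minimal, purely illustrative Python example (not drawn from McBride & Rosenstiel); the function names and keyword list are hypothetical. It shows an algorithm as a finite list of instructions that computes a function: a toy routine that orders headlines by how many chosen keywords they contain, a crude stand-in for an editorial judgment.

    # A minimal illustration of an algorithm: a finite, ordered list of
    # instructions that a machine follows to compute a function. The
    # "newsworthiness" score below is a hypothetical stand-in for an
    # editorial decision.

    def score(headline: str, keywords: set) -> int:
        # Count how many of the chosen keywords appear in the headline.
        words = headline.lower().split()
        return sum(1 for w in words if w in keywords)

    def rank_headlines(headlines: list, keywords: set) -> list:
        # Return headlines ordered from highest to lowest score.
        return sorted(headlines, key=lambda h: score(h, keywords), reverse=True)

    stories = ["Council passes budget", "Celebrity wedding photos",
               "Protest fills Wall Street"]
    print(rank_headlines(stories, {"budget", "protest", "council"}))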

Algorithmic media promised to restore and strengthen impartiality in news gathering, because algorithms are presented as stabilizers of trust, practical and symbolic assurances that their evaluations are free from subjectivity, error, or attempted influence. Nevertheless, algorithms are not as neutral and flawless as they appear; they are shaped by the interventions of their providers. In reality, an information service cannot be absolutely noninterventionist in its delivery of information. An algorithm may evaluate every site according to its relevance to one's query, yet the results will not include child pornography, they will not include seditious political speech in China, and they will certainly not include endorsements of Nazism in France (Gillespie, 2014, p. 13).
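
The sketch below, written in Python, is an assumption-laden illustration rather than a description of Google's actual system; the topic labels and country codes are hypothetical. It shows Gillespie's point in miniature: the same routine that scores pages for relevance to a query also applies a provider-imposed policy filter that differs by jurisdiction, so the delivery of results is never purely noninterventionist.

    # A hedged sketch (not any real search engine's implementation) of
    # relevance ranking combined with jurisdiction-dependent policy filters.

    BLOCKED_TOPICS = {
        "global": {"child_abuse_imagery"},
        "CN": {"seditious_political_speech"},
        "FR": {"nazi_endorsement"},
    }

    def relevance(page: dict, query: str) -> int:
        # Toy relevance score: how often the query terms appear in the page text.
        return sum(page["text"].lower().count(t) for t in query.lower().split())

    def search(pages: list, query: str, country: str) -> list:
        # Rank by relevance, then drop anything the policy filter blocks.
        blocked = BLOCKED_TOPICS["global"] | BLOCKED_TOPICS.get(country, set())
        allowed = [p for p in pages if p["topic"] not in blocked]
        return sorted(allowed, key=lambda p: relevance(p, query), reverse=True)

    pages = [
        {"topic": "news", "text": "election results and analysis"},
        {"topic": "seditious_political_speech", "text": "election protest manifesto"},
    ]
    print(search(pages, "election protest", country="CN"))  # second page is filtered out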

It is people themselves who made the top results of a Google search so important. Users fixate on the first page of results, disregarding the possibility that better results sit on the second or third page. As long as people open links only from the first page, the same results will keep appearing there. People assume that the results on the first page are the only credible ones, without considering that their prominence is partly an artifact of the algorithm. This does not mean that Google is no longer a credible source of information; the problem lies in how its algorithm evaluates and orders results (Gillespie, 2014, p. 14).
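
The short simulation below, again in Python, is a hypothetical model rather than Google's ranking method. It illustrates the feedback loop just described: if users only ever click links on the first page, and those clicks feed back into the ranking, the same results keep reappearing on the first page no matter what sits on page two or three.

    # A toy simulation of the first-page feedback loop (an assumption for
    # illustration, not a model of Google's ranking).
    import random

    def simulate(n_results: int = 30, page_size: int = 10, rounds: int = 1000) -> list:
        clicks = [0] * n_results                # click count per result
        order = list(range(n_results))          # current ranking
        for _ in range(rounds):
            first_page = order[:page_size]      # users only look at page one
            chosen = random.choice(first_page)  # and click something on it
            clicks[chosen] += 1
            # Re-rank purely by accumulated clicks.
            order = sorted(range(n_results), key=lambda i: clicks[i], reverse=True)
        return order[:page_size]

    # The ten results that started on the first page never leave it.
    print("Still on the first page:", simulate())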

A further example, drawn from our class discussion, is Twitter, which commits a similar algorithmic fallacy. It determines trending hashtags by how many tweets and retweets a subject has received, or by how many times a hashtag has been used. To illustrate, on May 24, 2014, two events coincided: Kim Kardashian's wedding and an Occupy Wall Street protest. Unfortunately, only a minority of users learned about the Occupy protest, because tweets about Kim Kardashian's wedding were trending and had taken over Twitter.
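
The sketch below, a simplified Python illustration and not Twitter's actual implementation, shows how a purely count-based trending algorithm of the kind described above behaves: whichever hashtag is used most often trends, regardless of the civic importance of the topic. The sample tweet counts are invented for illustration.

    # A hedged sketch of count-based trending: the most frequently used
    # hashtags surface, whatever their civic value.
    from collections import Counter
    import re

    def trending(tweets: list, top_n: int = 3) -> list:
        # Return the most frequently used hashtags across a batch of tweets.
        tags = []
        for tweet in tweets:
            tags.extend(re.findall(r"#\w+", tweet.lower()))
        return Counter(tags).most_common(top_n)

    sample = (["So beautiful #KimKardashianWedding"] * 9000
              + ["March on the banks today #OccupyWallStreet"] * 400)
    print(trending(sample))  # the wedding hashtag dominates the trend list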

Appraisals based on algorithmic objectivity, as in the Google and Twitter cases, cannot be justified when less worthy topics crowd out far worthier ones. Human judgment in editorial decision-making is more trustworthy than algorithmic objectivity, because humans, unlike machines, can weigh news according to its worth and importance.

Social media and search engines have not put algorithmic objectivity into practice in a way that reinforces objectivity. Traditional objectivity, as discussed above, remains the more reliable standard. Rather than reducing manipulation and bias in the news, algorithmic objectivity has reinforced them, because it weighs the number of shares rather than the value of the topic. This is precisely what happened on Twitter on May 24, 2014, and what continues to happen in Google search. Algorithmic objectivity has thus undermined, rather than strengthened, the principles of neutrality and impartiality in news reporting.

References:

Brooks, B. S., Pinson, J. L., & Wilson, J. G. (2010). Working with words. Boston: Bedford/St. Martin's.

Wilkins, L., & Christians, C. G. (Eds.). (2009). The handbook of mass media ethics. New York: Routledge.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society. Cambridge, MA: MIT Press.

McBride, K., & Rosenstiel, T. (Eds.). (2014). The new ethics of journalism. London: SAGE.