Algorithms “Objectively” Shaping Culture

Line Itani
7 min read · Mar 7, 2016


Media’s primary purpose is to deliver information to the people. Social media, specifically, offer their users a space to interact and share news or thoughts. The problem with media today, however, is that they are no longer edited by human beings but by man-made algorithms, claimed to be objective, that operate on their own once activated. That is, computer programs decide what to show the public as the most relevant news and what to dismiss as unimportant. The question arises: do algorithms really defend objectivity, or do they, on the contrary, create an imbalanced and unfair representation of the news?

At base, algorithms are “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2012, p. 1). They make editorial choices for people, on the grounds that too much data circulates through the media for humans to sort. They thus judge as “important” whatever is most clicked on, whatever appears to be of public interest. This may seem objective and even democratic, since the people indirectly choose what they want to see, but it is more complicated than that. The information that is most clicked on, or that an algorithm deems relevant, may be the exact opposite of what a specific user actually cares about. The evaluation algorithms apply is thus based on quantity instead of quality, a direct blow to what information actually stands for. Having Kim Kardashian’s wedding outweigh Occupy Wall Street on social media is a prominent example of why such algorithms should not be applied, or should at least be fixed so that they actually achieve objectivity. In that case, the voices of thousands of protestors in one of the most important public movements in New York were silenced and replaced by the voice of E! Entertainment’s audience. Setting aside the fact that one piece of news is obviously more valuable than the other (for civic society, at least), both should be represented fairly so that the public can view them. It is dangerous for algorithms to indirectly censor such information from the public; it renders people ignorant. In fact, algorithms “are influencing the ways in which we ratify knowledge for civic life” (Gillespie, 2012, p. 3), extending their control to the political and social level.
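Gillespie’s definition can be made concrete with a toy sketch. The few lines below (the story titles and click counts are invented for illustration, not drawn from any real platform) rank a feed purely by click volume, showing how a quantity-based “relevance” metric pushes a celebrity story above a civically important one:

```python
# Toy feed ranker: "relevance" is measured purely by click counts.
# All titles and numbers are hypothetical, for illustration only.
stories = [
    {"title": "Celebrity wedding coverage", "clicks": 2_400_000},
    {"title": "Occupy Wall Street protest report", "clicks": 310_000},
    {"title": "City council budget vote", "clicks": 12_000},
]

def rank_by_clicks(items):
    """An 'encoded procedure': input data in, a ranked feed out."""
    return sorted(items, key=lambda s: s["clicks"], reverse=True)

feed = rank_by_clicks(stories)
for position, story in enumerate(feed, start=1):
    print(position, story["title"])
```

No editor ever weighs civic value here; the sort key is the only judgment the program makes, which is precisely the quantity-over-quality problem described above.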

However, it is not only social media that decide what we would be most interested in as we scroll through our feeds; Google, a search engine, does the same. According to Google engineer Matt Cutts, when deciding which websites to boost, Google surfaces the pages that appear most relevant, most credible, and most visited (Cutts, 2010). This, one could argue, limits our knowledge of other websites that might interest us. But the problem does not stop there. Google’s results are actually tailored to each and every user: it chooses which information to return based not only on the search but also on who is searching. In a way, it judges who we are and what would interest us most. That is, it shows us “what it thinks we want to see but not necessarily what we need to see” (Pariser, 2011, 3:47), thereby controlling what we see and in which order. And yet Google remains the most trusted search engine, the answer to all our questions. It is needless to repeat how dangerous this is. While we think the internet guarantees our freedom of speech and access to information, we are constantly censored and limited by the information that appears to us. Not only is information unfairly distributed, it is also distributed differently to different people, creating a gap between what different people learn about the same topic.

What appears even more troubling is that we cannot hold anyone accountable for this. Humans certainly create algorithms, but the latter then operate on their own, judging information without further interference. And if the media companies are approached, they will claim that algorithms are still more objective than human editors. But should algorithms really have a say in what shapes our culture? A dry, uncultivated computer program (however potent) should not judge what we humans are supposed to, and are going to, see. Perhaps, to attain more transparency and a fair representation of all information, the media, and especially search engines such as Google, should ask their users what they are interested in, instead of simply auditing them and deciding based on algorithmic stalking. That way, people would at least have a say in what information they are exposed to.

References

Cutts, M. (2010). How Search Works [Video]. Retrieved from https://www.youtube.com/watch?v=BNHR6IQJGZs

Gillespie, T. (2012). The Relevance of Algorithms. Media Technologies.

Pariser, E. (2011). Beware Online “Filter Bubbles” [Video]. Retrieved from https://www.youtube.com/watch?feature=player_embedded&v=B8ofWFx525s

