Access to information in the algorithmic age

By Jens-Erik Mai, University of Copenhagen, Denmark.

--

One thing that concerns me these days is whether the moves toward more personalized information provision that we see on the web are a good thing. While researchers in information studies have long dreamt of developing systems that provide personalized, relevant, and timely information, and while there is something convenient about receiving only the information that I am personally interested in, I can’t help but think that this is not all good. I wonder: at what cost does this personalized information provision come?

The power to select

One of the core values of the modern Western library is that its services are provided anonymously and that users’ anonymity and privacy are secured. Users are treated equally regardless of who they are; they are provided the exact same information in response to the same request - previous interactions with the library play no role in the provision of information. The only thing that matters is whether there is a match between the information’s subject matter and the subject matter users express an interest in.


When the internet - and later the web - was first developed, the hope was that it would be an emancipatory technology that supported the free flow of information in society (Zittrain, 2008). Today, however, information provision on the web has become dominated by a few private players (Wu, 2010) with enormous power over people’s access to information about the world (Pariser, 2011; Vaidhyanathan, 2011). In fact, the information provided to people in today’s society depends on whom they know, what they are interested in, what they have talked about, and what they ‘like’.

Libraries and search engines possess immense power. They have the power to decide which information to include in and exclude from their search results, how to describe that information, how to make it available, and how to rank the results (Wilson, 1968; Pasquale, 2015). As such, these institutions have “the power to ensure which public impressions become permanent and which remain fleeting” (Pasquale, 2015, p. 61); they hold the power over which information people are provided. While contemporary search engines are difficult to grasp technologically, and most are trade secrets, it is important that we appreciate the immense impact they have on society, as “they have become the default mode of knowledge acquisition” (Chun, 2016, p. 1). As a society, we therefore have an obligation to study, understand, and discuss the principles by which these institutions make information available to us.

We can only wonder what kind of society we will get once information provision has been commercialized and placed more or less solely in the hands of a few private companies. At this point in history, decisions will be made about whether we will continue to have systems for information provision that are built on the values and ethics of the shared, the common good, and the greater public, or whether we will only have information provision from private companies whose primary interest is to sell advertisements and who act as information providers only as a means to that end.

Privacy?

The ability to provide precise, relevant, and context-dependent information requires that information institutions have an exact, complete, and accurate profile of their users. This profile is often constructed via complex predictive analyses of personal information harvested from users’ interactions with various digital platforms across their everyday activities.


Algorithms not only help locate and find information; they select information, sort it, rank it, and determine the precise information that is relevant for a specific user at a particular time and place (Gillespie, 2012). To function effectively, algorithms are fed personal information about specific users; the more information they have about a user’s situation, preferences, history, and relations, the better and more effectively they will function. As social networking sites and search engines continue to integrate, “networked citizen-consumers move within personalized ‘filter bubbles’ that conform the information environment” (Cohen, 2013, p. 1917). Users are thereby not merely presented with relevant information; they are only provided information that is relevant to them personally in their specific situation and context.
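To make this mechanism concrete, here is a deliberately simplified sketch, in Python, of how mixing a personal-interest signal into a relevance score can reorder identical search results for different users. Every name, weight, and document in it is invented for illustration and bears no relation to any actual search engine’s algorithm:

```python
# Toy illustration (not any real system): the same query yields
# differently ordered results once a user profile enters the score.

def rank(query_terms, documents, profile, personal_weight=0.5):
    """Score each document by topical overlap with the query plus a
    weighted overlap with the user's interest profile; sort descending."""
    def score(doc):
        terms = set(doc["terms"])
        topical = len(terms & set(query_terms))
        personal = len(terms & set(profile))
        return topical + personal_weight * personal
    return sorted(documents, key=score, reverse=True)

docs = [
    {"id": "a", "terms": ["privacy", "law", "policy"]},
    {"id": "b", "terms": ["privacy", "technology", "startups"]},
]

# Two users issue the identical query, "privacy" ...
lawyer = rank(["privacy"], docs, profile=["law", "policy"])
founder = rank(["privacy"], docs, profile=["technology", "startups"])

# ... and receive differently ordered results.
print([d["id"] for d in lawyer])   # ['a', 'b']
print([d["id"] for d in founder])  # ['b', 'a']
```

The point of the sketch is only that once `personal_weight` is nonzero, the system no longer answers the question “what matches this query?” but “what matches this query for this person?” - the shift the paragraph above describes.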

I wonder if the cost of personalized information provision is not just ‘bubbles’, but also the end of privacy. Will the future be one in which I am more or less automatically provided information based on my personal profile? One in which I never again have to articulate my need for information, because the system already knows what I am doing, reading, concerned with, and wondering about, and therefore simply provides that information to me? Is this really what we want?

References:
Chun, Wendy Hui Kyong. 2016. Updating to remain the same: Habitual new media. Cambridge, MA: MIT Press.

Cohen, Julie. 2013. What privacy is for. Harvard Law Review, 126: 1904–1933.

Gillespie, Tarleton. 2012. The relevance of algorithms. In Media Technologies, T. Gillespie, P. J. Boczkowski, & K. A. Foot (eds). Cambridge, MA: MIT Press.

Pariser, Eli. 2011. The filter bubble: How the new personalized web is changing what we read and how we think. New York, NY: Penguin Press.

Pasquale, Frank. 2015. The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.

Vaidhyanathan, Siva. 2011. The googlization of everything. Berkeley, CA: University of California Press.

Wilson, Patrick. 1968. Two kinds of power: An essay on bibliographic control. Berkeley, CA: University of California Press.

Wilson, Patrick. 1983. Second-hand knowledge: An inquiry into cognitive authority. Westport, CT: Greenwood Press.

Wu, Tim. 2010. The master switch: The rise and fall of information empires. New York, NY: Alfred A. Knopf.

Zittrain, Jonathan. 2008. The future of the internet — and how to stop it. New Haven, CT: Yale University Press.

About the author:
Jens-Erik Mai is professor of information studies at the University of Copenhagen. His work concerns basic questions about the nature of information phenomena in contemporary society — he is concerned with the state of privacy and surveillance given new digital media, with classification given the pluralistic nature of meaning and society, and with information and its quality given its pragmatic nature.

* * *
