Recommender Systems and their Impacts on Autonomy

Nicholas Simons
Dec 17, 2019


Human-AI Interaction


Recommender systems are one of the most ubiquitous manifestations of artificial intelligence today. Across various platforms, such systems suggest to users the media they should consume, the products they should purchase, and even the people they should meet. It may sound melodramatic, but in this way these systems have the power to mold our very lives. By predetermining what we are and are not exposed to, they can shape our knowledge, belongings, relationships, and experiences.

Of course, one might feel that it hardly matters in the short term whether YouTube recommends a video about metalworking or one about Persian history. Ethical questions arise, however, when the effects of recommendation are applied over time and across large populations. Though long-term effects are not yet apparent, since AI recommender systems are still in their infancy, philosophers and technophiles alike have begun to question their far-reaching consequences. Silvia Milano, Mariarosaria Taddeo, and Luciano Floridi of Oxford University identified six areas of ethical concern in recommender systems: user privacy, impacts on autonomy, openness and opacity, social bias, social manipulation, and the recommendation of ethical content itself [4]. One may find discussions of many of these issues across academic articles, opinion pieces, and online fora; the focus here, though, will be on recommender systems’ impacts on user autonomy. In particular, I argue that the creators of such systems must mitigate the degree to which those systems impede autonomy.

What is a recommender system?

To fully understand the ethical implications of recommender systems, it is helpful to first understand the basic features underlying such a system. Although recommender systems may differ greatly in their technical implementation, most follow a three-step pattern to produce recommendations. As defined by F.O. Isinkaye, Y.O. Folajimi, and B.A. Ojokoh, these steps are an information collection phase, in which the system gathers data to create a model of a given user; a learning phase, in which the system, through machine learning, determines how that data may best be used and applied to the model; and a recommendation or prediction phase, in which the system provides the user with suggestions [3]. This process occurs continuously and cyclically, such that feedback gained from the prediction phase may serve as later input to the information collection phase [3].

Recommendation Process from Isinkaye et al. [3]
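To make these phases concrete, below is a minimal sketch of the collect-learn-predict cycle in Python. Everything in it (the class, the trivial "average rating" model, and the example users) is an illustrative assumption of mine rather than a description of any real system.

```python
from collections import defaultdict

class ToyRecommender:
    """A deliberately simple illustration of the collect -> learn -> predict cycle."""

    def __init__(self):
        self.ratings = defaultdict(dict)   # user -> {item: rating}
        self.item_scores = {}              # item -> average rating (the "model")

    def collect(self, user, item, rating):
        # Information collection phase: store explicit feedback about a user.
        self.ratings[user][item] = rating

    def learn(self):
        # Learning phase: build a (trivial) model, here each item's mean rating.
        totals = defaultdict(list)
        for user_ratings in self.ratings.values():
            for item, rating in user_ratings.items():
                totals[item].append(rating)
        self.item_scores = {item: sum(r) / len(r) for item, r in totals.items()}

    def recommend(self, user, k=3):
        # Prediction phase: suggest the top-scoring items the user has not rated yet.
        seen = self.ratings[user]
        candidates = [(score, item) for item, score in self.item_scores.items() if item not in seen]
        return [item for _, item in sorted(candidates, reverse=True)[:k]]


if __name__ == "__main__":
    rec = ToyRecommender()
    rec.collect("alice", "metalworking_101", 5)
    rec.collect("alice", "persian_history", 3)
    rec.collect("bob", "persian_history", 5)
    rec.learn()
    # Feedback on these suggestions would re-enter the collection phase,
    # closing the loop that Isinkaye et al. describe.
    print(rec.recommend("bob"))
```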

Beyond these three steps, recommender systems all employ filtering techniques to determine what information to suggest to users. The three primary filtering techniques are content-based filtering, collaborative filtering, and hybrid filtering [3]. Content-based filtering is the technique by which a system determines what items to suggest by comparing candidate items to the items that a given user has purchased, used, consumed, or rated in the past [3]. For example, a video streaming service might compare a given video to other videos that a particular viewer has rated positively in order to form its recommendations. Collaborative filtering, on the other hand, occurs when a system forms its recommendations by comparing a user and the items he or she has evaluated to other users and the items that they have evaluated [3]. An online store, for example, might employ collaborative filtering by comparing one user’s profile to those of other users who have purchased similar items, and make recommendations accordingly. Finally, hybrid filtering combines features of the content-based and collaborative approaches in an attempt to balance their respective tradeoffs [3].
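As a rough illustration of the difference between the two main techniques, the sketch below scores items for a user in a content-based way (comparing item feature vectors to a profile built from items the user has liked) and finds a user's nearest neighbors in a collaborative way (comparing rating vectors). The feature dimensions, ratings, and names are all invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Content-based: compare each item's (made-up) feature vector to a profile
# averaged from the items this user has already rated highly.
item_features = {
    "video_a": [1.0, 0.0, 0.2],   # e.g. [action, history, music]
    "video_b": [0.1, 0.9, 0.0],
}
user_profile = [0.2, 0.8, 0.1]
content_scores = {item: cosine(f, user_profile) for item, f in item_features.items()}

# Collaborative: compare users by their rating vectors over the same items,
# then lean on what the most similar users enjoyed.
ratings = {
    "alice": [5, 1, 4],   # ratings over [video_a, video_b, video_c]
    "bob":   [4, 2, 5],
    "carol": [1, 5, 2],
}
target = ratings["alice"]
neighbours = sorted(
    ((cosine(r, target), user) for user, r in ratings.items() if user != "alice"),
    reverse=True,
)

print(content_scores)   # video_b scores highest for this history-leaning profile
print(neighbours)       # bob is alice's nearest neighbour
```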

Throughout this article, my goal is to identify the points in the recommendation process, regardless of filtering technique, at which ethical considerations can best be applied so that autonomy may be preserved.

Deception and Coercion

Each recommendation that an AI system makes has the potential to push a user in a particular direction in terms of choice. This undoubtedly affects autonomy. Christopher Burr, Nello Cristianini, and James Ladyman identify four ways in which such systems may impact human autonomy: deception, coercion, trading, and nudging [1]. According to these authors, deception involves a system either misrepresenting the outcome of a user’s decision to follow a recommendation or misrepresenting the value of, or reason for, following that recommendation in the first place [1]. Deceptive recommendations often arise in the form of clickbait advertisements that lead users to believe an article has content of a certain value, only to find that this is not the case. Alternatively, systems may apply coercion to interfere with users’ autonomy. Coercion, in terms of recommender systems, refers to the practice of limiting the actions that users may perform or the content that they may experience [1]. An example of coercion in a recommender system can be seen when a service requires users to complete a survey in order to view certain content.

It should be noted that though the terms “deception” and “coercion” carry clearly negative connotations, these types of interaction may be used in relatively harmless ways. Clickbait and mandatory surveys or promotional advertisements are not necessarily evil, though they are usually at least frustrating. Even so, these interaction methods have the potential to be unethical, which is why recommender system designers must tread carefully. Designers have the ability to create systems that deceptively recommend harmful items, such as racist content or malware. Part of the solution, therefore, lies in the hands of content moderators, who must ensure that otherwise ethical recommender systems and their users do not fall prey to unethical content. More importantly, though, designers must not create systems that purposefully deceive or coerce users. Deception and coercion seem to be most heavily applied during the prediction phase, so it may even be possible to retroactively remove these interactions from existing recommender systems.

Trading and Nudging

The other two interaction types discussed by Burr et al. [1], trading and nudging, seem more innocuous and, in that way, are more insidious. According to the authors, trading refers to recommender systems that determine users’ goals, then suggest items to those users in a way that is ostensibly meant to benefit them but in reality disproportionately benefits the group implementing the system. Ethical issues with trading become apparent when, for example, a system must “decide” whether to recommend to a user Song A, which is the best fit for that user, or Song B, which is not as good a fit but promises greater revenue, through sponsorship or some other means, for the company implementing the system. Regardless of which song is suggested, the user will receive some value; however, if Song B is selected, the company gains the greater advantage. With the inclusion of additional variables beyond the user’s enjoyment and the company’s profit, such as the user data collected or musicians’ rights, the ethical questions about trading grow even more complex.
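To see how trading can play out numerically, consider the hypothetical scoring function below, which blends a song's fit for the user with the revenue it promises the platform. The weights and values are invented for illustration; the point is only that a modest revenue weight is enough to flip the top recommendation from Song A to Song B.

```python
def blended_score(user_fit, sponsor_revenue, revenue_weight):
    """Hypothetical ranking score that trades user fit against company revenue."""
    return (1 - revenue_weight) * user_fit + revenue_weight * sponsor_revenue

# Invented numbers: Song A fits the user best; Song B pays the platform more.
songs = {
    "song_a": {"user_fit": 0.9, "sponsor_revenue": 0.1},
    "song_b": {"user_fit": 0.6, "sponsor_revenue": 0.9},
}

for w in (0.0, 0.3, 0.6):
    ranking = sorted(
        songs,
        key=lambda s: blended_score(songs[s]["user_fit"], songs[s]["sponsor_revenue"], w),
        reverse=True,
    )
    # With w = 0.0 the user-optimal Song A wins; by w = 0.3 Song B already tops the list.
    print(f"revenue_weight={w}: top recommendation -> {ranking[0]}")
```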

Similarly insidious is nudging, an interaction type that leverages a user’s biases, or the decision-making heuristics that that user may employ, in order to push the user toward one recommendation over another [1]. For example, a recommender system may exploit a user’s recognition heuristic, which Gigerenzer and Gaissmaier describe as the human tendency to favor options that are familiar, or recognized, in some way over unrecognized alternatives [2, p. 460]. Such a system might present a list of recommended films to a user, displaying the films it “wants” the user to watch with familiar poster designs while displaying the films it does not “want” the user to view with strange and unfamiliar designs. In this case, the recommender system is not necessarily deceiving or coercing the user; it is not misrepresenting information or attempting to limit user behavior. Still, nudging impacts user autonomy by exploiting human psychology to achieve a desired exchange. Nudging, in some ways, is similar to advertising in general. It becomes ethically questionable, though, when a recommender system nudges a user toward “an item that is sub-optimal for the user, but preferred by the [system],” as described by Burr et al. [1, p. 744].
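A sketch of how such presentation-level nudging might look in code appears below. The film names, the "preferred_by_system" flag, and the poster rule are all hypothetical; the point is that the recommendation list itself is untouched, and only the presentation does the nudging.

```python
# Hypothetical presentation layer: the list of recommended films is fixed,
# but the system chooses a poster variant per film. Reserving familiar artwork
# for the films it "wants" watched exploits the recognition heuristic
# described by Gigerenzer and Gaissmaier.
films = [
    {"title": "Film X", "preferred_by_system": True},
    {"title": "Film Y", "preferred_by_system": False},
]

def choose_poster(film):
    # Invented rule for illustration: familiar artwork for preferred items,
    # unfamiliar artwork otherwise. Nothing is hidden or misrepresented,
    # yet the user is nudged toward the system's preference.
    return "familiar_poster.png" if film["preferred_by_system"] else "unfamiliar_poster.png"

for film in films:
    print(film["title"], "->", choose_poster(film))
```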

Systems that utilize trading or nudging may be less clearly unethical than those that utilize deception or coercion, both for users and for designers themselves. While clickbait and mandatory surveys obviously impede the autonomy of users, even in small amounts, one may rationalize systems that trade or nudge. For example, one may argue that instances of trading provide equal value to every party involved. Similarly, one may argue that nudging is simply the practice of encouraging users to click on items that they are more likely to enjoy. I posit that these interactions become unethical once a group other than the user reaps a greater advantage from the interaction than the user does; this point, however, is difficult to define, resulting in an ethically gray area. Nevertheless, designers must constantly ask themselves, during the design process, whether the recommender systems they are creating disproportionately benefit one party over another. To avoid unethical trading, designers should ensure that, in the learning phase, their systems do not unreasonably favor items that benefit the company at the user’s expense. Nudging is relevant during the learning phase as well, but especially during the prediction phase: designers must guarantee that the methods by which recommendations are presented do not exploit users’ psychology.

Promoting Addiction

In some cases, deception, coercion, trading, and nudging may be abused to promote addictive behavior in users. Burr et al. explain that though the topic of recommender systems’ influence on addiction is relatively uncharted, it is apparent that some tech companies deliberately try to get users “hooked,” using psychological reward systems similar to those seen in gambling addictions [1]. Anthropologist Nick Seaver likens this aspect of recommender systems to traps [5]. The similarity is apparent; just as traps exist to attract animals and ensure captivity, recommender systems exist to draw in users and ensure retention. Issues arise when targeted groups become truly addicted to services offered by recommender systems, such that their daily lives are impacted. Again, designers must consider the gravity of their designs. User acquisition and retention are undoubtedly key factors in a company’s success; however, at what point is a company’s success outweighed by its deleterious effects on users? It is therefore important for designers to avoid implementing features that are meant to form addictions in their users. Here, too, there exists a fine and ambiguous line: it may not always be clear where acceptable advertising practices end and harmful, addictive features begin. Perhaps the best part of the recommendation process to address, in this case, is the information collection phase. Based on the data collected about users and the feedback they provide, a recommender system should be able to determine whether a user is demonstrating addictive use of that system or service. At that point, designers may invoke any number of features meant to reduce addictive behavior, to be determined through user testing.
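As one rough example of what such a feature might look like, the sketch below flags a user whose recent session pattern appears compulsive. The thresholds, field names, and the rule itself are placeholders I have invented; real values would have to be validated through the kind of user testing mentioned above.

```python
from datetime import datetime, timedelta

def flag_possible_overuse(session_starts, window_days=7,
                          max_sessions_per_day=30, max_late_night_share=0.5):
    """Crude, illustrative heuristic: flag a user whose recent usage looks compulsive.
    The thresholds are arbitrary placeholders, not validated values."""
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [t for t in session_starts if t >= cutoff]
    if not recent:
        return False
    sessions_per_day = len(recent) / window_days
    late_night_share = sum(1 for t in recent if t.hour < 5) / len(recent)
    return sessions_per_day > max_sessions_per_day or late_night_share > max_late_night_share

# Example: a week of sessions every half hour, which exceeds the placeholder threshold.
now = datetime.now()
sessions = [now - timedelta(minutes=30 * m) for m in range(48 * 7)]
if flag_possible_overuse(sessions):
    print("Consider a cooling-off prompt or a usage summary for this user.")
```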

How to Approach Ethics

Panels from Calvin and Hobbes, by Bill Watterson (1993). Retrieved from http://utminers.utep.edu/mfernandez/visual%20analysis.html

Of course, “being ethical” is easier said than done. Simply relying on designers to follow a “Reasonable Person Principle” such as the one promoted by Carnegie Mellon University’s School of Computer Science is likely not enough to ensure ethical designs. The truth is that viewing situations through an ethical framework is often easiest from a distance; designers intricately involved with the creation of a recommender system may have too limited a perspective. Furthermore, too few designers and engineers have a solid grasp of ethics. Thus, I argue for a two-pronged approach. First, designers and engineers should be better trained in ethics. Some universities today require that computer science students enroll in ethics courses. This requirement should become standard, and ethical training should become more commonplace in tech companies. Second, ethical review boards should be established to assess recommender systems that have the power to manipulate user autonomy. Ideally, these review boards would operate as third parties, outside of the companies developing recommender systems, so as to mitigate bias. With a deep enough understanding of recommender systems, these boards could even pinpoint the stages in the recommendation process that enable unethical approaches such as deception, coercion, trading, nudging, or the promotion of addiction.

Recommender systems are ubiquitous today and are growing ever more widespread. With the ability to reach so many users and manipulate their autonomy, these systems have the potential to shape societies at large. Therefore, we, as designers, must strive to ensure that our designs uphold ethical principles and avoid impeding our users’ autonomy.

References

[1] Burr, C., Cristianini, N., & Ladyman, J. (2018). An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds & Machines, 28(4), 735–774. Retrieved from https://link.springer.com/article/10.1007/s11023-018-9479-0

[2] Gigerenzer, G. & Gaissmaier, W. (2011). Heuristic Decision Making. Annual Review of Psychology, 62(1), 451–82. Retrieved from https://www.researchgate.net/publication/49653132_Heuristic_Decision_Making

[3] Isinkaye, F.O., Folajimi, Y.O., & Ojokoh, B.A. (2015). Recommendation systems: Principles, methods and evaluation. Egyptian Informatics Journal, 16(3). Retrieved from https://www.sciencedirect.com/science/article/pii/S1110866515000341

[4] Milano, S., Taddeo, M., & Floridi, L. (2019). Recommender Systems and their Ethical Challenges. Oxford Internet Institute. Retrieved from https://www.researchgate.net/publication/332672253_Recommender_Systems_and_their_Ethical_Challenges

[5] Seaver, N. (2018). Captivating algorithms: Recommender systems as traps. Journal of Material Culture. Retrieved from https://static1.squarespace.com/static/55eb004ee4b0518639d59d9b/t/5b707506352f5356c8d6e7d2/1534096646595/seaver-captivating-algorithms.pdf
