Is YouTube Keeping You in the Dark?

The Urgent Need for Autonomy and Transparency

Teddy Roberts
Design Ethics
7 min read · Jun 3, 2024



Recommendation Algorithms

In our digital age, recommendation algorithms play an integral and, too often, invisible role in daily life. Whether we are watching videos on YouTube, swiping through posts on Instagram, shopping on Amazon, or searching on Google, these algorithms shape the information, ideas, content, goods, and people we encounter. At their core, these systems predict and prioritize results, ostensibly to enhance user experience and deliver value based on our preferences. While recommendation systems have proved powerful, they raise numerous ethical questions related to digital safety, privacy, and fairness.

To make YouTube's algorithm more ethical, the company must increase transparency and user autonomy. Doing so first requires understanding the ethical complexities of YouTube's content recommendations. From there, empowering users with more autonomy over their digital experiences and offering clear, detailed explanations of the recommendation process can address these ethical concerns. By implementing these changes, YouTube can respect the dignity, rights, and moral agency of its users and foster a more pluralistic and free society.

The Ethical Challenges of Content Recommendation

First, navigating the ethical landscape of content recommendation is a complex task that involves balancing multiple, often conflicting, interests. To understand this moral complexity, this section elaborates on three key challenges: dealing with inappropriate content, privacy issues, and fairness. Each presents ethical dilemmas that require careful balancing and resist any single rule-based approach.

Dealing with Inappropriate Content

Inappropriate content is a perennial problem on YouTube. While YouTube has an ethical responsibility to prevent people from spreading and accessing clearly harmful and illegal content, it faces the ethical dilemma of determining what content should be deemed inappropriate while still respecting the rights of its users and creators.

Though YouTube does an effective job of removing content that clearly violates its terms of service, the risk of unethical content recommendation on the platform remains high. Christopher Burr, in the paper Can Machines Read Our Minds?, found that recommendation systems significantly impact individual autonomy by both subtly and overtly guiding user choices. This guidance can range from harmless filtering of content to outright attempts to manipulate the user. On YouTube, Mozilla’s 10-month crowdsourced investigation into the algorithm found that 43.6% of “YouTube regrets” (videos users wished they had not seen) came from recommendations unrelated to the videos the volunteer had previously watched, and that these regretted recommendations gained 70% more views per day than other videos volunteers watched. These findings suggest that YouTube’s algorithm often oversteps its bounds, undermining users’ autonomy by recommending content they might not want to see.

https://foundation.mozilla.org/en/blog/mozilla-investigation-youtube-algorithm-recommends-videos-that-violate-the-platforms-very-own-policies/

In general, I agree with YouTube’s decision to automate content moderation, as it is an effective way to remove obviously harmful content from the platform as quickly as possible. However, the question of borderline content, highlighted in Mozilla’s findings, remains inadequately addressed.

Privacy Issues

Next, YouTube’s algorithm must balance respecting and protecting user privacy against using personal data to personalize recommendations. While a growing body of law protects personal privacy, such as the European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, YouTube continues to gather and use immense amounts of personal information to drive engagement and advertising revenue. Because recommendation systems inherently rely on user profiles to personalize results, YouTube may argue that its users consent to this data use to make the service possible. Even if one accepts this argument, however, YouTube must be more transparent, as many users do not understand the extent to which Google can make sensitive inferences about them. For YouTube’s use of personal data to be ethically justified, it must be based on informed consent.

Fairness

Finally, fairness in algorithmic decision-making is ethically complex because of the power dynamics involved in deciding what fairness means and who enforces it. On YouTube, fairness is especially relevant to content creators, who seek equal treatment by the algorithm and whose income can be affected by biases in the system. Even for viewers, the effects of algorithmic unfairness can be severe, particularly when recommendations perpetuate social biases and foster echo chambers.

In the 2016 paper On the (Im)possibility of Fairness, Friedler et al. use a mathematical framework to evaluate different fairness definitions by dividing the decision process into three areas: inputs, or the information the system observes; outputs, or the decisions the system makes; and, importantly, the construct space, which captures the “unobservable, but meaningful variables for the prediction.” The study concludes that there is no objective fairness: for a company to be transparent about how fairness works in its algorithm, it must share not just the inputs and outputs of the algorithm but also the assumptions it makes about this construct space. Thus, the only way the YouTube algorithm can truly be fair is through transparency with its users.
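One way to picture this, as a loose sketch of the paper’s framework rather than its exact notation: decisions can only be computed from what the system observes, while fairness is ultimately about constructs it cannot observe.

```latex
% Rough schematic of Friedler et al.'s three spaces (simplified notation):
% decisions are functions of observed data, but fairness is defined
% against constructs that cannot be measured directly.
\[
\underbrace{\mathit{CS}}_{\substack{\text{construct space} \\ \text{(unobservable)}}}
  \;\xrightarrow{\;g\;}\;
\underbrace{\mathit{OS}}_{\substack{\text{observed space} \\ \text{(inputs)}}}
  \;\xrightarrow{\;f\;}\;
\underbrace{\mathit{DS}}_{\substack{\text{decision space} \\ \text{(outputs)}}}
\]
```

Whether the decision rule f looks fair depends on what one assumes about the unmeasurable mapping g, which is precisely the assumption Friedler et al. argue must be made explicit.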

Autonomy and Transparency: The Road Forward

Because of the complexity and the many stakeholders involved in the issues discussed in the last section, I believe the best practice for YouTube is a rights-based approach to content recommendation that prioritizes user autonomy through configurable personalization settings and expands transparency in the areas where YouTube limits access to content. In doing so, YouTube can strengthen user autonomy and trust in the platform. Further, by empowering users, YouTube does not need to resolve every ethical dilemma through centralized decisions; instead, each user gains more power to decide what they deem appropriate for themselves. In a pluralistic and liberal society, this allows users to answer the ethical questions of inappropriate content, privacy, and fairness for themselves, while transparency shows them why those questions matter.

When dealing with borderline content, maximizing user autonomy is crucial. Most thinkers agree on this point; the main debate is over the most effective way to do so. Tang and Winoto, in their 2016 paper I Should Not Recommend It to You Even if You Will Like It: The Ethics of Recommender Systems, argue for a dynamic system in which users can configure their own ethical filters. Each filter is then applied as a second layer on top of the content recommendation system, giving users more control over what they see. This method underscores the value of user autonomy in managing borderline content.
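As a rough illustration (not Tang and Winoto’s implementation, and not YouTube’s API), a second-layer ethical filter might look something like the Python sketch below, where the tags and category names are purely hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    title: str
    tags: set  # content descriptors supplied by the platform or creators

@dataclass
class EthicalFilter:
    """A user-configured second layer applied after the base recommender ranks content."""
    blocked_tags: set = field(default_factory=set)

    def allows(self, video: Video) -> bool:
        # A video passes only if it carries none of the tags the user opted out of.
        return not (video.tags & self.blocked_tags)

def recommend(ranked_candidates: list, user_filter: EthicalFilter, k: int = 10) -> list:
    """Assumes `ranked_candidates` is already ordered by the base recommender;
    the ethical filter simply removes items the user has chosen not to see."""
    return [v for v in ranked_candidates if user_filter.allows(v)][:k]

# Example: a user who opted out of gambling- and diet-related recommendations.
ranked = [
    Video("a1", "Poker strategy deep dive", {"gambling"}),
    Video("b2", "Beginner woodworking", {"diy"}),
    Video("c3", "Extreme weight-loss tips", {"diet"}),
]
my_filter = EthicalFilter(blocked_tags={"gambling", "diet"})
print([v.title for v in recommend(ranked, my_filter)])  # ['Beginner woodworking']
```

The key design point is that the filter sits outside the ranking model: the platform does not need to retrain anything, and each user’s ethical preferences stay under that user’s control.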

Regarding data privacy, giving users autonomy is part of the solution, but transparency also plays an important role. For YouTube’s algorithm to work effectively, it must collect data, so privacy concerns are difficult to address head-on. In his work Towards an Ethical Recommendation Framework, Paraschakis accepts this fact and approaches the problem with what he calls an “ethical toolbox” to be built into recommendation systems. The toolbox remains user-centered by letting users choose whether their data is collected, while being transparent that the algorithm needs data to recommend content they are likely to enjoy.

High-level view of Paraschakis’s personalized recommender system

To some extent, something like this already exists on YouTube with the clear-history option, but Paraschakis argues for a much more comprehensive framework that gives users control over how their data is used, as the diagram above shows. By offering more transparent options around data collection and maximizing user autonomy, YouTube can address the broader ethical challenge of privacy on a data-driven platform.
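A minimal sketch of that consent-first behavior, assuming a made-up catalog and a toy personalization rule rather than Paraschakis’s actual framework, might look like this:

```python
from collections import Counter
from typing import Dict, List, Optional

# Hypothetical catalog mapping video IDs to a coarse topic label.
CATALOG: Dict[str, str] = {
    "v1": "cooking", "v2": "cooking", "v3": "music", "v4": "music", "v5": "news",
}

def personalized(watch_history: List[str], k: int) -> List[str]:
    """Toy personalization: prefer unseen videos from the topics the user watches most."""
    topic_counts = Counter(CATALOG.get(v, "other") for v in watch_history)
    unseen = [v for v in CATALOG if v not in watch_history]
    return sorted(unseen, key=lambda v: -topic_counts[CATALOG[v]])[:k]

def recommend(consented: bool, watch_history: Optional[List[str]],
              trending: List[str], k: int = 3) -> List[str]:
    """Consent-aware entry point: personalize only with an explicit opt-in;
    otherwise fall back to non-personalized, popularity-based results."""
    if consented and watch_history:
        return personalized(watch_history, k)
    return trending[:k]

# A user who opted out of data collection gets trending content instead.
print(recommend(consented=False, watch_history=["v1"], trending=["v5", "v3", "v2"]))
print(recommend(consented=True, watch_history=["v1"], trending=["v5", "v3", "v2"]))
```

The point is not the toy ranking logic but the control flow: personalization is a branch the user explicitly opts into, and the fallback path makes the trade-off visible rather than hiding it.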

Finally, the best way to balance the complex ethical questions of fairness on YouTube is again user-centric measures grounded in autonomy and transparency. As Friedler et al. show in On the (Im)possibility of Fairness, there is no way to design a recommendation system free of bias. As such, the only way for the YouTube algorithm to be fair is for the company to be transparent about the biases built into the system. The challenge is finding the best way to make the algorithm more transparent. Yao and Huang (2017) propose metrics that measure disparities in recommendations across different user groups, together with methods to reduce bias against socially protected groups. In general, the YouTube algorithm can be made more ethical by improving its transparency, thereby respecting users’ autonomy to recognize the biases within the system and decide how they want to act within it.
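To make the idea concrete, one of the simpler disparity measures in this spirit can be sketched in a few lines; the numbers and group labels below are invented for illustration, and this is a non-parity-style statistic rather than a reproduction of Yao and Huang’s full set of metrics:

```python
import numpy as np

def non_parity_unfairness(pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap between the mean predicted score for two user groups.
    A larger gap suggests the recommender systematically favors one group.
    pred  : predicted relevance scores, one per (user, item) pair
    group : 0/1 label for the group of the user behind each prediction"""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Toy example: predictions made for users in two groups.
preds = np.array([4.2, 3.9, 4.5, 2.1, 2.4, 2.8])
groups = np.array([0, 0, 0, 1, 1, 1])
print(f"Non-parity unfairness: {non_parity_unfairness(preds, groups):.2f}")  # ~1.77
```

Publishing a handful of numbers like this alongside the algorithm would not make it unbiased, but it would give users and creators the transparency they need to judge how the system treats them.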

Empowering Users with Autonomy and Transparency

While it is challenging to satisfy all sides of the ethical debate surrounding YouTube’s recommendation system, the best practice in each case is to prioritize user personalization and transparency. By empowering users with more control over their digital experiences and providing clear, detailed explanations of the recommendation process, YouTube can enhance user trust, improve overall satisfaction, and foster a more ethical digital environment.

