Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas
Illustrated by Esther Jan.
This post summarizes the research paper “Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas,” authored by Eva Yiwei Wu, Emily Pedersen, and Niloufar Salehi of the University of California, Berkeley. The paper will be presented at the ACM Conference on Computer-Supported Cooperative Work (CSCW) on November 11, 2019.
When the YouTube algorithm promotes a video, what role is it taking on in the social world? An influencer? A talent agent? An employer?
This question is central to understanding human-AI relationships, power dynamics, and potential accountability mechanisms. One group with a unique vantage point on this question is YouTube content creators (YouTubers). Over the past decade, YouTubers have created online and in-person communities where they share tips, engage in mutual aid, and make collective sense of how the platform works. This collective sense-making is challenging because YouTubers have no direct access to the technical workings of the algorithm. At the same time, they interact directly and intimately with the algorithm and its socio-technical infrastructure on the ground, and they have high stakes in how it behaves. We sought to learn YouTubers’ understandings, priorities, and wishes as they relate to the algorithm.
We found that YouTubers assign human characteristics to the algorithm to explain its behavior, a practice we have termed algorithmic personas. We identify three main algorithmic personas on YouTube: Agent, Gatekeeper, and Drug Dealer.
An Agent manages and helps the creator in their work by finding an audience for them and promoting them. A Gatekeeper stands between the creator and viewers and decides who gets through. A Drug Dealer has one (often nefarious) goal: keeping viewers hooked on the platform for as long as possible. YouTubers use these personas to make sense of actions that the algorithm takes and to orient themselves towards the algorithm or sometimes to work against it.
Algorithmic personas are a conceptual framework closely related to prior research on social media folk theories and algorithmic imaginaries. The framework focuses on the roles that algorithms take on in the social world, relying heavily on existing, familiar roles as guides.
In this work we focus on how YouTubers craft algorithmic personas and how they use that understanding to devise strategies to work with, or against, the algorithm.
For instance, the Agent promotes the content creator and procures employment for them. Some creators complained about having to transform their work style to fit the Agent’s tastes and expectations:
“The algorithm forces you to constantly produce content. So you can’t be like I’m going to do a short film and take a break for like a month and a half because short films take time. You can’t do that. You are going to lose hundreds of thousands of followers and you are not going to make money.”
The Gatekeeper stands between the creator and viewers and decides which videos get views. One of our interviewees invoked this persona:
“If you just walk into it and are naive about it and just want to share your thoughts on this topic because you want to, then I don’t think it’s going to reach a broad audience because there is [an] algorithm between you and the viewers. You need to try to understand the algorithm and play to its strengths, or kinda get really lucky.”
The Drug Dealer aims to keep viewers hooked on the platform for as long as possible:
“I’d just like a venue where being mildly interested in games doesn’t have an algorithm go ‘If you liked that, you’ll love this fascist rant!’ ”
In our interviews, and particularly using our design prompts, we engaged with YouTubers around how they would want the algorithm to change. Again, we found YouTubers invoking algorithmic personas as a way to describe their preferred roles for the algorithm: Impartial Judge, Custodian, Diversifier, Educator, Advocator, and Revenue Sharer.
We engaged with YouTubers in one-on-one interviews, performed content analysis on YouTube videos that discuss the algorithm, and conducted a wiki survey in online YouTuber groups. This triangulation of methodologies afforded us a rich understanding of content creators’ understandings, priorities, and wishes as they relate to the algorithm.
We began our study by conducting interviews with local hobbyist YouTubers. This enabled us to establish a basic understanding of their attitudes toward the algorithm and to deepen that understanding by asking questions and probing through card sorting and speculative design exercises. We created prototypes of alternative YouTube front pages and recommendation tabs that addressed matters our interviewees had discussed (see Figure 1 for an example). We continuously adapted these prototypes and made new ones based on our interviews. The goal was not for us to create the best possible YouTube algorithm, but to use design as provocation to elicit reactions from our interviewees. We used our designs to prompt participants to imagine a different YouTube and learn how they form understandings of how a new algorithm operates and affects them.
We analyzed YouTube videos in which YouTubers speak about their understanding of the algorithm. We also analyzed information available online about VidCon, the major convention for YouTubers, as well as YouTuber forums and subreddits. At the end of our study, we sought to validate our findings with a larger group through a wiki survey that we distributed on YouTuber forums and subreddits.
Figure 1. An example of an alternative design we used in our interviews. Each design was a provocation into a different way that the algorithm could work, showcased by what videos it would recommend under those conditions. In this example, viewers are only recommended “smart” or educational videos. We created “Smart Mode” as a provocation in response to interviewees’ complaints about the “dumb” content that the algorithm promotes.
Design and Policy Implications
As we face new challenges around the ethics and politics of algorithmic platforms such as YouTube, algorithmic personas describe familiar roles that can help develop our understanding of algorithmic power relations and potential accountability mechanisms. For instance, a drug dealer is viewed in society as potentially harmful because of the addictive nature of drugs and the associated public harms. From this perspective, there is precedent for policy in favor of public health. In the disturbing live-streamed video that has since been removed from the internet, the Christchurch mosque shooter invited viewers to subscribe to the most-subscribed individual on YouTube. His attack was engineered for virality and meant to feed the extremist content on YouTube and the wider internet. YouTube’s recommendation algorithm has also put minors at risk, recommending nightmarish knock-off versions of popular kid-friendly content and enabling child predators to communicate with each other via the comments section.
Additionally, consider the talent agent: because of the power that agents have over their clients, they often operate under legally binding contracts. Can you sign a contract with an algorithm? As we wrote this paper, six YouTube stars sued YouTube over what they viewed as an unfair algorithm that discriminates against LGBT YouTubers. We are beginning to see algorithms show up in court.
Through the lens of the algorithmic personas crafted by YouTube content creators, we can enrich our understanding of algorithms and their impact on the real world. Algorithmic personas, by invoking human characteristics in code-based algorithmic artifacts, enable designers and policy makers to design human-computer systems with human-to-human relations as guides.
Full citation: Eva Yiwei Wu, Emily Pedersen, and Niloufar Salehi. 2019. Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 219 (November 2019), 27 pages. https://doi.org/10.1145/3359321