Perceiving The Fairness of Creator Moderation on YouTube

Renkai Ma · Published in ACM CSCW · Oct 3, 2022

This post summarizes our CSCW 2022 paper:
► Content creators like YouTubers not only experience content moderation, which regulates their content, but also creator moderation, which uses multiple governance mechanisms to moderate their visibility, online identities, audience engagement, revenue, and more.
► Creators develop fairness perceptions from their interactions with these creator moderation mechanisms.

Problem Statement

As content creators create, share, and monetize their content on platforms, media outlets have frequently reported that creators are treated unfairly by content moderation and algorithms. For example, in 2019 it was reported that “YouTube moderation bots punish videos tagged as gay or lesbian,” and YouTubers have tried to prove that the platform systematically demonetizes queer content (i.e., decreases or removes advertising income). When platforms decide that creators have violated content policies, creators face threats to their income, fanbase connections, and livelihoods.

Here we look at a unique context in which creators interact with moderation: because creators derive a livelihood from platforms, they want to work with moderation systems.

However, moderation decisions can be confusing, complex, and unpredictable, which renders creators’ creative labor precarious. We ground our investigation in this context to understand how YouTubers perceive the fairness of moderation.

Findings

After interviewing 21 YouTubers who had experienced moderation on YouTube, we found:

  1. (In)Equality in Comparing Moderation Treatment: Our participants actively compared their moderation treatment with that of other creators to assess moderation fairness. Participants with larger and smaller fanbases alike thought differently about fairness and claimed they could observe unequal moderation actions applied to one another.
  2. (In)Consistency within Algorithmic Moderation Decisions: YouTubers’ fairness perceptions hinge on consistency across moderation decisions and policies. Our participants perceived unfairness when they observed the moderation system make inconsistent decisions, give inconsistent explanations, distribute resources inconsistently, or act inconsistently with its own content policies.
  3. (Lack of) Voice in Algorithmic Visibility Decisions: Procedural justice assumes that having a voice in decision-making processes enhances people’s perceived fairness and is more likely to produce equitable outcomes. Our participants wanted their voice to be involved either during moderation decision-making or after moderation decisions were issued. But by the time participants realized their input had not been heard, moderation decisions and the associated algorithms had already taken effect, triggering negative ripple effects on their content, performance metrics, fanbase, and more. This gave rise to perceived unfairness.
Inequality in moderation treatment, inconsistency within algorithmic decisions, and the lack of voice in algorithmic decision-making processes triggered perceived unfairness of (creator) moderation on YouTube.

Design Considerations

Since creators may receive different moderation decisions, they observe different ripple effects from those decisions. As a result, creators can have moderation experiences of varying quality and, consequently, different situated reasons for their fairness perceptions of moderation.

To empower creators and better account for their situated moderation experiences, we detail several design considerations for creator platforms like YouTube.

  • First, creator platforms could disclose to creators moderation decisions that are otherwise hard to discover, such as whether a video is hidden under the restricted mode filter.
  • Second, platforms should inform creators of how moderation decisions affect their income, visibility, audience engagement, and other performance metrics.
  • Last, we argue that creators’ voice should be valued in algorithmic decision-making processes, for example by proactively informing creators of predicted moderation decisions.

💡💡💡✨ Please also be sure to check out our poster at CSCW 2022 discussing how algorithmic bureaucracy is intertwined with organizational bureaucracy in creator moderation.

Citation format:

Renkai Ma and Yubo Kou. 2022. “I’m not sure what difference is between their content and mine, other than the person itself”: A Study of Fairness Perception of Content Moderation on YouTube. In Proceedings of the ACM on Human-Computer Interaction, Vol. 6, CSCW, Article 425, November 2022. ACM, New York, NY, USA. 28 pages. https://doi.org/10.1145/3555150

Renkai Ma is an HCI researcher focusing on content creators and governance. https://www.renkaima.xyz/