Do Users Want Platform Moderation or Individual Control? Examining the Role of Third-Person Effects and Free Speech Support in Shaping Moderation Preferences
This blog post summarizes a survey study that compares users’ preferences for platform-directed, top-down moderation versus personally configurable moderation of content on social media sites. The paper has been published in the journal New Media & Society.
Today, social media companies like Facebook, Twitter, and YouTube have become the new governors of digital expression. At the same time, individuals using these sites can contribute to governance in several ways, such as flagging an account, serving as a volunteer moderator, or downvoting a post. We are thus moving toward a pluralist model of speech governance in which regulation happens in a multi-stakeholder fashion: legislative entities enforce online speech laws, platform operators configure regimes of acceptable content, and users themselves intervene against content they perceive as problematic.
In this article, we focus on personal moderation tools offered by platforms, which let end users configure the moderation of the posts they see to align with their own preferences. We are primarily concerned with tools currently offered by platforms such as Instagram and Twitter that let users specify their sensitivity to specific topical categories, such as sexually explicit content and hate speech (see Figure 1).
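To make this concrete, here is a minimal, hypothetical sketch of what per-user sensitivity settings and the resulting filtering might look like; the category names, sensitivity levels, and helper function are illustrative assumptions, not the actual settings exposed by Instagram or Twitter.

```python
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    """Hypothetical per-category sensitivity levels a user could pick."""
    ALLOW = "allow"   # show this content unfiltered
    LIMIT = "limit"   # show less of this content (e.g., down-rank it)
    HIDE = "hide"     # filter this content out entirely


@dataclass
class PersonalModerationSettings:
    # Illustrative per-user configuration: each topical category gets its own threshold.
    hate_speech: Sensitivity = Sensitivity.LIMIT
    sexually_explicit: Sensitivity = Sensitivity.HIDE
    violent_content: Sensitivity = Sensitivity.LIMIT


def should_show(post_labels: dict, settings: PersonalModerationSettings) -> bool:
    """Return True if a post passes this user's personal moderation settings.

    post_labels maps category names (e.g., "hate_speech") to booleans produced
    by the platform's classifiers. Only HIDE removes a post in this sketch;
    LIMIT would instead feed into ranking, which is omitted here.
    """
    preferences = {
        "hate_speech": settings.hate_speech,
        "sexually_explicit": settings.sexually_explicit,
        "violent_content": settings.violent_content,
    }
    return not any(
        post_labels.get(category, False) and preference is Sensitivity.HIDE
        for category, preference in preferences.items()
    )


# Example: a post flagged as sexually explicit is hidden under the default settings.
print(should_show({"sexually_explicit": True}, PersonalModerationSettings()))  # False
```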
Configuring such tools lets users align the moderation system with their own tastes and thresholds. Prior research has documented that users often find platform-enacted moderation decisions unfair, opaque, and intractable. Personal moderation tools therefore present an appealing alternative. However, we do not yet know when users prefer to have a choice in shaping moderation, when they would rather have platforms manage it for all users, and what factors shape these preferences.
Informed by the third-person effects (TPE) hypothesis, we fill this gap by examining users’ preferences in the context of three norm-violating speech categories previously studied in the literature: (1) hate speech, (2) sexually explicit content, and (3) violent content. Prior research has shown that perceptions of the effects of media messages on others predict content regulation attitudes. We examine the role that TPE plays in shaping user attitudes about deploying platform-enacted versus personal moderation tools.
We also connect our findings to scholarship on public attitudes toward freedom of expression and its consequences. The introduction of personal moderation tools complicates questions about upholding free speech principles. On the one hand, users can employ these tools to avoid specific content categories while others continue to see the same content, thereby avoiding any infringement on others’ online expression. On the other hand, personal moderation tools could be framed as a way for people to avoid viewpoints they dislike: while users have always been able to ignore content, these tools make it easy to remove broad swathes of it. We therefore analyze how users’ support for freedom of expression shapes their attitudes toward different moderation approaches.
We conducted a nationally representative survey of 984 US adults to examine how third-person effects and support for freedom of expression influence users’ support for platform-enacted moderation and personal moderation. Our survey questionnaire contained three blocks with similar questions about hate speech, sexually explicit content, and violent posts (see Figure 2). Our analysis controlled for each respondent’s age, education, gender, race, political affiliation, and social media use.
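As a rough illustration of this kind of analysis, the sketch below fits a linear model of support for platform-enacted moderation of hate speech on perceived effects and the controls listed above; the file name and variable names are placeholders, not our actual survey data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data file and column names; these are illustrative, not the study's codebook.
df = pd.read_csv("survey_responses.csv")

# Support for platform-enacted moderation of hate speech, regressed on perceived
# effects of hate speech on others (PME3) plus the demographic and usage controls.
platform_model = smf.ols(
    "support_platform_hate ~ pme3_hate + age + education + gender + race"
    " + political_affiliation + social_media_use",
    data=df,
).fit()

print(platform_model.summary())
```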
Our results show that, for each of hate speech, violent content, and sexually explicit content, a majority of participants at least somewhat agreed that platforms should ban that content. Likewise, for each category, a majority of participants at least somewhat agreed that platforms should offer personal moderation tools that let end users regulate it themselves (see Figure 3).
Given a choice between platform-wide moderation and a personal moderation tool to regulate hate speech, violent content, and sexually explicit content, 52.4%, 52%, and 55.3% of participants, respectively, chose the personal moderation tool (Figure 4). This finding shows that more participants prefer having autonomy over moderation than delegating it to platforms. It also suggests that social media users are primarily concerned with their own individualized experiences rather than the broader societal implications of platforms fulfilling their ethical responsibilities.
Further, our linear regression analysis showed that perceived effects on others (PME3) predicted participants’ support for both platform-wide and personal moderation. This theoretically significant finding advances TPE research by showing that perceived effects on others play an essential role in triggering censorial attitudes. Given a choice between platform and personal moderation, greater PME3 predicted a preference for platform moderation over personal moderation in each content category. This indicates that when users perceive a content category as harmful to the public, they want platforms to take site-wide action on that content rather than regulating it only for themselves. It suggests some public appetite for platforms to take on the responsibility of protecting vulnerable others from content deemed egregious, even at the expense of personal moderation control.
We also found that support for free speech predicted support for using personal moderation to regulate each inappropriate speech category. This suggests that people may perceive personal moderation tools not as an infringement on the free speech of others but simply as according them greater personal agency to shape what they see. Further bolstering this interpretation is our finding that given a choice between platform and personal moderation, support for free speech predicts support for personal moderation in each content category. This highlights users’ approval of a shift toward a new approach to content curation that emphasizes individual choice rather than endorsing top-down censorship by platforms or other entities.
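Because the forced-choice outcome (platform versus personal moderation) is binary, one plausible way to model the patterns described in the last two paragraphs is a logistic regression; the sketch below reuses the placeholder variable names from above and is an assumption about the analysis form, not a reproduction of the paper’s code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # placeholder file and column names, as above

# chose_personal_hate: 1 if the respondent picked the personal moderation tool for
# hate speech over platform-wide moderation, 0 otherwise (hypothetical variable name).
choice_model = smf.logit(
    "chose_personal_hate ~ pme3_hate + free_speech_support + age + education"
    " + gender + race + political_affiliation + social_media_use",
    data=df,
).fit()

# Under the pattern reported above, pme3_hate would carry a negative coefficient
# (higher PME3, more likely to choose platform moderation) and free_speech_support
# a positive one (stronger free speech support, more likely to choose the personal tool).
print(choice_model.summary())
```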
Overall, our results support a scheme where both platform and personal moderation are available and robust. Platforms could reduce their site-wide bans of borderline content (e.g., content that does not generate high PME3) and instead empower users with personal moderation tools that allow specifying their moderation preferences for that content. Additionally, future research should determine best practices for designing and using personal moderation tools.
For more details, please check out the full text of our paper, preprint available here.