Online Harassment and Content Moderation: The Case of Twitter Blocklists

Shagun Jhaver
Published in ACM CSCW · Oct 24, 2018 · 4 min read

This blog post summarizes a paper about online harassment and content moderation on Twitter, which will be presented at the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).

With harassment growing on many online platforms, what tools do users have to protect themselves? Are those tools effective? Do tools designed to protect users ever block too much content, or block unfairly? In this study, we interviewed people who use Twitter blocklists, a mechanism developed by third-party volunteer Twitter users to address the problem of online harassment. We also interviewed people blocked by Twitter blocklists. Are they really harassers? What kind of content is being blocked?

We focused our study on Good Game AutoBlocker (or GGAB), one of the most popular blocklists on Twitter. We conducted semi-structured interviews with 14 users who subscribe to GGAB and a separate group of 14 users who were blocked on GGAB. We engaged with this latter group because the perspectives of people accused of online harassment are often omitted from discussions of this topic.

Most of the users who subscribed to GGAB described suffering harassment on Twitter and other social media websites. One user discussed how she had to start taking anti-depressants to cope with online abuse. We curated a list of behavioral patterns and tactics that our participants identified as manifestations of online harassment. These tactics include revealing a victim’s private information online (doxing), a group of offenders posting abusive messages to a single individual to intimidate her (dogpiling), and offenders presenting a false impression of their own gender or race (identity deception).

Although Twitter allows users to block any individual account they find undesirable, this process becomes tedious when a user needs to block many accounts, for example, when a mob of accounts posts offensive comments against a single user. Twitter blocklists address this issue by allowing users, with a few clicks, to pre-emptively block every account on a community-curated or algorithmically generated list of block-worthy accounts. GGAB, for example, allows users to block 8,517 accounts with a single subscription in just a few minutes.
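To make the subscription mechanism concrete, here is a minimal, hypothetical sketch of the idea: the subscriber's existing blocks are combined with every account on the curated list. The function names and the in-memory sets are illustrative stand-ins, not the actual GGAB tool or Twitter API.

```python
# A minimal, hypothetical sketch of the blocklist-subscription idea.
# The curated list and block_account() are illustrative stand-ins;
# this is not the actual GGAB or Twitter implementation.

def block_account(subscriber_blocks: set[str], account_id: str) -> None:
    """Record a block on behalf of the subscriber."""
    subscriber_blocks.add(account_id)

def apply_blocklist(subscriber_blocks: set[str], curated_list: set[str]) -> int:
    """Pre-emptively block every curated account not already blocked."""
    newly_blocked = curated_list - subscriber_blocks
    for account_id in newly_blocked:
        block_account(subscriber_blocks, account_id)
    return len(newly_blocked)

if __name__ == "__main__":
    my_blocks = {"account_42"}                                    # blocks made by hand
    ggab_style_list = {"account_42", "account_7", "account_99"}   # curated list
    added = apply_blocklist(my_blocks, ggab_style_list)
    print(f"Blocked {added} additional accounts with one subscription.")
```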

Our participants told us that their Twitter experience improved dramatically after they began using the GGAB blocklist. They stopped getting unwanted notifications and were better able to control the content they received. Some interviewees said that minority groups such as transgender communities especially benefit from anti-abuse blocklists because such groups suffer some of the worst harassment on Twitter. A few participants were surprised to find that some accounts had been blocked for them by mistake through their use of GGAB, but they manually unblocked those accounts. These participants felt that the benefits of using blocklists were worth the cost of having a few false-positive accounts blocked for them.

A majority of our participants who were blocked on GGAB told us that they were surprised to find themselves on the blocklist because they did not feel that any of their actions warranted being blocked. Some were blocked merely for following controversial users, not for posting any controversial content themselves. They worried about the biases of the moderators who curate these blocklists. A few of these users complained that they suffered professionally because of being blocked. For example, a game designer told us that an association for game developers discriminated against him because his account was on the GGAB list.

We also discovered the problem of blocking contagion — when a popular blocklist is forked to create multiple other lists, false positive accounts on the original blocklist end up getting blocked by users who subscribe to any of the several forked lists. This results in a large number of users inadvertently censoring these false positive accounts.
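As a rough illustration of the contagion effect (the fork mechanics below are hypothetical, not how any specific blocklist tool works), a false positive on the original list is inherited by every fork, so subscribers to any fork block that account too:

```python
# Hypothetical illustration of blocking contagion: a false positive on the
# original list is inherited by every forked list, so subscribers to any
# fork also end up blocking that account.

original_list = {"harasser_1", "harasser_2", "innocent_bystander"}  # one false positive

def fork_blocklist(parent: set[str], extra_accounts: set[str]) -> set[str]:
    """A fork starts from the parent's entries and adds its own."""
    return parent | extra_accounts

fork_a = fork_blocklist(original_list, {"harasser_3"})
fork_b = fork_blocklist(original_list, {"harasser_4", "harasser_5"})

# Any subscriber to fork_a or fork_b now blocks the false positive as well.
for name, lst in [("fork_a", fork_a), ("fork_b", fork_b)]:
    print(name, "includes innocent_bystander:", "innocent_bystander" in lst)
```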

We build on these findings to suggest designs that can help address the problem of online harassment while ensuring that users are not blocked unnecessarily or unfairly. We recommend avoiding blocking contagion by tracking metadata about each blocked account (when were they blocked, by whom, and why?) and by making some blocks initially temporary rather than permanent. We found that specific oppressed groups have unique needs, and custom solutions can ideally be developed for them. Most importantly, we recommend that designers focus not just on creating blocking solutions but also on understanding mechanisms that allow users with differing ideologies to interact without fear of being abused.
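A hedged sketch of what such per-block metadata might look like is below; the field names and the expiry check are assumptions made for illustration, not a design taken from the paper or from any existing blocklist tool.

```python
# Illustrative sketch of per-block metadata supporting provenance and
# temporary blocks, following the recommendations above. Field names and
# the expiry logic are assumptions for illustration only.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class BlockEntry:
    account_id: str
    blocked_at: datetime                    # when the block was applied
    blocked_by: str                         # which curator or list added it
    reason: str                             # why the account was blocked
    expires_at: Optional[datetime] = None   # None means a permanent block

    def is_active(self, now: datetime) -> bool:
        """A temporary block lapses once its expiry has passed."""
        return self.expires_at is None or now < self.expires_at

# Example: a provisional, 30-day block with recorded provenance.
entry = BlockEntry(
    account_id="account_99",
    blocked_at=datetime(2018, 3, 1),
    blocked_by="example curator",
    reason="dogpiling reported by three subscribers",
    expires_at=datetime(2018, 3, 1) + timedelta(days=30),
)
print(entry.is_active(datetime(2018, 3, 15)))  # True: still within 30 days
print(entry.is_active(datetime(2018, 4, 15)))  # False: lapsed and up for review
```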

For more details about our methods, findings, and design suggestions, please check out our full paper, which was published in ACM Transactions on Computer-Human Interaction (TOCHI) and featured in the Editor’s Spotlight. For questions and comments about the work, please email Shagun Jhaver at sjhaver3 [at] gatech [dot] edu. Citation:

Shagun Jhaver, Sucheta Ghoshal, Amy Bruckman, and Eric Gilbert. 2018. Online Harassment and Content Moderation: The Case of Blocklists. ACM Trans. Comput.-Hum. Interact. 25, 2, Article 12 (March 2018), 33 pages. DOI: https://doi.org/10.1145/3185593
