“This researcher programmed bots to fight racism on Twitter. It worked.”

Jess Brooks
On Race — isms
Oct 2, 2017

“there may be limits to the effectiveness of top-down efforts by companies that run social-media platforms. In the short run, heavy-handed sanctions like account bans can actually embolden users who are censored. There is excellent evidence that this happens in China when the regime employs censorship.

A better option might be to empower users to improve their online communities through peer-to-peer sanctioning. To test this hypothesis, I used Twitter accounts I controlled (“bots,” although they aren’t acting autonomously) to send messages designed to remind harassers of the humanity of their victims and to reconsider the norms of online behavior… I sent every harasser the same message:

@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language

I used a racial slur as the search term because I thought of it as the strongest evidence that a tweet might contain racist harassment. I restricted the sample to users who had a history of using offensive language, and I only included subjects who appeared to be a white man or who were anonymous…

Only one of the four types of bots caused a significant reduction in the subjects’ rate of tweeting slurs: the white bots with 500 followers… tweets from black bots with few followers (the type of bots that I thought would have a minimal effect) actually caused an increase in the use of racist slurs.”
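The excerpt only loosely describes how the intervention was run, so here is a rough, purely illustrative sketch of how a peer-to-peer sanctioning bot like this could be wired up. Everything in it (the Tweepy library, the old v1.1 search endpoint, the placeholder credentials and search term, the looks_like_harassment filter) is an assumption made for illustration, not the study's actual code.

# Illustrative sketch only: assumes Tweepy 3.x and Twitter API v1.1 access,
# which may no longer be available in this form. Credentials and the search
# term are placeholders, and the sampling filter is a stub.
import tweepy

CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

SEARCH_TERM = "<offensive search term>"  # the study searched for a racial slur
SANCTION_MESSAGE = (
    "Hey man, just remember that there are real people who are hurt "
    "when you harass them with that kind of language"
)

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

def looks_like_harassment(tweet):
    """Stand-in for the study's sampling rules: a history of offensive
    language, and an account that appears to be a white man or anonymous."""
    return True  # real filtering logic would go here

for tweet in api.search(q=SEARCH_TERM, count=100):
    if not looks_like_harassment(tweet):
        continue
    # Reply from the bot account, addressing the harasser directly.
    api.update_status(
        status=f"@{tweet.user.screen_name} {SANCTION_MESSAGE}",
        in_reply_to_status_id=tweet.id,
    )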

