Tackling Toxicity: Getting a Grip on the Problem

Peter Alau
Spirit AI
Jul 8, 2018 · 3 min read

Tim Berners-Lee, inventor of the World Wide Web, considers online toxicity to be one of the two main challenges the internet faces today. Doxxing and trolling used to be words found only in the deeper subcultures of online spaces; now they appear on the evening news as common vernacular.

Abuse and toxicity range from the everyday onslaught of anonymous attacks on social media to the stalking, brigading, and threatening of unlucky individuals. More extreme abuse has resulted in fatalities that began as mean-spirited online pranks.

For the last two decades, researchers have studied the various anti-social uses of the internet to determine where these behaviors come from and how they can be curbed. Despite this, there is no single agreed-upon term for the range of malignant online behaviors under scrutiny. The top three terms are “cyberbullying” (44,000 results), “online harassment” (7,350 results), and “online abuse” (2,590 results). Though “cyberbullying” is the most prevalent term, it is restricted to anti-social behavior that is intentional and repeated. The terms “online harassment” and “online abuse” carry similar connotations. Instead, we will use the term “toxicity” to refer to all anti-social behavior online, regardless of intent or frequency. It is derived from the common use of the word “toxic” to describe people who act “extremely harsh, malicious, or harmful”.

We have been studying online abuse long enough; it’s time to reduce it. We (at Spirit AI) will be publishing a taxonomy of online abuse in an attempt to create a common language that anyone looking at the problem can use as a starting point.

Diving in:

So how do we analyze toxic behavior online? And why would we bother? First, to analyze a collection of behaviors as complex as toxicity, a framework or taxonomy needs to exist that can be used to deconstruct it. Once you have deconstructed a particular behavior into its components, you can more easily and systematically research, discuss, and tackle it. Let us look at an example to illustrate how a taxonomy can help us deal with toxicity better.

Imagine you encounter an online community and find major issues with both single-instance hate speech and coordinated, negative forum/chat raids. You want to develop a solution that tackles both. A hate speech solution might include extensive profanity filtering, or a human moderator reading through comments before they are posted. Would this solution also deter raids? Probably not. Though raids can make use of hate speech to achieve their effect, they can also employ other behaviors (e.g. downvoting or mass reporting), and they additionally have the force of numbers and coordination on their side. A profanity filter can take the edge off some forms of raiding, but it will be circumvented quickly through the group’s coordination (finding workarounds and spreading them across the group), while a human moderator would not be able to keep up with the volume of content generated by such a coordinated group attack. So while text-based raiding in essence achieves its effect by employing hate speech, deconstructing the two phenomena shows us that solutions reasonably effective against hate speech are unlikely to deter raids. In this way, understanding the individual components of a behavior helps us build better solutions and interventions.

Though a taxonomy of online toxicity would be very valuable, no comprehensive taxonomy of online toxicity (anti-social behavior) is currently known to the author(s). There are taxonomies of cyberbullying (see Cyberbullying.org for an overview), descriptive and explanatory works on toxicity in specific communities (e.g. League of Legends has a wealth of publications on the topic, such as this one), legally grounded analyses of cyberbullying, as well as collected works on online harassment. In an attempt to bridge this gap, our next article will introduce D-TOX: a proposal for a comprehensive taxonomy of online toxicity. In follow-up articles, we will connect the D-TOX model to common forms of toxicity to show how they can be deconstructed, and where promising paths for interventions may lie.

Peter Alau
Spirit AI

Peter helps companies reduce online harassment and toxicity as the Director of Business Development at Spirit AI.