Image credit: ochikosan / 123RF

Understanding the psychology of online abuse

Or, let’s face it, there’s a troll lurking in all of us

Enrique Dans
5 min read · Aug 29, 2013


The Huffington Post’s CTO, John Pavley, has outlined some of the online newspaper’s ideas to end trolling on its threads, which basically consist of requiring users to identify themselves, and of working with service and email providers to try to prevent those caught making abusive or threatening comments from signing up under another name.

The problem with this approach, as we pointed out a few days ago in a piece called “We have a right to remain anonymous,” is, in my opinion, a failure to understand the psychology of online abuse. The problem of abusive and threatening behavior online should not be underestimated: we are talking here about lives being turned upside down, about real harm and real victims, not simply a few uncalled-for comments. At the same time, we need to be careful to resist the temptation to over-legislate and, in so doing, create monsters.

In some cases, abusive behavior can be attributed to the Gyges effect, the online disinhibition prompted by the absence of the real-time social feedback the orbitofrontal cortex relies on, as described by psychologist Daniel Goleman in a 2007 article in The New York Times called “Flame first, think later: new clues to e-mail misbehavior.”

Neuropsychology can certainly help explain the phenomenon, but only up to a point. Removing the shield of anonymity can help reduce trolling, but it also risks throwing the baby out with the bathwater: it de facto impoverishes online discussion by eliminating positions that can only be expressed incognito, and it could expose the very people it aims to protect to threats. We need another approach to tackling the problem, one that accepts that anybody, however well-mannered, can metamorphose into a troll, or write something inappropriate under certain circumstances.

The question we have to ask ourselves here is whether trolls are born or made. In other words, once somebody has been identified as being behind abusive language, threats, or hate speech, is it really necessary to excommunicate them from the internet? Should having behaved like a troll at some point become something like a résumé of shame, or a criminal record? I should confess here that on occasion I have fantasized about naming and shaming certain trolls, but the temptation evaporates on learning that the trolls concerned were pimply adolescents trying to impress their friends, or who might have benefited from an occasional smack from their parents, or simply people who most of the time behave themselves and are usually well-mannered. If we can all admit to having wanted to behave like a troll at some point, then perhaps we should think more about controlling the content, and not the person.

There are subjects that bring out the worst in us. In the same way that a normally peaceful person can suddenly insult another simply because that person is dressed as a referee at a sports match, there are areas, such as religion, politics, or even operating systems, where we are more likely to lose our rag. What’s more, we all know that violence begets violence: being the first to throw a stone is not the same as responding to a stoning, or joining a battle royale in which everybody is throwing stones. Experience tells us that most of the time it is not appropriate to label somebody for their behavior in a given moment; instead, we should try to prevent them from behaving like that in the first place.

In my decade or so of experience managing a fairly popular website (albeit not as popular as The Huffington Post), I have come to the conclusion that a far better response is to set up a system that puts the emphasis on content, not on people. Content must pass a filter, a default pre-moderation mechanism that combines algorithms with human input and that can identify meaning: some of the most deliberately hurtful comments I have come across over the years would not have been identified by any particular words. This default pre-moderation scheme is only bypassed when contributors voluntarily offer an element, a name or a pseudonym, that allows traceability, and display a willingness to behave respectfully toward others and in accordance with the rules of the site.
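To make the flow concrete, here is a minimal sketch of how such routing might look. The article does not specify an implementation, so the `Contributor` type, the `route_comment` function, and the placeholder word filter below are all illustrative assumptions; as noted above, a real filter would need to score meaning, not just match words, which is exactly why human review stays on the default path.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Verdict(Enum):
    PUBLISH = auto()  # trusted contributor: goes live immediately
    QUEUE = auto()    # held for combined algorithmic + human pre-moderation
    REJECT = auto()   # flagged outright by the automated filter

@dataclass
class Contributor:
    handle: str            # name or pseudonym offered voluntarily
    trusted: bool = False  # True only after a probationary period

def automated_filter(text: str) -> bool:
    """Hypothetical stand-in for the algorithmic half of the filter.

    A plain word list is shown here for brevity; the most hurtful
    comments may contain no particular words, so queued content still
    passes in front of a human moderator.
    """
    blocklist = {"idiot", "moron"}  # placeholder terms only
    return not any(word in text.lower() for word in blocklist)

def route_comment(author: Optional[Contributor], text: str) -> Verdict:
    # Anonymous and untrusted contributors are pre-moderated by default.
    if author is not None and author.trusted:
        return Verdict.PUBLISH
    return Verdict.QUEUE if automated_filter(text) else Verdict.REJECT
```

With this routing, an anonymous comment such as `route_comment(None, "great post")` waits in the moderation queue, while a contributor whose `trusted` flag is set bypasses the queue entirely; how that flag is earned and lost is sketched further below.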

This limits abuse to isolated cases, because contributors who are known and respected on a site do not want to lose their status and be relegated to the naughty step to think things over. This approach does not preclude imposing more stringent controls in cases where abuse leads to potentially criminal acts: insulting somebody is not the same as stalking or threatening rape or murder.

Such a system, combined with other social mechanisms that can help to identify bad behavior, does not remove the right to anonymity: it simply obliges contributors to undergo pre-moderation. If anonymous contributors wish to upgrade to a status whereby their comments are published automatically after clicking the “publish” button, they can choose a pseudonym; that status is only granted after a probationary period, and is immediately removed if abuse is detected.
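Continuing the sketch above, the probation-and-revocation lifecycle might look something like the following. The approval threshold is an invented number: the article only says that trusted status follows a probationary period and is withdrawn as soon as abuse is detected.

```python
from dataclasses import dataclass

# Illustrative threshold, not from the article: how many approved
# comments a pseudonymous contributor needs before earning trust.
PROBATION_APPROVALS = 20

@dataclass
class Account:
    pseudonym: str
    approved_count: int = 0
    trusted: bool = False  # when True, comments skip pre-moderation

def on_comment_approved(account: Account) -> None:
    """A moderator approved one of this account's comments."""
    account.approved_count += 1
    if account.approved_count >= PROBATION_APPROVALS:
        account.trusted = True  # probation passed: publish directly

def on_abuse_detected(account: Account) -> None:
    """Abuse detected: trust is revoked immediately."""
    account.trusted = False
    account.approved_count = 0  # back to default pre-moderation
```

The design point, in either sketch, is that human review is the default and direct publication is a revocable privilege, which is what gives established contributors something to lose.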

From an operational perspective, pre-moderation requires some effort right from the start of setting up a page, but all the evidence suggests that the work involved decreases as a contributor community develops: the incentive to abuse diminishes, the trolls go elsewhere, and self-regulation prevails. Like any other approach, it’s not perfect: some abuse will make its way through, but it is generally of a less serious nature and can be checked through human intervention. But most importantly, it avoids the pernicious effect of forcing people to identify themselves and thereby lose their right to anonymity.

Almost 20 years on from the web’s popularization, the protocols regulating online manners are still developing. On most sites, the same rules that would apply out in the real world are applicable; that said, few schools bother to teach pupils how to behave online, and many parents simply have no idea about the subject: in fact, there are not so many “digital natives” as there are “digital orphans”, youngsters who learn their way around the web without any real supervision.

Most people learn how to use the internet through trial and error, but over time, netiquette will become as natural to us all as the most basic rules of offline behavior. In the meantime, let’s try to address problems by taking a more psychological approach, one that protects everybody’s rights and does not endanger those it supposedly sets out to protect.


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)