Handling Toxic Behaviors on Social Media: Abusive Content and Trolling

Adrien Scorniciel
Published in Hey Network
Oct 15, 2018 · 4 min read

With 7,500 people dedicating their careers to reviewing reported content, Facebook looks like it wants to take the issue seriously. Indeed, hiring an international army of reviewers to moderate violating content on a daily basis may seem like a sensible idea, but, as we learned again recently (here and there), it can leave your workforce in a distressing state.

Although helped by artificial intelligence, the current moderation process is unfortunately not enough to stop forbidden material from being published on social media, and some content is bound to slip through. So the next question is: does it make sense to allocate so many human resources to a system that is far from being as effective as it should be? Well, you guessed it, the answer is probably no.

One of the current system’s main problems is the lack of transparency. Companies are quite opaque about their moderation processes and training, imposing non-disclosure agreements on most of their employees, while no independent third party exists to oversee them. Recent events have gradually brought public attention to the issue, and efforts were made. The real problem, however, is that most measures only apply to employees or, at best, to US contractors. Facebook relies on contractors all over the world, meaning it is not obliged to provide them with psychological support, or to better inform them about the tasks they will be given.

In parallel, there are groups trying to provide a framework for handling abusive content, such as the “Technology Coalition”, which strives to combat child sexual abuse material online. The Coalition counts Adobe, Apple, Dropbox, Facebook, GoDaddy, Google, Kik, Microsoft, Oath, PayPal, Snapchat and Twitter among its members. The issue here is that the “Guidebook” produced by the Coalition tackles abusive content through the prism of child abuse only, leaving a lot of other infringing material to the discretion of the contractor or employee. What is effectively lacking is an overseeing organization that tackles the issue from a broader perspective, while ensuring that all employees AND contractors receive appropriate training and information before actually taking on the job.

Another Internet phenomenon that proves hard to handle is trolling. As a matter of fact, it is very hard to resolve a situation if your counterpart is not actually looking for a solution. Motivations may range from a sense of empowerment and attention-seeking to financial and political interests. The line is not always easy to draw: the grey area between heated debate and actual trolling can be very hard to define. It is also relatively easy to misinterpret someone’s words when you can’t see their facial expressions and body language.

The online disinhibition effect, a phenomenon described in 2004 by American psychologist John Suler, tries to explain why we behave online in ways we would never dare in real life. Possible factors include anonymity, invisibility, asynchronous communication, empathy deficit and, of course, individual traits.

Obviously, this doesn’t mean that all trolls are bad people. Some of them even engage in “accidental trolling”, offending others without realizing it (or realizing it too late). But the result remains the same: provocative messages sparking emotional responses.

On the Internet, the saying goes: “Don’t feed the trolls.” Meaning: don’t answer or, at the very least, don’t give them a reaction they can use. You can try to simply continue the conversation with other participants without acknowledging the troll. In your professional life, however, you may find yourself in a situation where ignoring him or her is not possible. In that case, a short, polite answer can sometimes defuse the situation. At the very least, it will let you see whether the person you’re communicating with is actually looking for a solution, or is just trying to be a pain in the neck.

Another solution might be to answer with humor. Even though this one is much trickier to use (humor is far from being an exact science), it can turn a bad comment or review (even a troll’s) into an online buzz. But there are at least as many examples of witty answers attracting attention as there are of poor reactions damaging a reputation, so remember to use it with caution.

Our take on the issue

At Hey, we are striving to solve the problem both from a humanistic and an economic perspective.

First, we automatically rank the best contributions (based on the number of likes), so you’re less likely to stumble upon irrelevant or inappropriate content.
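To make the idea concrete, here is a minimal sketch of like-based ranking in Python. The Contribution fields and the sample data are hypothetical, not our actual data model; it simply shows how contributions can be ordered by like count so the best-rated ones surface first.

```python
# Minimal sketch of like-based ranking (hypothetical data model,
# not the actual implementation).
from dataclasses import dataclass
from typing import List

@dataclass
class Contribution:
    author: str
    text: str
    likes: int

def rank_contributions(contributions: List[Contribution]) -> List[Contribution]:
    """Order contributions from most to least liked."""
    return sorted(contributions, key=lambda c: c.likes, reverse=True)

feed = [
    Contribution("alice", "Helpful answer", likes=42),
    Contribution("bob", "Off-topic rant", likes=1),
    Contribution("carol", "Good follow-up question", likes=17),
]

for c in rank_contributions(feed):
    print(c.likes, c.author, "-", c.text)
```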

Next, we developed a compound system with AI detection as the first stage: the Hey Anti-Cheat (HAC) system. It is very efficient at detecting bot behaviors, such as “like farms” (which would affect our ranking). HAC flags any suspicious behavior, which is then reviewed by Hey’s internal team.
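As a rough illustration of what such a first automatic stage can look like, here is a toy heuristic that flags accounts with an implausibly high like rate and queues them for human review. The threshold, the event format and the function names are assumptions made for the example, not the actual HAC model.

```python
# Toy first-stage filter in the spirit of an anti-cheat system:
# flag accounts whose liking pattern looks automated, then hand the
# flags to human reviewers. All thresholds are illustrative.
from collections import defaultdict

LIKES_PER_HOUR_LIMIT = 120  # hypothetical threshold

def flag_like_farms(like_events):
    """like_events: iterable of (account_id, hour_bucket) pairs.
    Returns the accounts that exceed the hourly like limit."""
    counts = defaultdict(int)
    for account_id, hour_bucket in like_events:
        counts[(account_id, hour_bucket)] += 1
    return {acct for (acct, _hour), n in counts.items() if n > LIKES_PER_HOUR_LIMIT}

def send_to_review(flagged_accounts):
    # In the real system the flags would go to an internal review queue;
    # here we just print them.
    for account in sorted(flagged_accounts):
        print(f"flagged for review: {account}")
```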

In addition to automatic AI flagging, Hey allows for community curation and ruling, through reporting and the Hey Troll Court (HTC). The HTC aims at offering community ruling of toxic behaviors, and includes the Overwatch program. This program allows for anyone interested to join and receive tokens in exchange for good ruling decision. Users will not be able to see the “toxic” individual’s name or picture, only the messages that were reported, guaranteeing the neutrality of the whole decisional process.
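Here is a rough sketch of that anonymised review flow: the case shown to reviewers contains only the reported messages, reviewers vote, and those who side with the majority verdict earn tokens. The field names, the token amount and the tie-handling rule are assumptions made for the example.

```python
# Sketch of anonymised community review with token rewards.
# Data model and reward size are made up for illustration.
from collections import Counter

TOKEN_REWARD = 5  # hypothetical tokens per majority-aligned verdict

def anonymise(report):
    """Strip the reported user's identity; keep only the case id and messages."""
    return {"case_id": report["case_id"], "messages": report["messages"]}

def tally(votes):
    """votes: dict of reviewer -> 'toxic' or 'ok'.
    Returns (verdict, payouts); verdict is None when there is no consensus,
    in which case the case would be escalated to the internal team."""
    counts = Counter(votes.values())
    (top, top_n), *rest = counts.most_common()
    if rest and rest[0][1] == top_n:
        return None, {}
    payouts = {reviewer: TOKEN_REWARD for reviewer, v in votes.items() if v == top}
    return top, payouts

case = anonymise({"case_id": 17, "reported_user": "someone", "messages": ["reported text"]})
verdict, payouts = tally({"ana": "toxic", "ben": "toxic", "cleo": "ok"})
print(case, verdict, payouts)
```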

Finally, for difficult cases where no consensus can be reached by the community, the case is forwarded to Hey’s internal team for a final decision. Depending on the gravity and repetition of the violating behavior, users can be banned temporarily or indefinitely.
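For completeness, that escalation step could be expressed as a small decision rule like the one below; the severity labels and the repetition threshold are purely illustrative, not Hey’s actual policy.

```python
# Illustrative decision rule for the internal team's final call.
# Labels and thresholds are assumptions, not actual policy.
def decide_sanction(severity: str, prior_violations: int) -> str:
    """severity: 'low' or 'high'. Grave or repeated violations lead to an
    indefinite ban; otherwise the ban is temporary."""
    if severity == "high" or prior_violations >= 3:
        return "indefinite ban"
    return "temporary ban"

print(decide_sanction("low", 0))   # temporary ban
print(decide_sanction("high", 0))  # indefinite ban
print(decide_sanction("low", 4))   # indefinite ban
```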

In a nutshell, our automatic detection and community curation processes ensure a smooth user experience, while rewarding contributors both for their helpful comments and for the time they dedicate to curating content on the platform.

Want to learn more? Head over to our website and read our manifesto.
