Why are some people so toxic online?

Tal Cherni
behaviouralarchives
6 min read · Jun 5, 2021

Browsing through the comments sections on Twitter, Facebook, Instagram and other social media outlets, it can often feel like the people who choose to express themselves in such spaces are just plain rude, to say the least. Be it in political discussions (1,2,3), discussions about science, or elsewhere, the social web is riddled with comments and conversations that can end up truly hurting people’s feelings, or leave the onlookers who read them with negative reactions.

This phenomenon is known as the online disinhibition effect (1,2), and it refers to how people often behave differently online than they do in comparable offline situations. For example, a person who is shy in real life could be quite a flirt online, and someone who is calm and forbearing in person could end up using aggressive language on online forums. Several factors contribute to the online disinhibition effect. I will go through some of them before pointing out possible consequences of such behaviour and what can be done about it.

So why are people so mean?

Unsurprisingly, anonymity plays a large part in how people behave, both online and off. Whether it comes to decision making in moral dilemmas, cooperating with peers or plain online conduct, the level of anonymity affects how a person acts. For example, recent research found that, on average, the highest number of rude comments and personal insults is found on YouTube, a website where most users go by anonymous pseudonyms. The idea is that some people allow themselves to post toxic and outrageous comments online because they don’t feel they can be held accountable for them. It is almost as if they are invisible, which in turn facilitates more impulsive behaviour.

Anonymity also cuts the other way: not knowing who the person on the other side of the screen is can lead to dehumanization and reduced empathy. In everyday life, when talking about personal or emotional subjects, people might avert their eyes. Avoiding eye contact and face-to-face visibility disinhibits people, because they are not affected by the other person’s facial and bodily expressions. For that reason, the absence of visible facial cues and nonverbal communication has a large effect on impulsive online behaviour. Calling someone names might not feel as bad when you can’t see their expression as it would when saying it to their face.

Furthermore, according to Dr. Sherry Turkle, some people might actually feel as if they are talking to themselves when writing such comments (see article), a little like cursing a driver who cuts them off on the highway, without realizing that the other person is actually affected by their words. It’s also worth noting that tone of voice and bodily gestures have a strong impact on how we understand what someone else is saying, and both are absent from online discourse.

An additional interesting factor is the absence of real-time feedback, or asynchronicity. Comment-section exchanges don’t happen in real time, and commenters can write lengthy monologues, which tend to entrench them in their extreme viewpoints. When actually conversing with another human being in real life, on the other hand, you talk back and forth, and what you end up with is a conversation.

Beyond plain human psychology, it is worth noting that social media outlets are actually designed to make us angry, because anger leads to increased engagement (1,2). On Facebook, for example, getting likes on a comment is a form of social affirmation that you have said or done something ‘right’, which can produce a rewarding feeling and a form of learning. When such likes come in after a heated debate with someone from the opposing camp, one might learn that lashing out is the way to go.

Lastly, it should be pointed out that some people actively add fuel to the fire just for fun. Interestingly, these internet trolls seem to be not that different in real life; their online activity correlates strongly with antisocial behaviour and with the ‘Dark Tetrad’ of personality psychology: sadism, narcissism, psychopathy and Machiavellianism.

So what if people are mean online?

Research on the effects of media violence on real-life violence has been around for a long time. Whether it’s an interpersonal conflict about romance, a gang rivalry or an actual assault on the Capitol, online threats and polarization can lead to actual violence in real life (1,2). Social media isn’t just mirroring conflicts happening in schools and on the streets; it is intensifying them and triggering new ones.

The lack of empathy and the dehumanization that social media communication facilitates can also produce cyberbullying in schools, which apparently has the same emotional effect as regular bullying and can leave the victim with severe trauma, or worse (1,2). Interestingly enough, simply reading negative comments on social media, even ones that are not aimed directly at you, can lead to depression and anxiety (1,2).

Another issue is that violence and aggression in the comment section can harm the author’s credibility in particular (1,2) and the media outlet’s credibility in general (1,2). Combined with the fact that people tend to be entrenched in their own mindset and are quite reluctant to think otherwise, such aggression can further decrease trust in official sources of information.

Can anything be done about it?

The answer is that it’s tricky: we can’t actively control other people’s behaviour, and we can’t always tell whether the people posting toxic content are real users or just bots and trolls. But several things can be done:

1. Active monitoring

When it comes to trolls, one option is to actively monitor your comment section for flame wars and nasty comments, and take measures such as deleting toxic posts, banning trolls from your blog or simply exposing them for what they are. However, such a tactic may only work in relatively small communities. So what can be done about large-scale debates on Facebook?

2. AI screening

AI screening is an interesting way of tackling trolls and flamers who write uncivil posts. One approach is to use a deep-learning algorithm that detects toxic posts of different kinds and flags them for further moderation, or even deletes them altogether. A different approach is to nudge commenters as they write (1,2), by alerting them that what they are about to post is basically trash and that they should reconsider. Such an approach might not stop full-fledged trolls, but it could prompt average people who simply get emotional to think twice about their wording.
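
To make the first approach concrete, here is a minimal, hypothetical sketch of the detect-and-flag flow. Real moderation systems rely on large deep-learning models trained on millions of labelled comments; the tiny TF-IDF plus logistic-regression classifier below (built with scikit-learn) merely stands in for one so the example stays self-contained. The training comments, the `screen_comment` function and the `TOXICITY_THRESHOLD` cut-off are all invented for illustration.

```python
# Toy sketch of "detect and flag" comment screening. A real system would use
# a large deep-learning model; this simple classifier just illustrates the flow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny hand-labelled corpus: 1 = toxic, 0 = civil (made up for this example).
train_comments = [
    "you are an idiot and everyone knows it",
    "nobody cares about your garbage opinion",
    "thanks for sharing, that was an interesting read",
    "I disagree, but I see where you are coming from",
]
train_labels = [1, 1, 0, 0]

# Vectorize the text and fit the classifier in a single pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_comments, train_labels)

TOXICITY_THRESHOLD = 0.5  # assumed cut-off; a real system would tune this carefully

def screen_comment(text: str) -> str:
    """Return a moderation decision for a single comment."""
    p_toxic = model.predict_proba([text])[0][1]  # probability of the 'toxic' class
    if p_toxic >= TOXICITY_THRESHOLD:
        # Flag for a human moderator (or nudge the author to rephrase)
        # rather than deleting outright.
        return f"FLAG for review (toxicity score {p_toxic:.2f})"
    return f"PUBLISH (toxicity score {p_toxic:.2f})"

print(screen_comment("what a stupid take, you clearly know nothing"))
print(screen_comment("interesting point, could you share a source?"))
```

The same scoring step could just as easily drive the second, "nudge" approach: instead of flagging the comment for a moderator, the score would trigger a prompt asking the author to reconsider before posting.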

3. Be nice

As previously mentioned, a civil tone in online arguments can lead to better discussions. But in general, empathy is paramount when dealing with anyone, online or off. Being aware that some people just don’t think like us, have a different backstory, or are simply being mean for their own poor enjoyment could perhaps help us overcome overwhelming feelings of anger and just move on. Furthermore, there is always the chance that being civil will positively affect the other person’s behaviour as well.

4. Just avoid the comment section

If all else fails and mindfulness is not your thing, simply avoiding the comments section can also work. There is no point in experiencing negative feelings when we can avoid them and the debate is of no real benefit to us.

Let’s be real: it’s not like we are going to miss a debate between Albert Einstein and Stephen Hawking in an online comments section.
