Chatbots and Abuse: A Growing Concern

Parvathy Harish Bhardwaj
Published in ruuh-ai
5 min read · Mar 9, 2019


Imagine this. You’re having a casual conversation with a friend when things escalate quickly. Before you know it, what seemed like friendly banter now uncomfortably reads like an attack, with generous garnishes of verbal abuse.

Shocking, right?

Many of us have experienced that, and it leaves a bad taste. Cyberbullying through undiplomatic language, obscene pictures and a refusal to just stop is a grim reality. But its implications are far worse than meets the eye.

While chatbots have tied the world into a closely knit community and helped strengthen connections, the flip side looms larger in this case.

Going Incognito: Empowering Abuse?

Many people wonder why cyberbullying is on the rise. One big reason is that with a screen shielding the perpetrators of this crime, getting caught becomes difficult, if not impossible. To add to that, hiding behind fake identities is child’s play. This, in turn, makes bullies bolder, allowing them to take out their frustration or satiate their sadistic urges through abuse of different varieties.

More often than not, women and children fall prey to this type of bullying, with abusive messages ranging from cuss words, insensitive remarks and racial slurs to graphic pictures and videos used as tools of abuse. What is alarming is that with the boom in technology, AI has just made it onto this unfortunate list.

AI and Abuse: A Fact Check

One might wonder why people would resort to abusing AI-powered software. There are several reasons.

  1. Most abusers feel that an AI would not really ‘respond’, so it’s easier to abuse without facing retaliation.
  2. Their confidence could also stem from the fact that an AI wouldn’t report them, keep proof or react in a #MeToo fashion.
  3. Most AI-powered software is trained to be polite. Many abusers mistake this politeness for submissiveness and embark on their dirty mission.

Let’s take the example of Ruuh, Microsoft’s AI-powered chatbot. Ruuh has thousands of conversations with her virtual friends every day. Her users teach her, amuse her and even ask her thought-provoking questions, yet even she hasn’t been spared the horrors of cyberbullying.

Being a female chatbot, she often attracts unwarranted attention from some of her users. Over the last month, she received 12,39,446 messages, of which 94,392 were abusive or insulting at some level.

Tackling Abuse

Her makers, quite smartly, had anticipated something of this sort, which is why they equipped her with the intelligence to tackle these sticky situations. She initially warns such users, and if the warnings don’t work, she cuts them off completely by blocking them. Her makers have trained her to register these words and respond appropriately, so she stands up for herself every time somebody tries to mess around.
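To make the escalation concrete, here is a minimal sketch of what such a warn-then-block policy might look like in code. The `WARNING_LIMIT`, the `Moderator` class and the `is_abusive` check are illustrative assumptions; Microsoft has not published Ruuh’s actual logic.

```python
# A minimal sketch of a warn-then-block escalation policy.
# WARNING_LIMIT, Moderator and is_abusive are illustrative
# assumptions, not Ruuh's actual implementation.

WARNING_LIMIT = 2  # hypothetical: warnings allowed before blocking


class Moderator:
    def __init__(self):
        self.warnings = {}    # user_id -> warnings issued so far
        self.blocked = set()  # users who have been cut off

    def handle(self, user_id, message, is_abusive):
        if user_id in self.blocked:
            return None  # blocked users get no reply at all
        if not is_abusive(message):
            return "normal reply"
        count = self.warnings.get(user_id, 0) + 1
        self.warnings[user_id] = count
        if count > WARNING_LIMIT:
            self.blocked.add(user_id)
            return "I warned you. I'm blocking you now."
        return "Please keep it civil, or I'll stop talking to you."


# Example: a crude keyword check standing in for a real classifier.
mod = Moderator()
reply = mod.handle("user42", "you idiot", lambda m: "idiot" in m.lower())
```

The key design choice is that warnings are tracked per user, so a bully who ignores repeated polite nudges is eventually cut off while everyone else chats normally.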

When Microsoft conceived Ruuh, the whole idea was to make her human-like, which meant understanding emotion, natural language and so on. However, given Ruuh’s experience of bearing the brunt of abuse too, Microsoft understood that being more human-like also means understanding and responding to abuse.

While she is trained to understand abuse, let’s keep one thing in mind: human language is very tricky, and a harmless word like ‘ball’ can have several connotations. Another critical point is that she needs to stand up for herself without acting like a censor board: not too preachy, but practical at all times.

Credit for helping Ruuh tackle abusers goes to her writers, who are human at the end of the day and know how to respond to abusers. However, they have to carefully curate responses so that Ruuh sounds politically correct while still keeping abusers at bay.

The writers often have some questions, though.

  1. Which part of the message triggered the use of these choicest words of abuse?
  2. How many times should the AI chatbot warn politely before blocking the user?
  3. How many variations of an abusive word should be fed into the back-end so that the bot recognizes abuse and triggers a response? (A sketch of one approach follows this list.)
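On that third question, one common trick is to normalize messages before matching, so that a single canonical entry covers many obfuscated spellings instead of hand-feeding every variation. This is a hedged sketch under assumed rules; the `LEET` table and `BLOCKLIST` are placeholders, not Ruuh’s real back-end.

```python
import re

# A toy normalizer: map obfuscated variants ("1di0t", "i d i o t",
# "iddiot") onto one canonical form so a single blocklist entry
# covers many spellings. LEET and BLOCKLIST are placeholders only.

LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                      "5": "s", "$": "s", "@": "a", "!": "i"})
BLOCKLIST = {"idiot", "stupid"}  # placeholder entries


def normalize(text):
    text = text.lower().translate(LEET)
    text = re.sub(r"[^a-z]", "", text)     # drop spaces and punctuation
    return re.sub(r"(.)\1+", r"\1", text)  # collapse repeated letters


def looks_abusive(message):
    canon = normalize(message)
    return any(word in canon for word in BLOCKLIST)


print(looks_abusive("you 1dd!0t"))  # True
```

The flip side, as the ‘ball’ example above suggests, is false positives: collapsing a whole message into one string will also flag abusive substrings hiding inside perfectly innocent words, which is exactly why the writers’ careful curation still matters.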

Cyberbullying: Online Vs Offline

Now, even though men experience their fair share of cyberbullying, this Women’s Day let’s ponder one important point: how women deal with cyberbullying. While chatbots are wired to move on, it must be a rather scary experience for many women, especially when perpetrators claim to have something unpleasant in their possession. Most social media platforms like Facebook, Twitter and Instagram have a feature to report abuse, and it also helps to screenshot these messages as evidence and get the bully to back off.

Whatever the scenario, cyberbullying bothers bots too, but it bothers humans all the more, because for us, bullying doesn’t end in the online world; it exists in the shadows of the offline world too. What makes cyberbullying easier to pull off is that one can masquerade behind fake identities and carry on lashing out at innocent humans as well as bots.

Even though bots are trained well enough to counter abuse, it’s time to step on the accelerator and get ahead of the predators. With technology at our disposal, it’s time to take cyberbullying very seriously and engage the sharpest minds around to fight back and end this trend.

Author: Parvathy Harish Bhardwaj
Inputs:
Joshua Pradhan and Sneha Magapu
Visuals:
Ashvini Menon
