Could A.I. spell an end to Twitter's cyberbullying pandemic?

Contains some explicit content

Reflecting on my youth, cyberbullying was virtually non-existent, or at least that's how I remember it. For the most part my parents refused me access to a computer, let alone the internet. My online presence was limited to a MySpace account I could access every few weeks when I dropped by my aunt's house. Somehow my "IRL" bullies never found me on my small corner of the web, and for that I am grateful.

Now, though, it seems cyberbullying is the new "in thing", "all the rage" as they say; it's the only way I can describe it.

To give you an example of the type of abuse rife on Twitter, let's look at an absolute troll favourite: the run-up to the recent UK general election. Here's just a handful of tweets aimed at Labour MP Diane Abbott.

Reading through some of the tweets, I honestly felt heartbroken for Diane. It seems the Twitter trolls forgot that Diane is human and that humans can make mistakes, or they simply did not care that their racist, hateful words can have a profound impact on another human being.

The thing is, Twitter makes it all too easy for its users to do this. Users know the platform exists for freedom of speech, and Twitter will do little, if anything at all, to interfere with that.

Okay, so Twitter does provide the option to block users, but the problem is that you can only block a troll after they've, erm, trolled you.

The answer to this problem may well lie in A.I. It's not a complete solution, and it needs some proper thinking through, but hear me out.

Twitter already has a ton of data on abusive and offensive tweets, since users have long been able to report tweets of this nature. This data would form the basis of our A.I.'s learning.
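To make that concrete, here's a rough sketch of what this first training step might look like in Python, using scikit-learn. Everything here is illustrative: the file name, the column names, and the choice of model are all my own assumptions, not anything Twitter has said it uses.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

# Hypothetical export of Twitter's report data: one tweet per row,
# labelled 1 if it was reported as abusive, 0 otherwise.
reports = pd.read_csv("reported_tweets.csv")

# Turn raw tweet text into word and word-pair frequency features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
X = vectorizer.fit_transform(reports["text"])

# A linear classifier trained with SGD, chosen here because it
# supports incremental updates (partial_fit) as feedback arrives later.
classifier = SGDClassifier(loss="log_loss")
classifier.fit(X, reports["abusive"])
```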

Now that our A.I. has an understanding of the type of content a typically abusive tweet might contain, it can flag to the intended recipient that they have received a potentially harmful tweet. At this point the recipient can choose whether or not to view the tweet, which at the very least offers some preparation for what they are about to read.
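The flagging step itself could be as simple as scoring each incoming tweet against the model trained above. This is a minimal sketch; the 0.5 threshold is an arbitrary starting point, and in practice you'd tune it.

```python
def flag_if_harmful(tweet_text: str, threshold: float = 0.5) -> bool:
    """Return True if the tweet should sit behind a warning."""
    features = vectorizer.transform([tweet_text])
    # Probability that the tweet belongs to the "abusive" class.
    probability = classifier.predict_proba(features)[0, 1]
    return probability >= threshold
```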

Should the user choose to read the tweet, they can then confirm to Twitter whether or not it was indeed abusive. Over time, as the user tells our A.I. whether it was right or wrong, it will learn, and fewer and fewer harmful tweets will get through; the next time the user receives a tweet of this nature, Twitter simply will not offer it for display.
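That feedback loop is where the SGD model earns its keep: each confirmation can be folded back in as an incremental update, rather than retraining from scratch. Again, a sketch under my own assumptions:

```python
def record_feedback(tweet_text: str, was_abusive: bool) -> None:
    """Nudge the classifier with the recipient's verdict on one tweet."""
    features = vectorizer.transform([tweet_text])
    label = 1 if was_abusive else 0
    # partial_fit updates the existing weights in place, so every
    # confirmation shifts the model a little toward this user's reality.
    classifier.partial_fit(features, [label])
```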

One potential flaw, at least in the early days, is that it still requires the user to see harmful tweets in order for the A.I. to learn. I don't think this can be solved entirely, but users can give the A.I. a helping hand by supplying a few words and phrases they simply do not want to see.
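A user-supplied blocklist like that could short-circuit the classifier entirely: a phrase match hides the tweet outright, and only then does the learned model get a say. The phrases below are placeholders for whatever the user chooses.

```python
# Phrases the user never wants to see (hypothetical examples).
blocked_phrases = {"phrase one", "phrase two"}

def should_hide(tweet_text: str) -> bool:
    """Hide on a blocklist match first, then fall back to the model."""
    lowered = tweet_text.lower()
    if any(phrase in lowered for phrase in blocked_phrases):
        return True
    return flag_if_harmful(tweet_text)
```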

Using this blocklist, together with what it has learned from existing data, our A.I. should be intelligent enough to shield users from the vast majority of harmful tweets.

Of course, this could then lead to possible sanctions for repeat offenders, whether that be an outright ban or the account simply being flagged as a troll; both have their benefits.
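One simple way to picture the sanctions side is a strike counter per account: every confirmed-abusive tweet counts against the sender, and past some threshold the account is flagged or banned. The limit and the actions below are purely illustrative.

```python
from collections import Counter

strikes = Counter()
STRIKE_LIMIT = 3  # hypothetical threshold

def record_strike(sender_handle: str) -> str:
    """Count a confirmed-abusive tweet against its sender."""
    strikes[sender_handle] += 1
    if strikes[sender_handle] >= STRIKE_LIMIT:
        return "flag as troll"  # or "ban", depending on policy
    return "no action"
```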
