The business case for building SocialNetMonitor

Athenagoras
5 min read · Oct 27, 2017


SocialNetMonitor: Fighting Hate Speech on Social Media — supported by Deep Learning

SocialNetMonitor is a Social Media tool designed to detect and prevent discriminatory speech on Social Media using Natural Language Processing and Deep Learning algorithms.

In this post I will answer the Why? as well as the Why Now? questions.

Why do it?

The past few weeks have seen an abundance of news on Cyber bullying and Trolling, and on politicians demanding that big Social Media operators like Facebook, Twitter and Google do more to stop this kind of abusive behaviour on their platforms and services.

Trolling, and malicious behaviour generally, can appear in two settings: on live-streaming sites, where content disappears after a short while, and on more static platforms like Twitter and Facebook. Following are two articles from major news sites dated October 2017, illustrating the problem and its background.

Example for abuse on live-streaming sites

The problem starts getting real when negative online messages or sentiment start seeping over into offline life. This kind of molestation is no longer unreal or virtual; it soon gets very down-to-earth. Users targeted by these schemes are not only affected as virtual identities, but often face growing real-life repercussions.

On the (supra-)national level, the EU as well as national governments have now taken on the issue, especially after Commissioner Jourova found herself to be the target of a Trolling Campaign on Facebook.

Fighting illegal content online has become an integral part of the DigitalSingleMarket Initiative promoted by Commission and Council. As the security of Community citizens surfing the internet is seen as key to the market’s competitiveness, the EU takes violations — in this case inappropriate or even hateful speech and cybermobbing — very seriously and can impose crushing fines.

With the advent of the Estonian EU Presidency in late Summer 2017, this engagement has redoubled. Facebook and Twitter have been given clear warnings that they are expected to cooperate better in combating illegal content, most of all hate speech, as it has the most direct influence on users’ health and security while using the Internet.

Why Now ?

The EU only recently declared illegal online content an ‘urgent challenge’ and is moving to intensify the fight against it.

The recent initiatives show there’s momentum, as member states take measures to support the EU Digital Single Market Strategy. Shortly before the election recess the Bundestag passed a law requiring online platform operators to increase their efforts to counter hate speech, online bullying and other forms of illegal content.

This in turn raised the necessity for the companies involved to participate proactively in tackling hate speech online. Facebook alone announced 3,000 new content moderators in May 2017; others are reconsidering their ad and content monitoring staff.

To sum up, both the political and corporate levels recognize the importance of additional endeavours to combat hate speech online. Therefore it may be worth illustrating the different scenarios Social Platform operators face when deciding how to adapt to the changing regulatory landscape.

Different Scenarios on Combating Hate Speech

  1. Do nothing
    Take the status quo as base case and do nothing in addition to current procedure.
    This approach may look attractive at first, however it runs the daunting risk of regulatory upheaval and possibly even punitive fines imposed by national and EU authorities.
  2. Take the route offered by DigitalSingleMarket
    The EU proposes increased cooperation with trusted flaggers, including a reliance on user notifications. As some sort of obiter dictum, the DSM unit recommends operators do their utmost to ‘proactively’ detect illegal content on their platforms.
    While this is current EU best practice, it falls short of any meaningful change in procedures. User notifications trickle in at random, and managing an army of trusted flaggers costs loads of admin time and money while inspecting only a limited amount of content at a fraction of the throughput of any automated system.
    With governments pushing for more corporate engagement in tackling Hate Speech, this approach risks becoming outdated rather soon, leaving companies exposed to regulatory risks and technical overhauls they are ill prepared for. Not recommended either.
  3. Take an active approach and speed up Detection
    Take an active approach and detect hate speech early, ideally in real time, using Natural Language Processing and Deep Learning methods. Use the processed data to start building a hate speech dataset for faster and more precise detection. The bigger the dataset gets, the more accurate the detection process becomes.
    This third and recommended approach takes a lot of admin work out of the detection process while allowing bigger data loads to be inspected in a shorter timeframe. It can eliminate any reliance on (random) user notifications or an army of trusted flaggers. These channels can be kept as side-gigs, however, to allow community participation in growing the dataset.
  4. Advantages at a glance
Advantages of SocialNetMonitor against current procedure
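To make the recommended third scenario a little more concrete, here is a deliberately minimal sketch of automated text classification. It is a toy illustration, not SocialNetMonitor’s actual pipeline: it trains a plain bag-of-words logistic-regression model on a handful of invented example sentences and scores new messages for abusiveness. A production system would swap in the RNN-based Deep Learning models this post has in mind, and the labelled dataset would grow continuously from flagged content rather than being hard-coded.

```python
import math
import re
from collections import defaultdict

# Toy labelled dataset: 1 = abusive, 0 = benign. In a real system this
# would be a growing corpus built from flagged and reviewed content.
TRAIN = [
    ("you are worthless and stupid", 1),
    ("nobody wants you here leave", 1),
    ("go away you pathetic loser", 1),
    ("thanks for sharing this great article", 0),
    ("have a wonderful day everyone", 0),
    ("interesting point well argued", 0),
]

def tokens(text):
    """Lowercase and split a message into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class BowClassifier:
    """Minimal bag-of-words logistic-regression classifier."""

    def __init__(self):
        self.w = defaultdict(float)  # one weight per token
        self.b = 0.0                 # bias term

    def score(self, text):
        """Return the estimated probability that `text` is abusive."""
        z = self.b + sum(self.w[t] for t in tokens(text))
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, data, epochs=200, lr=0.5):
        """Stochastic gradient descent on the log-loss."""
        for _ in range(epochs):
            for text, label in data:
                err = label - self.score(text)
                self.b += lr * err
                for t in tokens(text):
                    self.w[t] += lr * err

clf = BowClassifier()
clf.fit(TRAIN)
print(clf.score("you stupid worthless loser"))    # high probability expected
print(clf.score("great article have a nice day")) # low probability expected
```

The point of the sketch is the feedback loop, not the model: every newly reviewed message extends `TRAIN`, and retraining sharpens detection, which is exactly the “bigger dataset, better accuracy” dynamic described above.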

Why SocialNetMonitor? In a nutshell

Think SocialNetMonitor —
it’s your all-in-one tool to weed out hate speech on Social Media.

Inappropriate speech on Social Media is a growing problem with dangerous and potentially lethal consequences for those affected. Victimization can apply to anyone from any background — from underprivileged schoolchildren to Company Directors and even EU Commissioners.

Given the media and political attention the issue is currently receiving, now is the right time to start tackling the problem at scale, leveraging the power of Natural Language Processing and Deep Learning methods like RNNs.

Target customers come from diverse backgrounds: Platform Operators interested in faster and sleeker processes, Authorities trying to get a grip on this new area, NGOs determined to follow up on their pledges to fight discrimination and cyber bullying.

So join in and help make the Social Sphere a safer place. It pays.

What’s next?

The upcoming sequels will include articles about NLP, Social Media policy — and maybe even a bit of football. Looking forward to it!


Athenagoras

Hi from Athenagoras! Doing ML, NLP and WordPress. Interested in helping people dazzled by data.