Machines don’t get PTSD — but can they really fight online abuse?

Sasha Haco
Oct 8

The internet is both one of the greatest assets and one of the most destructive innovations of the modern age. It has enabled us to socialise, work, shop, work out, even attend doctor's appointments, all from the comfort of our own homes. Yet it has also become a platform for publicising some of the most haunting acts committed by humanity. Last month, TikTok struggled to take down a video of a live-streamed suicide, just one in a string of traumatic events witnessed on tech platforms over the last year. Meanwhile, the human content moderators employed to review reported material suffer PTSD as a result of their work.

So why doesn’t a better solution exist?

Put simply, because truly effective moderation of user-generated content is an almost impossibly complex task. It is both technically challenging and practically hard to manage. That same suicide video was shared and re-shared across numerous social platforms, often with subtle changes that were deliberately incorporated to evade detection.

It is an arms race between the individuals disseminating the video and the platform itself.

Once the initial video has made it online and been downloaded onto users’ phones, no amount of human effort can fully prevent its circulation. The only way to truly stop the spread of such disturbing material is to screen and detect this content at the point of upload.
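To make upload-time screening concrete, here is a minimal Python sketch of one widely used building block, perceptual hashing, which catches near-duplicates that evade exact matching. This is a generic illustration rather than Unitary's own method; the Pillow and imagehash libraries are real, but the file names and the distance threshold are assumptions.

```python
# A minimal sketch (not Unitary's method) of screening at the point of upload.
# Uses the open-source Pillow and imagehash libraries; the file names and the
# distance threshold are hypothetical.
from PIL import Image
import imagehash

original = Image.open("known_harmful_frame.jpg")   # frame already on a block list
upload = Image.open("incoming_upload_frame.jpg")   # frame from a new upload,
                                                   # subtly cropped or re-encoded

# Exact byte-for-byte matching breaks on any pixel-level change...
print(original.tobytes() == upload.tobytes())      # almost certainly False

# ...whereas perceptual hashes of near-duplicates differ by only a few bits,
# so a small Hamming distance can flag the re-upload before it goes live.
distance = imagehash.phash(original) - imagehash.phash(upload)
if distance <= 10:                                 # threshold chosen for illustration
    print(f"Block or escalate for review (distance = {distance})")
```

In practice, frames sampled from an upload are compared against large databases of known material, and hashing alone cannot catch genuinely new content, which is where learned models come in.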


Given the enormous volume of content constantly uploaded to the internet — over 80 years’ worth of new video footage is uploaded every day — such screening requires an automated approach.
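As a rough sanity check on that figure, the short calculation below assumes the widely cited estimate of around 500 hours of video uploaded to YouTube alone every minute; the exact rate is an assumption, but the order of magnitude matches.

```python
# Back-of-the-envelope check of the "80 years of video per day" figure,
# assuming the widely cited estimate of roughly 500 hours of video uploaded
# to YouTube alone every minute (the exact rate is an assumption here).
hours_per_minute = 500
hours_per_day = hours_per_minute * 60 * 24        # 720,000 hours of new footage daily
years_per_day = hours_per_day / 24 / 365.25       # hours -> days -> years
print(round(years_per_day, 1))                    # ~82.1 years of footage per day
```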

This is exactly what we are working on at Unitary: we are using artificial intelligence to develop technology that can automatically detect harmful content and improve online safety.

However, automated moderation is no simple task. While existing algorithms have become adept at detecting single objects or homogeneous actions, moderation requires the interpretation of far more nuanced cues.

Firstly, content itself comes in many forms: understanding a single video post may require the simultaneous interpretation of the visuals, sounds, speech and on-screen text, as well as any titles or captions. Each of these modalities presents its own technological challenges: recognising what is harmful in text alone is a difficult task, and one which remains the subject of ongoing research. Developing multi-modal algorithms that can process and learn from numerous cues, in order to understand posts such as the video described above, requires another level of technical sophistication.

Some amusing examples from Facebook that show how an image can change the interpretation of text, and the meaning can only be understood once both are taken into account. While these are harmless examples, it is not hard to imagine more sinister versions which would also require such multi-modal analysis.
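To illustrate one common way such cues can be combined, the snippet below is a generic late-fusion sketch rather than Unitary's actual architecture: a pre-computed image embedding and text embedding are projected into a shared space and classified together, so the model can learn that a benign-looking image paired with a particular caption is harmful. The embedding dimensions and the two-class output are illustrative assumptions.

```python
# A minimal sketch (not Unitary's model) of late-fusion multi-modal
# classification: separate projections for image and text embeddings feed a
# shared classification head. Dimensions and class count are assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, hidden_dim=512, num_classes=2):
        super().__init__()
        # Project each modality's pre-computed embedding into a shared space.
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # The joint head sees both modalities at once, which is what allows
        # image + caption combinations to change the prediction.
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, image_emb, text_emb):
        fused = torch.cat([self.image_proj(image_emb), self.text_proj(text_emb)], dim=-1)
        return self.head(fused)

# Random embeddings stand in for the outputs of an image encoder (e.g. a CNN)
# and a text encoder (e.g. a transformer).
model = LateFusionClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```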

Another challenge involved in developing effective moderation is tackling the numerous different types of harmful content: from racist comments, to video footage of violence, to images of child abuse.

There are, unfortunately, far too many ways to cause harm online.

For each type to be detected, we must train our AI models on millions of examples and fine-tune our algorithms to pick up precisely what we are looking for. One question we have to ask ourselves is whether it is even possible to obtain enough training data for every type of content.
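One way to see why data is the bottleneck is the toy sketch below: a separate binary classifier is trained per harm category, so every new category needs its own labelled examples. This is an illustration, not Unitary's pipeline; the categories, the tiny two-example datasets and the scikit-learn models are stand-ins.

```python
# A toy illustration (not a production pipeline) of per-category moderation:
# each harm type gets its own binary classifier, and therefore its own
# labelled training data. The categories and examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

toy_data = {
    "harassment": (["you are worthless", "great photo!"], [1, 0]),
    "spam": (["win $$$ click here", "see you at lunch"], [1, 0]),
}

classifiers = {}
for category, (texts, labels) in toy_data.items():
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)          # one model per harm type
    classifiers[category] = clf

# Each new post is scored against every category's classifier.
post = "win $$$ click here now"
scores = {c: clf.predict_proba([post])[0, 1] for c, clf in classifiers.items()}
print(scores)
```

Real systems need millions of carefully reviewed examples per category, which is exactly why obtaining enough training data for every type of harm is an open question.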

A further problem that we must tackle is the subjectivity and context-dependence of harm.

What is considered harmful changes over time: words take on new meanings, and people endlessly create new symbols, hand gestures and memes. Moreover, seemingly similar content can be either damaging or benign depending on a multitude of other factors: comments that are meant ironically will be interpreted very differently from those that are not, guns could feature in footage of a real-life massacre or a movie scene, and a nude portrait could be captioned with a feminist message or the abusive rhetoric of a sexist troll.

A post's meaning can be fundamentally influenced by a wide range of inputs, such as tone, language, culture and wider context. We need to grapple with all of these issues in order to truly identify what constitutes harm.

A father posted this photo of his daughter at a Connecticut gun store on Facebook. It sparked outrage and made headlines because it was taken close to Sandy Hook school, the site of a shooting in 2012.

No-one could have predicted such misuse of the internet. Regulation did not keep pace with the rapid development and growth of online communities, and policy-makers, business leaders and the technology itself all struggle to catch up.

We find ourselves asking how Facebook went from a platform for connecting university students to an effective vehicle for intervening in the US election, or how TikTok has become a source of creativity and joy as well as of toxic, traumatising material.

And who is responsible — the users, regulators, or the platforms? How should harmful content be dealt with?

Whatever the correct responses, it is clear that we must urgently act to build technology that can detect, and therefore prevent or remove, the disturbing material that makes its way onto our computer screens.

The moderation fight will never be over, but we must constantly innovate to stay ahead.

At Unitary, we are deeply motivated to address this problem and are energised by its complexities. We are developing our own novel technology which we believe will create a step change in the possibilities of online moderation and lead to real, long-term improvements. We are determined to tackle each of the challenges I’ve outlined above, to strengthen online communities, defend platforms, and protect the public from online harm. We strive to make the internet a safer place, for everyone.

Reach out to us here: contact@unitary.ai

Unitary

Building technology to power online safety

Written by Sasha Haco

Unitary is a computer vision startup working to automate and improve online content moderation. We are developing novel algorithms to strengthen online communities, defend platforms and protect the public from online harm.
