Machines don’t get PTSD — but can they really fight online abuse?
The internet is both one of the greatest assets and one of the most destructive innovations of the modern age. It has enabled us to socialise, work, shop, work out and even have doctor's appointments, all from the comfort of our own homes. Yet it has also become a platform for publicising some of the most haunting acts committed by humanity. Last month, TikTok struggled to take down a video of a live-streamed suicide, just one in a string of traumatic events witnessed on tech platforms in the last year. Today, the human content moderators employed to review reported material suffer PTSD as a result of their work.
So why doesn’t a better solution exist?
Put simply, because truly effective moderation of user-generated content is an almost impossibly complex task. It is both technically challenging and practically hard to manage. That same suicide video was shared and re-shared across numerous social platforms, often with subtle changes deliberately introduced to evade detection.
It is an arms race between the individuals disseminating the video and the platform itself.
Once the initial video has made it online and been downloaded onto users’ phones, no amount of human effort can fully prevent its circulation. The only way to truly stop the spread of such disturbing material is to screen and detect this content at the point of upload.
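To make the detection side of that arms race concrete, one widely used building block (a generic technique, not necessarily what Unitary relies on) is perceptual hashing, which can flag re-uploads of known harmful footage even after light edits such as cropping, filtering or re-encoding. Here is a minimal sketch in Python, assuming the open-source Pillow and imagehash packages and hypothetical file paths:

```python
# Minimal sketch: flag near-duplicate frames with perceptual hashing.
# Assumes the open-source Pillow and imagehash packages; the file paths
# are hypothetical examples, not real data.
from PIL import Image
import imagehash

# Hash of a frame from a video already known to be harmful.
known_bad_hash = imagehash.phash(Image.open("known_bad_frame.jpg"))

def looks_like_known_bad(frame_path: str, max_distance: int = 8) -> bool:
    """Return True if a newly uploaded frame is perceptually close to the
    known-bad frame, despite small edits such as cropping or filters."""
    candidate_hash = imagehash.phash(Image.open(frame_path))
    # Subtracting two ImageHash objects gives their Hamming distance.
    return (candidate_hash - known_bad_hash) <= max_distance

print(looks_like_known_bad("new_upload_frame.jpg"))
```

Determined re-uploaders can still defeat simple fingerprints with heavier edits, which is one reason learned models are needed on top of hashing.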
Given the enormous volume of content constantly uploaded to the internet — over 80 years’ worth of new video footage is uploaded every day — such screening requires an automated approach.
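As a rough sanity check on that scale, assuming the commonly cited estimate of around 500 hours of video uploaded to YouTube alone every minute (an assumption, not a figure stated in this post):

```python
# Back-of-envelope check of the "80 years of video per day" figure, assuming
# ~500 hours uploaded to YouTube per minute (other platforms only add to this).
hours_per_minute = 500
hours_per_day = hours_per_minute * 60 * 24   # 720,000 hours of footage per day
years_per_day = hours_per_day / (24 * 365)   # roughly 82 years of footage per day
print(f"~{years_per_day:.0f} years of new video every day")
```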
This is exactly what we are working on at Unitary: we are using artificial intelligence to develop technology that can automatically detect harmful content and improve online safety.
However, automated moderation is no simple task. While existing algorithms have become adept at detecting single objects or homogeneous actions, moderation requires the interpretation of far more nuanced cues.
Firstly, content itself comes in many forms: understanding a single video post may require the simultaneous interpretation of the visuals, sounds, speech and on-screen text, as well as any titles or captions. Each of these modalities presents its own technological challenges: recognising what is harmful in text alone is itself a difficult task, and one which remains the subject of ongoing research. The development of multi-modal algorithms that can process and learn from numerous cues, in order to understand posts such as the video described above, requires another level of technical sophistication.
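To give a flavour of what multi-modal processing can look like in practice, here is a minimal, generic sketch (not Unitary's actual architecture) of late fusion: each modality is encoded separately, and the embeddings are combined before a harmfulness score is produced. The dimensions and the random placeholder inputs are hypothetical.

```python
# Illustrative late-fusion sketch, not Unitary's model: combine visual and
# text embeddings into a single harmfulness score.
import torch
import torch.nn as nn

class MultiModalModerator(nn.Module):
    def __init__(self, visual_dim=512, text_dim=384, hidden_dim=256, num_labels=1):
        super().__init__()
        # Fuse per-modality embeddings (e.g. from a frame encoder and a
        # caption/transcript encoder) with a small classification head.
        self.fusion = nn.Sequential(
            nn.Linear(visual_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, visual_emb, text_emb):
        fused = torch.cat([visual_emb, text_emb], dim=-1)
        return self.fusion(fused)  # raw logits; apply sigmoid for a score

# Usage with random vectors standing in for real encoder outputs.
model = MultiModalModerator()
visual_emb = torch.randn(4, 512)   # e.g. pooled frame features for 4 posts
text_emb = torch.randn(4, 384)     # e.g. caption/transcript features
scores = torch.sigmoid(model(visual_emb, text_emb))
print(scores.shape)  # torch.Size([4, 1])
```

A real system would plug trained visual, audio and language encoders into the placeholders above, and predict many categories of harm rather than a single score.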
Another challenge in developing effective moderation is tackling the many different types of harmful content: from racist comments to video footage of violence to images of child abuse.
There are, unfortunately, far too many ways to cause harm online.
For each type to be detected, we must train our AI models on millions of examples and fine-tune our algorithms to pick up precisely what we are looking for. One question we have to ask ourselves is whether it is even possible to obtain enough training data for every type of content.
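As a toy illustration of that per-category structure (not Unitary's pipeline, and using a handful of invented examples in place of the millions of labelled items real training requires), one could train a separate lightweight text classifier for each harm type:

```python
# Toy illustration of "one detector per harm type" using scikit-learn; the
# categories and example texts are invented placeholders, and real training
# would need vastly larger, carefully labelled datasets per category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_sets = {
    "harassment": (
        ["you are worthless", "nobody wants you here",
         "great game last night", "thanks for the recipe"],
        [1, 1, 0, 0],
    ),
    "spam": (
        ["click here to win money now", "free followers instantly",
         "see you at the meeting", "lovely photo of your dog"],
        [1, 1, 0, 0],
    ),
}

# Train one lightweight classifier per category, each tuned to its own signal.
detectors = {}
for category, (texts, labels) in training_sets.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    detectors[category] = model

post = ["win money now, click here"]
for category, model in detectors.items():
    print(category, round(model.predict_proba(post)[0][1], 2))
```

Scaling this up is exactly where the training-data question above bites: every new category of harm needs its own large, carefully labelled dataset.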
A further problem that we must tackle is the subjectivity and context-dependence of harm.
What is considered harmful changes over time: words take on new meanings, and people endlessly create new symbols, hand gestures and memes. Moreover, seemingly similar content can be either damaging or benign depending on a multitude of other factors: comments that are meant ironically will be interpreted very differently from those that are not; guns could feature in footage of a real-life massacre or a movie scene; a nude portrait could be captioned with a feminist message or the abusive rhetoric of a sexist troll.
A post's meaning can be fundamentally influenced by a wide range of inputs such as tone, language, culture and wider context. We need to grapple with all of these issues in order to truly identify what constitutes harm.
No one could have predicted such misuse of the internet. Regulation did not keep pace with the rapid development and growth of online communities, and we are now in a position where policy-makers, business leaders and the technology itself all struggle to keep up.
We find ourselves asking how Facebook went from a platform for connecting university students to an effective vehicle for intervening in the US election, or how TikTok has become a source of creativity and joy as well as of toxic, traumatising material.
And who is responsible — the users, regulators, or the platforms? How should harmful content be dealt with?
Whatever the correct responses, it is clear that we must urgently act to build technology that can detect, and therefore prevent or remove, the disturbing material that makes its way onto our computer screens.
The moderation fight will never be over, but we must constantly innovate to stay ahead.
At Unitary, we are deeply motivated to address this problem and are energised by its complexities. We are developing our own novel technology which we believe will create a step change in the possibilities of online moderation and lead to real, long-term improvements. We are determined to tackle each of the challenges I’ve outlined above, to strengthen online communities, defend platforms, and protect the public from online harm. We strive to make the internet a safer place, for everyone.
Reach out to us here: contact@unitary.ai