How can we protect ourselves against social media toxicity?

Beverley Glick · Published in Zgmund · Apr 25, 2021

I was in the latter stages of my career as a national newspaper journalist when the internet went mainstream, and at the very end when social media arrived. I remember feeling grateful that I had grown up in the analogue age when it was easier to protect your privacy.

When I signed up to Facebook in 2007, I was careful about the content of my posts and the comments I made on other people’s posts. Why? Because I had spent years working in the media, with its stringent codes of conduct and libel laws.

I became increasingly horrified not only by the vitriol being spewed on social media in general, but also by the level of disclosure many (especially younger) people seemed to be comfortable with.

But the anonymity offered by social media profiles has shielded many from taking responsibility for their comments. If you don’t think you’ll be held accountable, you might risk saying something that you wouldn’t dare utter in a face-to-face conversation.

I’m all for the democratisation of publishing, but there is a dark side to it. And here we are in 2021, with toxicity on social media having become a huge and seemingly insurmountable problem, exacerbated by our post-truth era in which facts are debated and news can be fake. Add trolling and cyber-bullying into the mix and you have a near-deadly amount of poison in the system.

Words have consequences

In the digital age, words have become weaponised — but we have seen time and again that words have consequences, especially in the political and cultural arenas.

If leaders use words as weapons, their followers are going to take that as permission to throw word-bombs themselves.

But if we want to stem the flow of poison, the responsibility starts with us and the language we use on social media. What might seem like a flippant comment to us might come across to the recipient as deeply hurtful, even if that wasn’t our intention.

This toxicity takes a toll: 41 per cent of Gen Z social media users say that social media makes them feel sad, anxious or depressed.

So how can we begin to hold ourselves accountable for the words we share online?

In a recent Forbes article, Carrie Kerpen, CEO of digital agency Likeable Media, suggested five ways in which we can spread joy on social media and reconnect with its original aims of connecting people and creating community:

  1. Pay a friend a compliment or send a message of encouragement.
  2. Promote someone else’s work.
  3. Give an unsolicited recommendation.
  4. Spread the word about your favourite charity.
  5. Share gratitude for the people in your life.

So yes, we can take personal responsibility for what we publish, but what about the social media platforms themselves?

The vital role of AI

It has been widely reported that Facebook employs moderators to check for harmful content — and that they are at risk of PTSD as a result.

In fact, just last week, the platform said that its staff were working around the clock to identify and restrict posts that could lead to unrest or violence after the verdict was announced in the murder trial of former Minneapolis police officer Derek Chauvin.

It seems they were admitting to dialling down toxic content, at least for a short while. Which raises the question: what is the dial set to on a typical day?

It makes sense that social media companies are turning to AI to do a job that is so time-consuming and detrimental to the wellbeing of humans.

But it’s not an easy task. According to a recent article in Scientific American, a tool called Perspective API, which was produced by Jigsaw and Google’s Counter Abuse Technology team, faced criticism when its “toxicity score” turned out not to be flexible enough for the varying needs of different platforms. And although these tools are evolving all the time, they still have their limitations.

The following example is given by the author of the article, computer vision engineer Laura Hanu: “We noticed that the inclusion of insults or profanity in a text comment will almost always result in a high toxicity score, regardless of the intent or tone of the author. As an example, the sentence ‘I am tired of writing this stupid essay’ will give a toxicity score of 99.7 per cent, while removing the word ‘stupid’ will change the score to 0.05 per cent.”

You can see the problem here. The word “stupid” is potentially toxic — but that is completely dependent on the context in which it is used. As any journalist would tell you, there is no meaning without context, and that’s what the AI needs to understand in order to make a nuanced judgment.
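The failure mode Hanu describes can be sketched with a toy word-list scorer. This is purely illustrative (it is not how Perspective API or any production system works): it flags any comment containing a listed word, with no sense of who or what the word is aimed at.

```python
# Toy illustration of context-blind toxicity scoring.
# A real system uses machine-learned models; this word list is hypothetical.
PROFANITY = {"stupid", "idiot", "moron"}

def naive_toxicity(comment: str) -> float:
    """Return 1.0 if any listed word appears, else 0.0 — no context considered."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return 1.0 if words & PROFANITY else 0.0

# A self-deprecating remark and a direct insult score identically,
# while removing one word flips the score entirely.
print(naive_toxicity("I am tired of writing this stupid essay"))  # 1.0
print(naive_toxicity("I am tired of writing this essay"))         # 0.0
print(naive_toxicity("You are stupid"))                           # 1.0
```

The sketch makes the point concrete: a scorer keyed to surface vocabulary cannot distinguish venting about an essay from insulting a person, which is exactly the context problem described above.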

Zgmund — the complete package

Zgmund AI understands conversation along with the context in which it is shared, thus creating an accurate analysis of what humans feel; while Zgmund App demonstrates how powerful this analysis can be in the context of anonymous emotional support groups facilitated by super-empathic AI.

Toxicity detection is an essential embedded ingredient within Zgmund’s empathic AI, and users are made aware if any of their messages have been blocked due to a high toxicity score. This is one of the measures used to keep the conversation safe and friendly.

“Zgmund’s core value is to protect its users,” says Ohad Gerzi, CEO of Zgmund. “We do that by building a particularly sensitive AI. Toxicity detection is one of many measures our system puts in place in order to create a supportive, psychologically safe environment for participants, and it can also be observed while using our Psychological API.

“The real benefit of this high sensitivity is to address such toxic violations differently, depending on the context of the conversation. The context allows Zgmund to be muted or strongly opinionated about such instances, in the same way that participants can decide how to act in the context of the conversation. This kind of conversational moderation creates the magic of a safe support-group environment, and the most sophisticated, context-aware Psychological API.”

With AI constantly improving its levels of sensitivity and understanding, there is hope that social media will become a much safer and more amicable environment in the years ahead.

Originally published on Zgmund.com, April 25, 2021 (https://www.zgmund.com/blog/a006/).

Beverley Glick is a former national newspaper journalist who believes in the magic of language and the power of a story well told.