Silent guardians (of our sanity)

Monika Mani Swiatek
Published in My 52 problems
Feb 25, 2020 · 6 min read

We all know that the Internet is a Wild Wild West where anything can happen, but as we live our lives and digest what social media serve us, we can't see that any more. We take things as they are: funny kittens, crazy tweets, memes and other stuff that makes us laugh, or angry, but only in a moderate way.

We are safe here: the content that reaches our screens is filtered.

There are things we wouldn't like to see in our social media feed (don't worry, the picture is mine).

Things we don’t know about

You have probably heard about horrific videos of decapitations or cruelty towards animals appearing online. We don't see that now (or at least not so often). We don't have to worry that, while scrolling the Facebook feed or looking for nice music on YouTube, we'll stumble across hardcore porn or a violent video that takes us by surprise (which would be even more of a problem if we were scrolling on a desktop in the office or a cafe).

But that's not because such content doesn't exist. It does, but there are people whose task is to stop it as quickly as possible by reviewing flagged posts. What they do is called “content moderation”.

The new is coming… and causing new problems

Technological progress is said to be taking over jobs, but it also creates new challenges that technology cannot cope with and which require input from a human being.

One of the nightmares of big social media companies is content that may put off the people using the platform for pleasure, or the potential advertisers using it for business. AI is not yet (and for a long time probably won't be) clever enough to assess whether a video someone posted shows scenes that are fine or scenes that shouldn't reach people's eyes. After a few failures, when graphic footage from terrorist attacks and other violent events stayed available online, big tech companies realised that this would hurt their business (I don't believe they worry much about the wellbeing of users). So they decided to harness human intelligence and give people the task of moderating content and filtering all the “bad stuff” out.

Content moderation sounds like a “normal” job, but it's really far from it. It's done by people who spend their whole working day watching disturbing videos, sometimes multiple times over, to assess and reassess whether they violate the terms of use.

The grey area

Imagine you've received an offer to work for Google, YouTube, Facebook (you name it). You're happy: you'll be working for a big tech company right after graduating!
Your enthusiasm dims a little when you see that the offer is most likely a contract: you're not hired directly by the big tech company but by one of its contractors, and alongside the contract there are other documents to sign… one where you acknowledge that your new job can cause post-traumatic stress disorder (PTSD), and a non-disclosure agreement stating that you can't talk about it with anyone, not even your co-workers. Oh, and the salary is one of the lowest in the industry (especially when you work overseas, where content moderation is quite often outsourced). You probably won't sit in the main company building either; the things you do are a bit dodgy, so you work in a less attractive area. You're a bit like someone who has to be there, but everyone is embarrassed by your presence, so they keep you hidden in the basement…

Here comes the new… and unregulated

Content moderators monitoring objectionable material often view hundreds of disturbing images all day long, five days a week. Professions exposed to this kind of graphic violence, like first responders who may witness human suffering several times a day, have special policies and procedures whose aim is to support them and look after their mental health and wellbeing. After a traumatic event they have mandatory sessions where they process what happened, so they won't replay it in their heads over and over. Content moderation is quite a new profession, and there are no regulations that would give moderators support proportional to the risk and pressure they're under.

In theory, moderators have access to support services, including a hotline and a wellness coach, but nothing is provided by medical professionals who could actually treat mental disorders. You could look for private health care, but considering the low salary this job comes with, that's unlikely.

Now imagine all of that, plus the fact that you signed a non-disclosure agreement, so you can't seek help because you can't speak about it (even if you do have the money).

Recruiters are cheeky and hunt for fresh graduates who have all these colourful visions of working for big tech being a dream come true.

Usually, they leave after three to six months.

It's not a job where you can be happy and think about career development. Everything you could move on to would still involve watching horrible videos, whether as a content moderator or as a team leader reviewing the videos your colleagues have assessed.

The human factor

Companies benefit from human input: our ability to review content with common sense, a critical eye and empathy. But too often these pictures stay with the moderators for a long time. It's not a career you can simply drop before moving on.

I won't give you examples here, but you can read the stories of people who recently sued Facebook.

You may say they knew what they were getting into…

People starting this job are not prepared for it. When someone is about to work as a homicide detective (who may need to view disturbing scenes), they need experience as a police officer and training meant to prepare them for the trauma they may face at work.

Imagine yourself working as a content manager somewhere and moving to content moderation in one of these nightmarish departments…

Tech companies do not recognise all the risks that this kind of job brings.

A 9 to 5 nightmare

Imagine going to work and, from the start of your shift to its last minute, watching every suspicious video. You watch it in detail; if required, you re-watch it to assess whether what it shows can stay on the Internet or should be taken down. You watch videos of rape, brutal violence, child porn and other things I don't even want to imagine.

When it's 5pm you go back home to your family, trying to forget the things you saw.

It's not a single experience, it's not one traumatic episode; it's constant exposure to the worst graphic violence. It's a job that is crucial for companies like Facebook and Twitter, which rely on an army of workers employed to soak up the worst of humanity to protect the rest of us. They chew them up until they've had enough, then spit them out into a world that will never look the same to them.

Are there other options?

The Internet is an unregulated being with a “life” of its own, and it has to be watched. AI, the magic jack of all trades that is always called in for such tasks, cannot do that. Content moderation is one of the hardest, least gratifying jobs people can do. But it's a job that, as long as we have the Internet, needs to be done.

I think it's a job that should be recognised, regulated and rewarded accordingly. Big tech companies are not organisations whose budgets depend on the government. They are focused on profit, and they should invest in making their products safe for users and take responsibility for the consequences of what the technology they came up with does to people, employees included.

The Internet opened a door to the future for us, but for some, it's a door straight to hell.

This post was inspired by a recent meetup of the Anthropology and Technology group, where Dawn Walter, the organiser, talked to Alan Winfield, Professor of Robot Ethics.

Thanks for reading my 27th story from the My 52 problems series, which is part of the #write52 initiative.

If you have questions, write here or on Twitter.
