Self Harm on Social Media and Policing the Internet.

Trigger Warning: this post contains content and links that could upset or trigger those thinking of self harm.

The BBC invited me onto one of their news programmes to talk about a disturbing case. Tumblr is being petitioned to take down self harm profiles. The posts on these profiles are grim. Pictures of self cutting and memes about suicide adorn them. They are full of self loathing statements like, “If I died tomorrow, remember what you said today”.

The full extent of the sadness and depression afflicting these young minds is on display in these blogs. What’s worse, or better, is that they are using social media to connect with one another and share their feelings and images of self hatred.

The first knee-jerk instinct is to try and stop it. The jolt of seeing blood ooze out of a young wrist is frightening, particularly for parents. If young, vulnerable people see this, if it is normalised, will it lead to copycat self harm and trigger those who may have stopped or sought professional help? Or is this something that happened well before the Internet wove itself into our lives, with these forums simply allowing otherwise horribly isolated individuals to connect and offload their worst thoughts and feelings to people who understand better than most?

There is no way of simply switching this content off, and even if we could, it wouldn’t fix the problem. In the days of Elvis and his dangerous pelvis, parents took away teenagers’ record players. Kids simply went round to friends’ houses or tuned into the pirate radio stations that were playing his music. Pulling the plug on these sites or posts isn’t a realistic option. Users can set up another blog in three minutes on this site or on dozens of others, and they can do it for free, like the one I’m writing on right now.

There does seem to be a growth in this content on Tumblr. The company has been asked by the government to help deal with the situation, and it has made efforts to curtail this kind of material on its platform. If you go to Tumblr and search “Suicide” you get a kindly message with links to help organisations (only US ones at this time). Its terms of service include details about how these kinds of posts are not within the community rules.
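For anyone curious what such an intercept amounts to in code, here is a minimal sketch. This is not Tumblr’s actual implementation; the flagged terms, the message text and the function names are all invented for illustration.

```python
# A toy search intercept in the spirit of what Tumblr does: show a help
# message before any results when the query matches a flagged term.
# Terms, message and names are invented; this is not Tumblr's code.
HELP_TERMS = {"suicide", "self harm", "self-harm"}

HELP_MESSAGE = (
    "Everything okay? If you or someone you know is struggling, "
    "these organisations can help: [links to help organisations]"
)

def search(query, run_search):
    """Intercept flagged queries with a help message, then run the search."""
    if query.strip().lower() in HELP_TERMS:
        print(HELP_MESSAGE)
    return run_search(query)

# Hypothetical usage: run_search stands in for the platform's real search.
results = search("suicide", run_search=lambda q: [])
```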

This hasn’t stopped the pictures and the posts seeping through. Tumblr has become the community hub, almost a cult hub, for self harm content. This Buzzfeed article from 2013 does a good job of articulating some of the positives of blogging for self harmers. They feel less alone, they feel supported by their community, and they feel like these are the only people they can talk to.

Whose Ethics Decide?

Who is going to decide which profiles and posts are triggering or morally wrong? The government has higher priorities on its list: terrorism, to name one, and, in the case of the UK Conservative party, an odd obsession with certain types of pornography. It also doesn’t have thousands of digital experts hacking and monitoring the billions of social posts made in the UK every day. And if it did, would we want our government deciding what we should and shouldn’t see based on its own moral and ethical codes, based on what it thinks is good for us?

That leads us to the social networks. Should our bro friends from Silicon Valley decide what we are and aren’t allowed to access? How are they going to determine what’s morally questionable and what is unacceptable? Where will they draw the line? Is a post about feeling a bit low OK, but not one that says the writer thinks their friends hate them and want them to die? Is a post that shows one cut allowable, but not one that shows cross-hatched cuts? This is the point where people shout, “Algorithms! They can write algorithms using Artificial Intelligence. That will solve the problem.”

“Computer, find self harmers and stop them”

We can write algorithms now that could seek out some, though not all, of this kind of content and alert the website or the user. At present we can process text using natural language processing (NLP), a branch of Artificial Intelligence that teaches a computer to understand natural language rather than just key words. With it you can sift through text and surface content. It’s smarter than keyword search, which is why so many people see it as the holy grail of Internet regulation. However, NLP is a young discipline. It isn’t at the point where you can get it off the shelf; it takes months of careful, planned work with highly qualified academic engineers to build one algorithm that works in one specific way.
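To make that concrete, here is a minimal sketch of the flag-and-refer shape such a system might take, assuming a dataset of posts labelled by mental health professionals. The example posts, the labels, the threshold and the function names are all invented, and a production NLP system would be vastly more sophisticated than this bag-of-words toy.

```python
# A toy classifier that flags posts for human review, never deletion.
# Training examples, labels and threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts labelled by mental health professionals.
posts = [
    "I can't stop thinking about hurting myself",
    "check out my bandage tutorial for first aid class",
    "nobody would notice if I was gone",
    "great gig last night, feeling good",
]
labels = [1, 0, 1, 0]  # 1 = possible self harm content, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def triage(post, threshold=0.8):
    """Route a post: surface help and a human reviewer, or do nothing."""
    score = model.predict_proba([post])[0][1]
    if score >= threshold:
        return "show help resources and refer to a trained moderator"
    return "no action"

print(triage("I want to disappear and never come back"))
```

Note the design choice: the classifier never removes anything. It only routes a post towards help and a trained human, which is the only version of “Algorithms!” that squares with the rest of this argument.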

Much as we like to imagine it, we are not yet at the point of saying, “Computer, find self harmers.” Image recognition is harder than text. We are asking a computer to differentiate between a picture of a person with a cut on their arm and a medical example showing how to bandage a wound. Computers can’t do that yet.

When computers can do it, and it isn’t too far off, how much of the decision making do we want to give them? What if the person posting the item on self harm is trying to help someone else? Shutting down conversations and communities and making mental health issues taboo doesn’t work. That way leads directly back to Bedlam.

When dealing with vulnerable teenagers, shutting it down is not an option, and it’s a bad idea. Instead we need to find ways to help them in the world in which they exist, not try to drag them out of it by the ear. That world is online. It’s their tight-knit group of friends who are suffering from the same problems.

Mental health apps abound. Online counselling is widely available, even in the UK. If social media platforms are going to do more to help in this area, and they should, then a psychological intervention could prove more helpful than a ban.

When a user starts posting self harm content, the social network could steer that person, or group of people, towards help in a number of ways.

Psychological profiling has been used to great effect in recent times, particularly in politics. Whilst we’ve so far only seen this information used for nefarious purposes, it could be used for a great deal of good. We know that the sensors on our smartphones, for example, can reveal how quickly we move, where we go and how far. This information alone can provide indications of emotional stability or instability.
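As a purely illustrative sketch of the kind of signal involved, here is how you might turn raw phone location samples into one crude mobility feature. The sample data and function names are invented, and the idea that low day-to-day mobility can correlate with low mood is an assumption drawn from the research this paragraph alludes to; real studies use far richer models than this.

```python
# Illustrative only: one crude mobility feature from phone location samples.
# Data and names are invented; real profiling uses far richer signals.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def daily_distance_km(samples):
    """Sum distances between consecutive (lat, lon) samples for one day."""
    return sum(
        haversine_km(a[0], a[1], b[0], b[1])
        for a, b in zip(samples, samples[1:])
    )

# Hypothetical day of (lat, lon) samples from one phone.
day = [(51.5074, -0.1278), (51.5080, -0.1281), (51.5074, -0.1278)]
print(f"distance travelled: {daily_distance_km(day):.2f} km")
```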

We already have it in our power to help vulnerable people online. The government, UK or US, is the last to understand how, or to care enough to implement it. That leaves us with each other: the communities and those who run them.

What We Can Do.

You can’t just shut it down. Google “petition Tumblr self harm” and you will find a bunch of petitions asking Tumblr not to take down self harm posts, as well as petitions asking it to do exactly that. The only people who can really make a judgement call on each and every post are trained mental health professionals.

Perhaps it’s time to build good mental health practice into social networks. The networks themselves need to connect with mental health charities and professionals. At that point algorithms become enormously powerful, pointing the right people to the problem. Social networks and governments do have some responsibility to help vulnerable people. Rather than looking for ways to close these spaces down, we need to reach out to the dark places on the net. Give people the support they need to talk openly to a professional about what they are struggling with. Make it accessible from their keyboard, from their safe space. It’s kinder, smarter and easier than trying to shut them up.