Instagram’s Graphic Self-Harm Content Ban Is Not Enough

Shannon Bullen
Published in The Public Ear · 5 min read · Aug 26, 2019

Instagram’s recently enacted self-harm content moderation measures are ineffectual and inconsistent. The platform needs to invest more heavily to protect its most vulnerable end-users and fix its image problem.

Source: Pexels

I’ve had an Instagram account for years. I have been blithely posting, clicking and scrolling away without giving much thought to the perils of the platform. Recently, however, my Insta-bubble has burst. The photo-sharing social networking service is facing mounting public criticism alongside a spate of incidents in which distressing content has been posted and circulated on the platform unchecked. As critics declare that tech companies have a duty of care to keep children safe, I can’t help but think of my own daughter, who is approaching those impressionable tween years. My perspective has shifted, as both a parent and a future public health professional.

The Catalyst for Change

In February, Head of Instagram Adam Mosseri announced a ban on all graphic self-harm imagery, such as cutting, on the platform in response to public furore over the suicide of British teenager Molly Russell. It was revealed that, prior to taking her own life, the fourteen-year-old had accessed distressing content about depression and self-harm via her Instagram account.

The ban includes measures that aim to block graphic self-harm content via searches, hashtags, the explore tab and account recommendations. Non-graphic self-harm content will also be harder to discover through algorithmic adjustments that will no longer boost such content. “Sensitivity screens” to blur graphic imagery have been introduced to ensure people do not unintentionally find and view self-harm content, whilst pop-up messages issue warnings to users and direct them to support.

The problem is that these measures are inadequate and easily bypassed. Instagram’s terms of service state that a user must be at least 13 years old, but with no age-verification measures in place, it is simple for a younger child to sign up. Cue: parental angst for yours truly.

Source: Instagram

A Dangerous Link

Now, before you write me off as a “Save the Children!” zealot spiralling into a moral panic, hear me out. It is well established that media depictions of suicide can increase the risk of copycat suicides, a phenomenon known as “the Werther effect”. If you recall, the Netflix original series “13 Reasons Why” was widely and publicly condemned for dramatising teen suicide, and it has since been reported that the U.S. suicide rate for young people jumped significantly in the month following the series’ release.

Recent literature reveals how exposure to online self-harm imagery in particular serves to normalise self-harm behaviours for young people and can provoke physical reactions that trigger an urge to self-harm.

Now, links are being made between exposure to self-harm content on Instagram specifically and subsequent self-harm behaviour. A recent survey of over 700 young adults in the U.S., published in the journal New Media & Society, found that exposure to self-harm content on the platform predicted subsequent self-harm behaviours and increased suicide risk in the study population. As more evidence like this emerges, will Instagram extend its protective measures further?

Profiting from Pain

As Forbes tech writer and contributor Kalev Leetaru observes in the wake of the Bianca Devins case, social media platforms profit from horror. In the same way, Instagram profits from self-harm content.

To date, there has been very little incentive for Instagram to invest in better filtering technology, with no law or regulation requiring it to do so. We cannot expect self-regulation from tech companies when doing so is expensive and would disrupt their fundamental business models.

What’s more, Instagram’s dependence on algorithms and hashtags to detect self-harm content is a flawed approach. As has been noted elsewhere, hashtags have limitations as a moderation tool, and users invent obscure variations to keep posting content that violates Instagram’s policies. Algorithms cannot factor in context and intent when making censoring decisions, and this lack of sophistication gives rise to further problems.

When non-graphic images of self-harm, such as healed scars, were found to be censored by Instagram, the hashtag #youcantcensormyskin ignited debate about which images and content are objectionable. Many decried the move as stigmatising to those in recovery from self-harm and pointed out that there is a difference between promoting self-harm and promoting self-harm awareness. Concerns remain that the content ban could, in fact, harm young people by limiting avenues for self-expression and support.

Source: Instagram

Hashtag #fail

Six months on from the content ban, recent news reports reveal that Instagram is still littered with suicide and self-harm posts. It took me approximately 10 seconds to locate graphic self-harm content without putting much thought into a search strategy, which shows just how ineffectual Instagram’s current moderation mechanisms are. It seems Instagram has neither the inclination nor the capacity to proactively and appropriately moderate content to protect its end-users, particularly those who are most vulnerable.

For an entity reportedly worth more than $100 billion, Instagram has the means to moderate the content from which it generates its massive revenue, but little incentive to do so. As more research emerges illustrating the ill effects of exposure to harmful content for Instagram users, the only solution may be government intervention to stop this wilful negligence. If not, this mama may have to implement a ban of her own: on Instagram, for good.

If you need support or information about self-harm or suicide prevention, help within Australia can be found here. For international support, click here.
