Mental Health in the Modern Age
Social media and mental health. Let’s talk about it.
Especially for the young and impressionable (read: teens), there are a few very real ways our online social activities affect mental health. Before we get too detailed, it’s worth mentioning there isn’t a ton of hard science on this. That’s perhaps the most interesting part of this whole debate: what isn’t being considered rather than what is.
Take, for example, the fact that tons of platforms promise a safe and secure environment, but don’t say all that much about exactly how they define “safe” and how they maintain it. Not only is there a lack of transparency, sometimes there’s not even a way to get in touch with someone at the company. More and more often “support” is becoming a set of forms buried deep within a user forum somewhere.
It’s more important than ever to understand what’s up with mental health and social media, without all the sensationalism. Let’s start with the facts first.
What’s the actual negative effect of social media on my or my kid’s mental health?
For most teens, the biggest risks associated with social media seem to be sleep deprivation and cyberbullying, along with some harder-to-quantify negative feelings like increased FoMO (Fear of Missing Out), anxiety, depression, and body image concerns. This information comes from a recent study by the Royal Society for Public Health, an independent UK charity. Their Young Health Movement (YHM) report on teens and social media draws on pre-existing research and adds a survey of over 1,000 UK teens and young adults aged 14–24.

Now, it’s easy to scoff at what seems like the obvious. Of course tap-tap-tapping away on our phones late at night affects our sleep. Everyone knows that. However, one fifth of the young people surveyed admitted they actually wake up in the middle of the night to check social media. That’s a minority, fair enough, but that still means potentially 20% of UK youth can’t even survive a single night without checking in online.
So when YHM says social media is problematic for sleep and that cyberbullying is a real issue, that’s not clickbait or fearmongering; it’s the identification of a serious problem. According to their survey, 7 in 10 young people have experienced cyberbullying, and 37% of them reported being bullied with “high frequency.”
Not all networks are the same.
Every social network has its own flavor, its own way of impacting our mental states, for positive or negative. Facebook is reportedly the worst for cyberbullying. It was also the worst for exacerbating depression. Instagram was potentially the most harmful overall, especially for body image and anxiety, and Snapchat was the worst for intensifying FoMO. That’s right, FoMO is real, and YHM measured it.

All the big five social networks — YouTube, Instagram, Facebook, Twitter, and Snapchat — are great at promoting community building, self-expression, and emotional support, but interestingly YouTube was head and shoulders above the rest. It was the only network to be rated as more positive than negative, and unlike the other four networks, all of which exacerbated loneliness, anxiety, and depression, YouTube actually had a positive effect.
Plus, YouTube was the highest rated for helping young adults develop self-identity and awareness of other people’s health experiences. Snapchat, on the other hand, actually negatively affected teens’ awareness of and access to other people’s health experiences.

Interestingly, the YHM report didn’t look at messenger networks like Facebook Messenger and Kik, or at calling and video chat services like Skype and FaceTime. Networks that provide real-time, private discussion are much more effective at generating real conversation than the often empty social engagement of likes or streaks. In the future, it would be useful to see more nuanced data on how these types of networks affect mental health in young people, and whether they have the same net negative effects on teens that more curated networks like Instagram do.
Each platform handles mental health differently.
Just as each platform affects our mental health differently, each platform handles issues of mental health differently. Instagram is an especially interesting case. This year Instagram released a reporting feature specifically for mental health. If another user has anonymously reported one of your posts, you get a message that looks something like the one seen here.

Instagram also has over 60,000 banned hashtags (like pro-self-injury #selfharm, pro-anorexia #thinspo, and a number of deeply coded hashtags like #blithe) that trigger a Content Advisory Warning and forward users to support pages in its help center when queried. Instagram’s terms of service were also updated to specifically prohibit glorifying self-harm. It’s an acknowledged problem, to say the least.
Instagram also ran a campaign in May of this year around these issues. You may have seen posts with hashtags like #HereForYou, #ItsOkayToTalk, #MentalHealthMatters, #RecoveryIsPossible, and #EndTheStigma in an attempt to open up conversation about suicidal thoughts and depression, and to lessen the stigma around mental health issues. It’s unclear whether this had a real impact on improving teens’ ability to confront, discuss, or seek help for mental health struggles, but it’s a step in the right direction.

Instagram isn’t the only platform to take action. Several platforms link to suicide prevention hotlines and crisis management nonprofits. Some of the newer platforms with teen-heavy user bases, like random Snapchatting service Monkey, outline specific policies on cyberbullying. Almost every network offers in-app reporting, a team of moderators, and clear community guidelines.
It’s not enough though.
We can’t depend entirely on the platforms to address mental health issues.
According to the YHM, the overwhelming majority of young adults who use an in-app report feature say no action was taken.
This can seem maddening to a user, but from a developer’s perspective, I understand why platforms have difficulty policing their communities. The sheer volume of requests the biggest platforms receive on a daily basis is enormous, and the queue of content to be reviewed and moderated is a never-ending firehose, only some of which is legitimately in need of moderation.
In addition to manual (sometimes called “passive”) moderation, developers can also consider automatic tools like advanced filters, blacklisted words and tags, and even machine learning to help thin the queue of content to be moderated by a real, live human. These more “active” approaches, while helpful, aren’t anywhere near complete solutions. Platforms that already employ these techniques still struggle.
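To make that a little more concrete, here’s a rough sketch in Python of what a simple “active” pre-filter might look like. The tag lists, phrase list, post fields, and queue are all made up for illustration; this isn’t any real platform’s pipeline. The idea is just that automated rules flag a small slice of incoming content, and only that slice lands in the queue a human moderator has to work through.

```python
# A minimal sketch of an "active" pre-filter that thins a human review queue.
# The blacklists, post fields, and queue are illustrative assumptions,
# not any platform's actual implementation.
from collections import deque

BLACKLISTED_TAGS = {"selfharm", "thinspo"}      # hypothetical, tiny subset
BLACKLISTED_PHRASES = {"want to hurt myself"}   # hypothetical phrase list

human_review_queue = deque()

def pre_filter(post_id, text, tags):
    """Route a post to human review only if an automated rule fires."""
    lowered = text.lower()
    hits = [t for t in tags if t.lower() in BLACKLISTED_TAGS]
    hits += [p for p in BLACKLISTED_PHRASES if p in lowered]
    if hits:
        # A real, live human still makes the final call on flagged posts.
        human_review_queue.append((post_id, hits))

pre_filter("post_001", "feeling low tonight", ["blithe", "selfharm"])
pre_filter("post_002", "look at my cat", ["cat"])
print(human_review_queue)  # deque([('post_001', ['selfharm'])])
```

Even this toy version shows the tradeoff: the coded tag #blithe slips straight through because it isn’t on the (tiny) list, and a tag like #cat can only be judged in context, which rules alone can’t provide.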
For one, there’s plenty of cultural nuance to overcome. Many relevant hashtags aren’t obvious to those not in the know. Across all social networks, innocuous-looking #deb and #annie have come to mean depression and anxiety, respectively. Some tags, like #cat, are only relevant in context, since plenty of posts and tweets tagged “cat” are from pet owners and animal lovers.

Some platforms, including Facebook and Instagram, are employing artificial intelligence to help filter at-risk content (like teens in need of support) and disturbing content (like decapitation videos or pornography) out of the pipeline of other content. Even though AI might be able to identify potentially depressed users with a fairly high level of accuracy (reports of 70% accuracy in the case of an AI system Instagram is testing), in plenty of cases real people can quickly outsmart a computer with a simple spelling change. #selfharm becomes #selfharmm and #blithe becomes #ehtilb.
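Here’s a toy illustration of that cat-and-mouse game, again with a made-up list of flagged tags. A loose normalization step, collapsing repeated letters, catches the #selfharmm trick, but a reversed or freshly invented tag sails right past.

```python
# Illustrative only: why exact-match tag filters are brittle.
import re

FLAGGED = {"selfharm", "blithe"}  # hypothetical, tiny subset of flagged tags

def normalize(tag):
    """Lowercase and collapse repeated letters: 'selfharmm' -> 'selfharm'."""
    return re.sub(r"(.)\1+", r"\1", tag.lower())

for tag in ["selfharm", "selfharmm", "ehtilb"]:
    print(tag, tag in FLAGGED, normalize(tag) in FLAGGED)

# selfharm True True    <- caught either way
# selfharmm False True  <- the extra letter fools exact matching, not normalization
# ehtilb False False    <- the reversed tag evades both
```

Every new evasion has to be anticipated and hand-built into the filter, which is exactly why coded tags keep winning the race.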
According to a study published in the Journal of Adolescent Health focusing specifically on Instagram, despite a growing list of flagged hashtags, only a third of hashtags related to self-injury actually generated the Content Advisory Warnings that might help point troubled teens to support networks and other mental health resources. Add to that the fact that new coded tags like #mysecretfamily are developing every day, and it gets even trickier. The future looks like a combination of techniques layered on top of the “report-and-wait” method most platforms currently depend on.
And then there’s the larger question of ethics.
Whether actively moderated by AI or passively moderated by teams of humans who review flagged content, someone has to make up the rules. Someone has to decide what constitutes “safe” content and to what degree we should bury unsafe content. Adrian Chen beautifully captured the dilemma in a Wired article he wrote way back in 2014.
“Beyond the psychological toll moderators face, there’s an enormous burden of judgement: they have to distinguish between child pornography and iconic Vietnam war photos, between the glorification of violent acts and the exposure of human rights abuses. Decisions must be nuanced and culturally contextualized.”
Some platforms have fared better than others. Just as an example, let’s look at Facebook’s livestream suicide trend.
Numerous acts of self-harm have made their way across Facebook, often remaining online for hours, even days in some cases. The livestreamed suicide of James Jeffrey was online for two hours and viewed over 1,000 times before being removed and seized by the authorities as evidence. The issue reached a fever pitch when it was revealed that Facebook’s official policy has been to allow live streaming of suicides.
According to an Australian ABC news report, “[Facebook doesn’t] want to censor or punish people in distress who are attempting suicide.” On the other side is the argument that exposing viewers to sensitive content has the potential to be deeply damaging. There are numerous studies about how viewing self-harm content can trigger self-harm behavior. While Facebook and other social networks may have the potential to intervene, getting those in distress help at a moment of crisis, they also have the potential to traumatize. The best course of action is increasingly unclear.
Of course, it’s not just suicides. Even though Facebook employs thousands of moderators, there have been a number of livestreamed crimes, including attacks, murders, fatal shootings, drive-bys, and racial abuse. Facebook’s livestreaming feature has become a target for both a rising trend of disturbing content and a heated debate over whether, how quickly, and in what cases Facebook should remove it.
It’s not just up to engineers. It’s up to all of us: peers, parents, and platforms.
That’s why YHM is asking for a series of social media platform and policy changes that may help. The first is a pop-up message that alerts users when they’ve surpassed a certain threshold of social media usage. The warning would let them know they’re a heavy user, and that there is a connection between heavy social media usage and the worsening of certain mental health conditions. Another proposed alert would indicate when a photo has been digitally manipulated, to make users aware of body images that are too perfect to be real.
The majority of young people seem to be in support of these changes. According to YHM, 71% of young people support pop-up warnings on social media, and 68% are for highlighting manipulated photos.
Then there are some other UK-specific suggestions — that the National Health Service disseminate information about the risks of social media, that safe social media be a required educational subject in school and in youth worker training — and a call for more research on the subject.
Education is an important first step.
For parents, just knowing the coded language distressed teens use can help. Knowing what risks are really out there can help us make more informed decisions about how we use social media. It can also serve as a starting point for often more difficult but necessary discussions about mental health.
