Are we searching safe?
Parental (out of) control
Last week YouTube announced it was cracking down on inappropriate videos aimed at children on the platform. The videos, which feature well-known characters in indecent situations, are designed to exploit YouTube’s algorithms to reach large numbers of children and generate ad revenue. A widely reported example features Peppa Pig on an unfortunate dental visit.
The recent news about inappropriate content on YouTube has drawn fresh attention to the topic of child protection online. This week’s mailer will assess parental control in an age where children have unprecedented online freedom.
Increasing digital independence
Generation Z (generally agreed to be those between 6 and 21 years old) has unprecedented independence in the digital realm, facilitated by the proliferation of connected devices like tablets and smartphones. Millennials might be the digital pioneers, but Gen Z are the digital natives.
The web brings countless sources of educational content, entertainment and means of self-expression. However, there is a dark side, from exposure to inappropriate content at young ages to the adverse effect of certain social media platforms on young adults’ wellbeing. Last year, research commissioned by the NSPCC found that 53% of children aged 11–16 had seen explicit material online, whilst 14% had taken nude photos of themselves.
The harmful effects of social media on children have been widely discussed. A recent study by the Royal Society for Public Health and the Young Health Movement found that four of the five most popular social media platforms have had damaging effects on mental health and self-esteem. YouTube was the only platform deemed to have an overall positive effect.
Freeeeeeeeddddoooooommmmmmmm vs safety
Because children have such freedom on their devices, parents struggle to monitor their online consumption. Parents set boundaries, but are forced to trust that their children will obey them. Some monitor their child’s online behaviour using content filters, impose rules restricting internet access and app downloads, introduce ‘walled gardens’, or implement advanced social media and mobile supervision (which can feel more 1984 than Ben 10). However, these tools are not widely used: just over half of parents are aware they can enable content filters, and only a third currently use them.
Yet because of children’s independence, direct supervision is almost impossible. Only one in four parents claim to sit next to their children and directly monitor them online, while a third check their children’s browser or device histories. As such, parents are usually unable to take proactive steps to influence behaviour, acting only after overhearing or seeing children view inappropriate content, or after a child has come to them with specific concerns.
As a result, child psychologists have stressed that an open dialogue with children is necessary to understand their online activity. As Ofcom found last year, 84% of parents have talked to their children about managing online risks. This is an important start.
Tackling through learning
There has also been a range of impactful initiatives aiming to help tackle this issue. Kaspersky, the Russian multinational cybersecurity firm, offers content guidelines for every age group, encouraging a dialogue around online child safety. SafeToNet, a new entrant in the space, places this discourse at the centre of its message. And with the government releasing a new targeted approach to internet safety, there are now progressive solutions and guidelines to help concerned parents start conversations about online safety, before their children stumble into Charlotte’s dark Web.
If you would like to discuss safe searching or any of the other topics in this mailer, please don’t hesitate to get in touch.