Why asking where to set the Digital Age of Consent is the wrong question.
When I was 13 I had 4 email addresses. It started with an MSN account and a Gmail account. The other two were created for one specific reason — I wanted to download a free demo on the PlayStation store and wasn’t 18.
So, I set up the third email account, but messed up. I registered a new account on the PlayStation store and still accidentally put in my real date of birth. On to the fourth email address and I set my online age as 6 years older than I was. Didn’t even have to change my name. I could finally download the demo, and Sony couldn’t do anything about it.
(And by the way, before someone yells “but if your parents had known…” — my parents and I always had a solid, honest relationship. I just wasn’t bothered to ask them to sign in through one of their email addresses when I could do this so easily myself.)
After the 25th of May this year, when GDPR comes into force and our Data Protection Bill is enacted, incidents like the one above will be illegal for people under 16. And according to (at least) one TD, companies will be liable, not the teenagers. This makes sense, partially — it would be unenforceable if we were expected to penalise teenagers for this. Fair enough. But why is it the fault of a company if someone young blatantly breaks its rules? And how would we enforce that against the company — we haven’t exactly got the best track record for getting corporations to comply with our laws…
There are plenty of concerns around the effect of smartphones and social media on young people’s mental health. But every policy conversation on this issue has to weigh up the pros and cons. Luckily for us, researchers in the UK recently made an excellent, thorough submission to a Parliamentary Committee on this very topic. It’s worth reading the whole piece to get a grasp of where the research currently stands. It outlines that while the research is still in its early days, the best available evidence supports a Digital Goldilocks Hypothesis, where screen and social media use across several activities follows an inverted U-shape: moderate amounts of social media use and general screen use correlate with higher levels of well-being. The submission also highlighted that most teenagers report that social media makes them feel more connected to their friends, and that it is an important place to get support from friends during difficult times.
There are plenty of concerns too — most studies on the relationship between social media and sleep found that they were negatively correlated (as the hours for one went up, the hours for the other went down). However, it has also been suggested that young people who are struggling are more likely to start using their phones at night because they can’t sleep. This means that the relationship between sleep and social media is not as simple as one depleting the hours of the other. There are also plenty of concerns surrounding body image and social media, but there has yet to be any evidence of causation, as research has been almost entirely cross-sectional (in other words, we have yet to see the effects over long periods of time). We don’t know yet if young people who are struggling with their body image are more likely to be drawn to social media sites like Instagram, or if Instagram is actively causing these concerns to arise or get worse. We can’t make a clinical assessment on that until we have enough evidence on the matter. And as there are potential benefits to teenage social media use too, a ban for under 16s seems premature based on the available evidence.
Digital Age of Consent
So why is this important? This week, the Dáil voted to change the digital age of consent to 16. Companies now cannot process the data of people under 16 without the consent of their parents. Sounds good on instinct, but there’s a catch — there’s no realistic way to enforce it. This might prompt people to think, who cares? The answer is many experts, actually. The main concern is that people under 16 have no intention of stopping using social media and will now lie about their age. Social media sites also now have no responsibility to make their sites safer for people under 16. Both of these issues raise huge questions about teenagers being targeted by ads and, more alarmingly, by online predators.
The arguments for age 13 and against age 16 have been laid out very well in this open letter, which was signed by many children’s rights advocates, experts and psychologists. It’s well worth a read, and the arguments have been made so many times online that I don’t feel the need to outline them again here. However, there is one extra point I’d like to make that hasn’t been raised yet, and that I find extremely worrying when it comes to children’s and teenagers’ safety online.
The side advocating for a digital age of consent (DAoC) of 16 has not taken seriously the prospect of young people lying about their age online. This is in spite of the fact that many people under 13 already lie about their age to get onto these sites in the first place (p.42). They have also not been open enough to the possibility of young people faking consent from their parents — it’s quite simple to do, as the example at the start of this article shows. And this is not just a concern because young people are lying; it actively exposes them to services they are not ready for and which may cause them harm.
If a 13-year-old started using Facebook now, while lying that they were 16, then they would be eligible for a Tinder account by the time they are 15. (Tinder is generally accessed through having a Facebook account.) Currently, there is no advantage to people over 13 lying about their age (thanks to a de facto DAoC of 13 set by the social networks). This means that people aged 13–15 — the ones who have set up their accounts with their true age — will move on to age-appropriate sites when they are at the right age. The fallout from the new DAoC (which will only realistically result in more young people lying about their age) could expose young people to sites like Tinder or other dating sites for people aged 18+ long before it is in any way suitable for them. This could create situations where they are either open to online predators, or even matching with young adults as if they were 18 themselves. Dr. Mary Aiken has rightly raised many concerns before regarding child safety online in terms of child sexual abuse materials. Creating situations in which more teenagers lie about their age leaves them more vulnerable to these issues, because they will be assumed to be over 18 in inappropriate and potentially dangerous situations online. This is not something we want to be encouraging, as it could cause huge problems down the line.
As outlined in the open letter, the Data Protection Bill is a data protection law. Changing the DAoC just because it gives parents a false sense of security is not an appropriate way to make child safety law, and could actively put more young people at risk. Advocates for 16 have not offered viable ways in which a DAoC can actually be enforced. The PPS number solution, for example, would require handing over even more of young people’s personal data to the very companies we’re currently trying to stop from collecting it. The early example from WhatsApp last week was so easy to exploit that no young person who wanted to get around it would struggle to do so. Instagram have also put forward an under 18/over 18 question, which advocates for 16 have lapped up as if it’s a game-changer. Realistically, if we had chosen age 13 for the DAoC, then Instagram would have had to take real responsibility for all of its users, instead of now being able to ignore the safety of 13–15 year olds on their site.
Thankfully, one of the other amendments added to the Data Protection Bill provides for a review of the age of consent in three years’ time. Hopefully the issues raised here won’t have occurred by then, but this is surely an unnecessary risk to take for an unenforceable law. Ultimately, what this whole debate has shown many of us on the 13 side is that while everyone wants internet safety for young people to improve, toying with the DAoC is the wrong way to go about it. There are a lot of potential moves that could be made in education, for example, for both young people and the parents who are trying their best to keep up with what their kids are up to online. Other aspects of the Data Protection Bill, which make micro-targeting and profiling of children an offence, are welcome, but will be undermined if more young people lie about their age online.
Kids and teenagers are a lot smarter than we give them credit for. We’re totally underestimating them if we assume that the average teenager can’t make it past a simple age question to use these services. And it’s not always out of malice that they join social media sites by faking permission from their parents; it’s because it’s easy enough and, more importantly, worthwhile enough to do. Young people are going on social media in droves for a reason. It’s best that we try to figure out why that is, rather than setting unenforceable age limits in law, encouraging young people to lie, and unnecessarily putting their safety at greater risk.