Can We Realise Children’s Rights In A Digital World?
A provocation paper
Professor Sonia Livingstone FBA, London School of Economics
Thirty years ago the UN Convention on the Rights of the Child (UNCRC, 1989) cast a global spotlight on what societies should do to make rights a reality for all children. The Convention sets out how human rights (to life and liberty, identity, freedom of expression and assembly, protection, non-discrimination, privacy, education and more) apply to children. It also emphasises specific child rights — to develop to their fullest potential, to play, to support according to their evolving capacity and best interests and to be heard by decision-makers in matters that affect them.
However, just as the vision and task of implementing the UNCRC mobilised child welfare organisations and rights advocates around the world, much else was changing too. 1989 was an eventful year, and among other things, it saw the invention of the World Wide Web, radically reconfiguring the conditions of children’s lives.[i]
Initial enthusiasm about the World Wide Web — information at our fingertips, everyone connected, unlimited opportunities for expression — seemed to include and even celebrate children as so-called “digital natives.”[ii] But increasingly, these two developments — an authoritative assertion of children’s rights, and children’s participation in what has fast become a digital world — seem set on a collision course.
This is salient in the array of online risks of harm to children — commercial exploitation, cyberbullying, exposure to extreme pornography, hate or self-harm materials, and the image-sharing and live-streaming of child sexual abuse.[iii] Also problematic, though attracting less attention, are the missed opportunities for children to learn, create and participate in an increasingly digital world. For even if children have access to the internet, this does not automatically translate into gaining the benefits of the digital world.
A child rights approach starts from the positive assertion that children are rights holders here and now. This view underpins the Convention, itself ratified by every country in the world bar the USA. In 2014, the UN Committee on the Rights of the Child clarified that the full range of child rights apply online as they do offline.[iv] However, it is turning out to be a considerable challenge to realise child rights in the context of a fast-innovating, highly commercialised and globalised digital environment. One reason is that, although children are fully one in three of the world’s internet users,[v] both individually and collectively they are invisible in the digital environment.
Imagine a child trying to enter a sex shop. The shop assistant easily identifies them as a child — and has a trusted and legally valid procedure for checking age if in doubt. The town planner licensing the shop may have rules not to situate it next to a school. Passers-by are likely to notice who goes in and may even intervene if they see a child entering, perhaps contacting their parents or reporting the shop to the authorities.
Online, this is difficult if not impossible. Children routinely occupy digital spaces in which both general and specifically ‘adult’ activities take place — sex, gambling, hate, aggression, self-harm, sale of inappropriate products. These spaces are often household names — Google, Instagram, Amazon, Flickr, eBay, Twitter, etc. But the platforms claim that they cannot tell who is a child (since many users are anonymous online or disguise their identity) and, thus, cannot treat them according to their evolving capacity or best interests. Meanwhile the state is struggling to regulate the platforms — often extremely powerful transnational organisations.
In practice, it seems, platforms must either treat all users (including children) as if they are adults (the current norm) or they must treat everyone as if they were a child (a series of failed regulatory efforts testifies to the problems with this approach).
While this regulatory conundrum is taxing many clever minds, also worrying are the changing social norms that lead some to shrug their shoulders at a supposedly lost cause — the genie is out of the bottle, they say. To pursue my analogy, adult passers-by in the digital environment find themselves bystanders to vast amounts of behaviour which, by accepted civilised standards, is not appropriate for children (just type “porn” into Twitter). But they, too, cannot identify a child online (let alone his or her parent), and they may fear getting involved. So they — we — begin to take our own inaction for granted and to consider the online environment ‘the new normal’.
Not only are children left to their own devices in a space conceived of as for adults, but also the digital environment is designed to amplify certain actions in accordance not with the best interests of the child but with commercial interests. Algorithms are optimised to recommend ever more extreme contents — whether stereotyped visions of perfect faces and perfect lives or, if you ever show an interest, escalating images of self-harm, violence or hate. Again, social norms lag behind technological developments. Mental health clinics rarely ask their child clients whether or how their difficulties are manifest online. Teachers are overwhelmed by society’s expectation that they should deal with online risks along with everything else. Parents realise they don’t know enough about their child’s online life, but they don’t know what to do.
Managing the (unmanageable) digital world
My concern here is not whether or not children accessing pornography is harmful, for the evidence is still contested. Rather, I am concerned that society is at a loss as to how to act regarding the digital environment. Should we wait for more and better evidence, or should the ‘precautionary principle’ prevail? Is the risk of harm sufficient to demand immediate attention? Or, should society instead prioritise the missed opportunities by better facilitating children’s civil rights and freedoms online? Who, even, has the authority and legitimacy to take such action as is needed?
As a rising tide of anxiety and distrust replaces once-exciting predictions of a digital future, responsibility for realising children’s rights online and preventing their violation is passed around: companies point to parents, parents point to government, and government points to companies. At the same time, some claim that no specific actions are needed at all. This is because children no longer distinguish between offline and online — so the rights of the child (and the laws and institutions which underpin them) apply in the digital world just as they did in the pre-digital world. These claims are true up to a point, helping to contradict lingering assumptions that the digital is only ‘virtual’ and thus immaterial in its consequences. But it would be more accurate to say that there are many different and distinct ways in which the online and offline are becoming entwined.
We should, precisely, investigate rather than underestimate the emerging and complex interdependencies among social, regulatory and institutional practices on the one hand, and digital technologies (including platforms, networks, services and contents) on the other.[vi] These interdependencies are, many researchers agree, amplifying and intensifying both the risks and the opportunities, with diversified consequences depending on the context. I am here reminded of a famous saying by Wilbur Schramm in the early years of television:
“For some children, under some conditions, some television is harmful. For some children under the same conditions, or for the same children under other conditions, it may be beneficial. For most children, under most conditions, most television is probably neither particularly harmful nor particularly beneficial” (Schramm, Lyle, & Parker, Television in the lives of our children, 1961: 61).
If we replace ‘television’ with ‘internet’, a fair summary of available empirical research emerges. But this does not mandate a major policy intervention. Clearly, much depends on the degree of amplification and intensification that the internet introduces into children’s lives, bearing in mind that television was, from the start, highly regulated, certainly by contrast with the early ‘Wild West’ days of the internet.
Just as social scientists reached this sensible if hardly dramatic conclusion, the game has again changed. Compare, now, children playing in the street, as they have always done, with children playing a multiplayer game. Not only is it unclear which players are children, but there’s a further problem. That multiplayer game is likely owned by a multinational corporation which does not make itself accountable to any particular local authority, and which provides no access to information about who plays, or what happens to the children it hosts, or what action is taken when a problem is reported. It is hard to discern reliably which apps bring benefits for learning or sociability, and which pose new risks.
The old model of play was, in effect, free; replacing today’s model will take new public funds. Moreover, although the online model of play may appear free, online there’s no such thing as ‘free’: today’s internet user pays with their personal data. Shoshana Zuboff likens today’s datafication of our lives — the monetisation of our play, actions, emotions and interactions — to the late medieval enclosure of the commons, the privatisation of what once belonged to everyone, the sacrifice of the public good to individual gain.
We are moving from a stage of online invisibility to one of hypervisibility: from a past in which children’s lives were largely unobserved by outsiders (because they were lived in private spaces and ignored when in public ones) to a world in which their every move is observed, recorded, tracked, profiled, targeted, nudged — by powerful digital actors (corporate and state, national and transnational, human and artificial).[vii] The consequences of such a transformation are unknown.
So the problem of not knowing who is a child online is about to be displaced by a yet more challenging problem — the emergence of a digital panopticon, an all-seeing, all-knowing digital environment which knows exactly who is a child, how they live and what they want. No wonder that calls to identify children online in order to empower and protect them are giving way to calls from privacy advocates precisely not to identify them. For privacy and autonomy appear newly under threat under “surveillance capitalism”,[viii] and technological solutions may be more privacy-invasive than the problems they’re designed to solve.
In the absence, as yet, of a mature, nuanced and trusted regulatory settlement, policy makers are facing some seemingly stark choices regarding the promotion of children’s rights in a digital world.
- Should they enable children’s participation in the digital world, along with everyone else? Or should they try to minimise risks by restricting them to child-only or even offline-only spaces? Or (as one would hope) can a better solution be found?
- Should they pay from the public purse for online provision of content and services beneficial to children (as they do offline — think of parks, schools, libraries, youth clubs, public service media)? Or should they accept the commercialisation of children’s lives as inevitable? The recent decision by YouTube, under pressure from the US Federal Trade Commission, not to monetise children’s content is surely promising.
- Should they insist on the identification of children (and/or adults) online so as to protect them better? Or is this too privacy-invasive and risky, likely to undermine children’s freedom of expression?
- Should they hold parents responsible for their child’s online well-being? Research is clear that many parents are unequal to the task, and that those least able to bear the burden are precisely those whose children are most at risk, while also the hardest to reach with advice. Moreover, parents’ protective efforts often come at the cost of the child’s privacy and freedom.
- Should they hold industry responsible for the well-being of children who use their services? Public opinion is clearly swinging in favour of industry bearing greater responsibility. But trust is also falling, and few wish to task industry with, say, our children’s digital literacy education, or with children’s safety. Can they be made to pay, so that those with the appropriate expertise can provide what children need?
What can be done?
The binary nature of these dilemmas surely points to an immature context for realising children’s rights. Thirty years is not long in which to come to terms with the digital revolution, especially as the pace of innovation is hardly slowing. In the offline world, societies have spent decades, centuries even, evolving a mix of design, regulation and social norms. We’ve only just begun that process in relation to the digital environment.
States are investing hugely in digital technology to compete economically, so there are clearly resources available if there is the political will. Society should now urgently call on all relevant bodies to take action. Some actions are fairly straightforward, though they will require political will and public funds:
1. Invest in education to teach children and parents/caregivers the critical knowledge and skills they need to operate as agents and rights holders in relation to the digital environment.
2. Ensure that state actions regarding the digital environment are underpinned by the meaningful participation of children wherever the consequences will affect them.
3. Empower young people to take responsibility where they can, for example by training and resourcing young ambassadors and peer mentors to support and help others in digital spaces.
4. Build expertise in digital matters into all state provision for children, including training the children’s workforce (teachers, clinicians, social workers, health visitors, etc.) regarding digital risks and opportunities.
Some actions are proving more controversial, because they require a pro-rights approach from government, because they demand action from those who think children’s needs are nothing to do with them, and because they require a new approach to the profitability of digital businesses:
5. Require child-rights impact assessments before digital innovations are developed, to inform their design and deployment, including training engineers, computer scientists and technologists in ethics and child rights.
6. Keep children’s formal education and their health provision free from commercial interests, and ensure they can access free and unmonitored spaces for play, autonomous action and development.
7. Enforce high standards of data protection, and scrutinise public and third sector data-sharing partnerships with commercial actors with a view to children’s best interests.
8. Apply laws against discrimination to organisations that use algorithmic decision making (work places, universities, insurance companies, etc.) to eliminate bias and ensure accountability and redress.
9. Ensure state funding to protect children online, while also ensuring that such protections do not violate children’s rights to freedom of expression, information, privacy and participation.
The British Academy has undertaken a programme of work that seeks to re-frame debates around childhood in both the public and policy spaces and break down academic, policy and professional silos in order to explore new conceptualisations of children in policymaking. Find out more about the Childhood Policy Programme.
[i] Livingstone, S., & Bulger, M. (2014). A global research agenda for children’s rights in the digital age. Journal of Children and Media, 8(4), 317–335.
[ii] Originally proposed by Marc Prensky, this concept has since been critiqued for its simplistic opposition between adults and children, its overestimation of children’s expertise and its consequent tendency to undermine efforts to support them. Helsper, E., & Eynon, R. (2010). Digital natives: Where is the evidence? British Educational Research Journal, 36(3), 502–520.
[iii] Livingstone, S. (2019) Are the kids alright? Intermedia, 47(3): 10–14. Retrieved from https://www.iicom.org/intermedia/intermedia-oct-2019/are-the-kids-alright; UNICEF (2017) State of the World’s Children: Children in a digital world. New York: UNICEF.
[iv] OHCHR (2014), Committee on the Rights of the Child: Report of the 2014 Day of General Discussion ‘Digital Media and Children’s Rights’. Para 85, at <www.ohchr.org/Documents/HRBodies/CRC/Discussions/2014/DGD_report.pdf>
[v] Livingstone, S., Carr, J., and Byrne, J. (2015) One in three: The task for global internet governance in addressing children’s rights. Global Commission on Internet Governance: Paper Series. London: CIGI and Chatham House. Retrieved from https://www.cigionline.org/publications/one-three-internet-governance-and-childrens-rights
[vi] The digital — itself not easy to define since innovation is continual — converges the once-distinct phenomena of mass media, computing and information systems, resulting in networked media, user-generated content, smart devices and environments, data analytics, artificial intelligence, virtual reality and more. Lievrouw, L., and Livingstone, S. (2009) Introduction. In L. Lievrouw and S. Livingstone (Eds.), New Media. Sage Benchmarks in Communication (xxi-xl). London: Sage. Retrieved from http://eprints.lse.ac.uk/27104/
[vii] Lupton, D., & Williamson, B. (2017). The datafied child: The dataveillance of children and implications for their rights. New Media & Society, 19(5), 780–794.
[viii] Zuboff, S. (2019) The age of surveillance capitalism. London: Profile Books.