Children and AI: ‘We have to think very carefully about the ethical boundaries’

Stuart Dredge
Published in ContempoPlay
Jul 4, 2018 · 7 min read
Maddie Moate, Elena Sinel and Lee Allen

Artificial intelligence is constantly in the news in 2018, for better or worse. A pair of panel sessions at the Children’s Media Conference in Sheffield today explored what AI and other new technologies mean for kids.

The first panel — ‘It’s Alive!’ — took a broad look at AI and children, with speakers including Elena Sinel from TeensInAI, which works with young people to explore AI; Lydia Gregory from tech startup FeedForward AI; Lee Allen from digital agency TH_NK; and Martyn Farrows from Soapbox Labs, which makes voice-recognition technology for children. The moderator was TV presenter and educational YouTuber Maddie Moate.

Allen noted that AI development has gained pace since 2011/12, even if the underlying technologies have been around much longer. And it is technologies in the plural: computer vision, speech recognition and natural language processing, among others. Gregory and Allen agreed that there are lots of opportunities to do interesting things with AI in children’s entertainment – for example, services like Netflix making sense of all the data they hold on TV shows, films and each viewer’s preferences in order to make smarter recommendations.

Should AI be used with children though? “We understand that AI is not witchcraft: there’s no magic going on here. It’s all data-driven,” said Farrows. “The issue there is that not all data is created equally. When we’re talking about data, particularly for kids, a lot of that data will be personal data. We have to think very carefully about the ethical boundaries of how we process that data.”

Smart speakers like Amazon’s Echo and Google Home are examples of devices that record people’s voice commands – children included. “The ethical boundaries of what happens to their data are really interesting, in that context,” he said. “We’re creating a digital footprint for children with voice. It doesn’t matter if it’s Alexa or Google Home… we’re interpreting that information and then targeting them potentially with ads that are suited to those preferences… There’s ethical boundaries that mean we really have to think about what we do with that data, once it’s collected.”

Moate gave the example of a child asking Alexa a question about Disney’s Frozen, and the risk that, having been categorised as a Disney fan, they are then served up more of the same in future rather than something they haven’t asked for yet but might be interested in.

Gregory talked about the importance of knowing what you mean when you talk about AI. “AI is a philosophical concept that is centuries old: how do we make something that we would recognise as having some kind of human intelligence?” she said, before pointing out that a lot of what people now describe as AI is really machine learning (crunching enormous amounts of data and teaching a system to make sense of it), whether that’s Alexa acting on voice commands or Netflix suggesting new shows to watch. So talk of humanoid robots that might take over the world is “unhelpful”.

Lydia Gregory and Martyn Farrows

Sometimes this kind of AI can go wrong, though: Farrows showed an infamous video of a child asking Alexa for something, being misheard as requesting porn, and the speaker starting to list sex toys before the panicked parents told it to stop.

“We’re not at the conversational stage yet. It’s very command-based: instructing Alexa to do something,” said Allen. “I think we’re going to get into a very interesting place where it will be more than just voice. It will be able to recognise facial expressions and other context.”

Sinel talked about politeness. “When we instruct Alexa to do x, y and z, we are teaching kids to instruct, and not to communicate. I question the values of those who develop those algorithms in relation to kids, and how they relate to the values of my family – when I teach my kids not to instruct, but to say please and thank you!”

“We’re very early on a journey,” added Farrows.

“The thing that worries me is that we end up conflating machine-learning and AI with the products that those companies [Amazon and Google] have produced. And actually the technology should be more widely applicable,” said Gregory. She’s keen for other kinds of companies to get access to the tech and data-sets used to train it.

She also addressed a question about whether AI will replace human creators in the children’s media world, prompted by a recent project in which AI was used to create a brand-new fairytale in the style of the Brothers Grimm. “The exciting thing is that there are lots of opportunities for that to be a very engaging and educational experience: to get children to engage with those stories and write their own,” she said. “These creative tools, there’s a lot more opportunity for them being educational. And even in music, having ideas-generators for children… AI can be involved in the creative workflow, but it’s doing it in a different way.”

Allen added his views. “There’s always going to be that need for human and machine collaboration,” he said. An algorithm may be able (for example) to put a video highlights reel together, but a human then needs to check it to see if it’s good. Gregory also suggested that AI will play an important role online, in spotting nudity, swearwords and other inappropriate content within videos and photos that might be seen by children – for example on YouTube.

“It’s not only children that we need to educate. It’s parents, society, the governments,” added Sinel. TeensInAI gets 12–18 year-olds to take part in AI hackathons, where they discuss the ethics and build their own projects. “We discuss all the implications, particularly the ethical implications.” She noted that a key problem with many current examples of AI is that they were developed by a fairly narrow range of people – white men in the technology industry – something TeensInAI is actively trying to change.

Maddie Moate and Elena Sinel

Apparently, one of the most common bits of feedback from TeensInAI’s hackathon participants is this: “Why can’t school be like hackathons, where we learn to solve problems and where we lead the process?” And those problems can be important ones: one recent hackathon focused on mental health, for example.

“It’s about agency: about equipping them with that really powerful ability to solve real problems. We show that age is really not a barrier. They could be 12, they could be 15, but they can still solve a real problem, whether it’s climate change, mental health or equality,” said Sinel.

The conversation returned to AI-powered children’s toys: are these dangerous if younger kids find it hard to distinguish between machines and humans? Farrows suggested that children do learn the boundaries as part of their developmental processes, but that the companies making (for example) virtual characters or Alexa skills for kids have to be responsible.

“We have to decide what are the ethical boundaries that we are comfortable in terms of designing experiences for kids using this technology. Nobody else is going to do that for us,” he said. Allen agreed. “You build trust through conversation. Spoken word can connect to emotions, and what we’re seeing outside of the children is actually companionship. People being able to talk to these different devices, and building a relationship… which can be inclusive. But it is about a collective responsibility.”

Sinel said that she’s uncomfortable with AI-driven children’s toys if she doesn’t know how data on children’s interactions with them is stored; one such toy, My Friend Cayla, has already been banned in Germany on those grounds.

Gregory said that the way the industry thinks about this and the way a parent thinks about it may be different, citing Harry Potter: “Never trust anything where you can’t see where it keeps its brain… I would teach my children to be sceptical!” That doesn’t mean banning children from using AI-powered devices or assistants, but rather encouraging them not to trust them blindly.

One audience member asked about how soon we’ll have properly conversational AI. Farrows said it’s already happening – for example with chatbots – but just not quite yet with voice. “It’s not that it’s coming: we’re already there,” he said. And Gregory finished off by calling, again, for the data-sets behind all this to be opened up to more companies than just the Amazons and Googles of the world.

Disclosure: I’m on the advisory committee for the Children’s Media Conference, and executive-produced the ‘It’s Alive!’ panel — although the hard work on actually putting it together was done by session producer Lucy Gill.
