How Artificial Intelligence Can Help Protect Children

Rustam Aliyev
Purify Foundation
Nov 26, 2018 · 9 min read

Family, school, street. Traditionally these three social concepts have had the most influence on the development of children and adolescents. But times are changing rapidly. The internet has long since become the fourth major influence, with its impact growing immensely over the past decade.

While parents can choose the school their children attend and oversee interactions on the street, there is no doubt that the internet is much harder to supervise.

When we think about the internet and children, many of us would immediately consider the danger of exposure to pornography. But the risks of online activity can be far deeper and more diverse than this.

Stark Statistics

The statistics on this matter are quite stark. For example, 53% of 11–16-year-olds in the UK have seen explicit material online. Sexting has also become more common among young people, with 14.8% of youths reporting sending sexts and 27.4% receiving them.

And a 2018 NSPCC survey in the UK contained some alarming findings: on average, one child per primary school class has been sent or shown a naked or semi-naked image online by an adult, while 1 in 50 schoolchildren surveyed had sent a nude or semi-nude image to an adult.

Children sending and receiving sexual messages, NSPCC and London Grid for Learning DigiSafe (2018)

Recent studies have shown that technology has been a major enabler of child abuse. Notably, live-streaming and sexting provide abusers with the ability to coerce children into extreme forms of abuse.

One prominent example came to a head in February of this year, when Cambridge University graduate Dr. Matthew Falder pleaded guilty to 137 offences, many of them aimed at minors. Falder was sentenced to 32 years in prison, a case that underlined the scale and scope of the online threats children potentially face.

In this context, the Internet Watch Foundation (IWF) is now warning the public about the danger of allowing children unrestricted access to webcams and mobile phone cameras, deeming this a severe threat to children.

Yet despite the evidence that this is a critical safety issue for children, many adults remain sadly unaware of the level of threat that online abuse entails, and many parents simply do not know these new dangers exist.

Addressing Abuse

In an attempt to address the situation, governments in developed countries are looking to regulate safeguarding standards for ISPs, social media firms, and adult content providers. It is hoped that this will help create a more secure environment.

For example, in the UK the government is trying to enforce age verification on porn websites. Ironically, the only available age verification service was developed by MindGeek – a company which owns major porn websites such as Pornhub, RedTube and YouPorn.

On the one hand, this will partially restrict children from accessing popular porn websites. On the other hand, it will give MindGeek access to the private information and online activity of 20 million citizens. Probably not a situation that most people would consider ideal, given MindGeek’s track record of data breaches!

Meanwhile, in the US a proposal has been made to force device manufacturers to pre-install filters that block web pages containing sexual content. The bill, which was later dropped, required users to pay a $20 fee per device in order to remove the filters.

However, all of these filtering solutions, whether merely proposed or actually available, are based on the same naïve principle — restricting access to known harmful websites. Considering the severity and complexity of the problem that is just not enough.

But there is so much more that can be done. Although the technology field undoubtedly poses many problems in this area, it can also be a major part of the solution.

Advances in AI and mobile hardware provide us with an unprecedented opportunity to build intelligent child-safeguarding solutions running on-device. Meanwhile, the highest possible level of privacy can also be incorporated, reassuring users who are rightly concerned about this issue.

Fragile Filters

Existing web filtering tools are known to be ineffective at dealing with harmful content, and the growing adoption of HTTPS and other forms of encryption is making traditional filtering weaker still: an encrypted connection hides page content from any intermediary, leaving only the domain visible.

Percentage of pages loaded over HTTPS in Chrome, Google Transparency Report

On today’s largely encrypted web, filtering tools therefore rely on domain blacklists, which are consulted on every request. This means restrictions can only be applied at the domain level: a site is either blocked wholesale or not at all. And while blacklists are updated periodically, this does not happen in real time, which inevitably leads to under-blocking.
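
A minimal sketch of that principle, with an illustrative blacklist, shows why this approach is so coarse: under encryption the filter sees only a hostname, never the page being requested.

```python
# Minimal sketch of a domain-blacklist filter, the principle behind most
# existing tools. The blacklist entries are illustrative. Under HTTPS the
# filter observes only the hostname (via DNS or the TLS SNI field), so
# blocking is all-or-nothing per domain.
BLACKLIST = {"example-adult-site.com", "another-blocked-site.net"}

def hostname_allowed(hostname: str) -> bool:
    """Return False if the hostname or any parent domain is blacklisted."""
    parts = hostname.lower().split(".")
    # Check "a.b.example.com", then "b.example.com", then "example.com", ...
    candidates = {".".join(parts[i:]) for i in range(len(parts) - 1)}
    return BLACKLIST.isdisjoint(candidates)

# An encrypted request for https://social-site.com/some/harmful/page
# exposes only "social-site.com" to the filter: either the whole domain
# is blocked, or the harmful page loads untouched.
print(hostname_allowed("www.example-adult-site.com"))  # False
print(hostname_allowed("social-site.com"))             # True
```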

Another real problem with supervising children’s behaviour online is that many young people are considerably more tech-savvy than their parents. This means they can easily bypass what are often crude and unsophisticated attempts to monitor and restrict their conduct.

Instructions for bypassing these filters are plentiful online, whether by using a VPN service, the Tor browser (which also provides access to the notorious dark web), a proxy server, DNSCrypt, or simply by modifying DNS settings.

And, increasingly, explicit content is being shared over messaging and live-streaming apps. Facebook, Snapchat, Instagram, Discord, TikTok, Periscope and many other general-purpose services are usually underestimated and left unblocked.

Safety cannot be guaranteed even with internet access cut off altogether. FireChat and similar mesh network chat apps allow strangers to interact and share photos without any data connectivity, and there are reported cases of people receiving explicit images from strangers via Apple AirDrop. These apps, enabled by a mix of Bluetooth and peer-to-peer Wi-Fi technologies, do at least limit interaction to short range.

AI Advances

Since 2012, image classification algorithms based on deep learning have continually improved in accuracy. This has led to their adoption across a wide range of real-world problems, from retail and automotive to healthcare and agriculture.

Analysis of deep learning architectures (2018)

We have also seen the emergence of mobile-optimised architectures such as SqueezeNet and MobileNetV2, which allow efficient on-device classification and detection in real time with high accuracy.

Researchers have been trying to tackle automatic detection of explicit images for decades, and the new neural network architectures have brought progress in this field as well. In 2016, Yahoo released Open NSFW, a pre-trained model for detecting pornographic images based on the ResNet-50 architecture.

Another project with a similar goal provides pre-trained models using the SqueezeNet architecture for faster classification: it takes only around 10 ms to classify an image with this model on a modern CPU.

Deep Learning Solution for Detecting NSFW Images, Yahoo! Engineering (2016)
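
The inference step itself is small. Below is a hedged sketch in PyTorch; the model file nsfw_squeezenet.pt, the two-class output layout and the 0.8 threshold are all assumptions for illustration (Open NSFW itself shipped as a Caffe model), but any of the pre-trained classifiers above could slot in the same way.

```python
# Sketch of classifying one image with a pre-trained NSFW model.
# "nsfw_squeezenet.pt" is a hypothetical TorchScript export; the
# [safe, nsfw] output layout and the threshold are assumptions.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("nsfw_squeezenet.pt")
model.eval()

def is_explicit(image_path: str, threshold: float = 0.8) -> bool:
    """Classify a single image so it can be blocked before rendering."""
    batch = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return probs[1].item() >= threshold
```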

Mobile hardware has also been evolving in parallel with neural networks. Many mobile devices are now being fitted with dedicated and powerful AI chips.

Apple first introduced a dedicated neural network accelerator, called the Neural Engine, in the A11 chipset that powers the iPhone X. This chip is capable of performing 600 billion operations per second. With the new A12 chip designed for the iPhone XS, Apple boosted its Neural Engine to 5 trillion operations per second, a more than 8-fold increase.

Android hardware suppliers have also been working on their own AI accelerator chips. Qualcomm Hexagon, Huawei NPU and MediaTek NeuroPilot have made their way into a number of Android devices, to boost on-device neural networks.

Blocking and Inference

By utilising these resources, it should be feasible to perform inference when images are loaded for rendering, thus blocking harmful images before they are displayed.

The implementation could resemble on-access virus scanning, where the scanner runs continually and activates every time a program accesses an image. An alternative is certification: applications certified as “child safe” would be required to load content through a set of safeguarding APIs provided by the operating system.
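
As a conceptual sketch, such a certification API might look like the following; SafeImageLoader and BlockedContentError are hypothetical names, and a real version would live inside the OS image-decoding pipeline rather than in application code.

```python
# Conceptual sketch of the on-access pattern: every image is vetted
# before an app may render it, much as an on-access virus scanner vets
# files before they are opened. All names here are hypothetical.
class BlockedContentError(Exception):
    """Raised when parental controls refuse to decode an image."""

class SafeImageLoader:
    def __init__(self, classifier, parental_controls_enabled: bool):
        self._classifier = classifier  # e.g. the is_explicit() sketch above
        self._enabled = parental_controls_enabled

    def load(self, image_path: str) -> bytes:
        """Return image bytes for rendering, or refuse to decode them."""
        if self._enabled and self._classifier(image_path):
            raise BlockedContentError(image_path)
        with open(image_path, "rb") as f:
            return f.read()

# A certified "child safe" app would be required to route every image
# through this API instead of decoding files directly.
```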

Existing parental control mechanisms in iOS, Android and Windows can then be extended to allow parents to control on-device filtering.

This on-device AI approach to filtering also has a greater level of privacy. Since detection happens on the device itself, no browsing history has to be analysed by a third party, and no personal information must therefore be shared with a centralised entity for age verification purposes. Effectively, the whole solution is fully decentralised.

Protection Beyond Explicit Images

Video analysis is another potential line of defence against online abuse. It is a more tedious and onerous process, but it can still be achieved.

Most online video content has a frame rate of between 24 and 30 frames per second (fps), with some videos going up to 60 fps. A simple and effective solution would therefore be to analyse frames in real time at a chosen sampling rate, for example 1 fps. This gives the device 200–400 ms per sampled frame to analyse it, detect explicit material, and prevent unwanted scenes from being displayed.
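
A rough sketch of this sampling loop is below, using OpenCV for illustration; is_explicit_frame is an assumed per-frame classifier, and on a phone the hook would sit in the video decoder rather than read from a file.

```python
# Sketch of screening a video at ~1 fps with an assumed frame
# classifier. OpenCV's VideoCapture stands in for a decoder hook.
import cv2

def screen_video(path: str, is_explicit_frame, sample_fps: float = 1.0):
    cap = cv2.VideoCapture(path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(round(video_fps / sample_fps)), 1)  # e.g. every 30th frame

    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % step == 0 and is_explicit_frame(frame):
            cap.release()
            return frame_index  # first offending sample; halt playback here
        frame_index += 1

    cap.release()
    return None  # nothing detected
```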

A few frames may slip through before an explicit scene is detected, but this method is still far better than nothing, and it will only improve as AI continues to develop.

Demo: Detecting explicit video content in real-time

More advanced architectures for explicit video detection can also incorporate motion data to improve accuracy.

Video pornography detection through deep learning and motion information. Perez, Neurocomputing (2016)

Camera input is another crucial place where content filtering has to happen. Whether the context is sexting or live-streaming, parents should be able to restrict the device from transmitting harmful visual content. The techniques described above for video filtering apply equally to camera streams.

It is also important to note that harmful content goes way beyond pornography and explicit images. Cyberbullying is a significant issue, and this can be conducted via chatrooms, social media, and a wide variety of other communication sources. Indeed, very recently Prince William chose to speak out on the dangers of cyberbullying on social media, during a trip to the BBC.

Although this topic deserves a post of its own, it is worth noting that social and gaming platforms have been applying AI-based automatic content moderation for quite some time, and there is nothing to stop us adopting similar models for moderation on the device itself.
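
To make the idea concrete, here is a deliberately toy sketch of on-device message screening; the four training examples are illustrative stand-ins, and a real system would use a compact neural model trained on a large moderated corpus.

```python
# Toy sketch of on-device text moderation with scikit-learn. The
# training examples are purely illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are so stupid nobody likes you",  # bullying (illustrative)
    "everyone at school hates you loser",  # bullying (illustrative)
    "great game last night, well played",  # benign
    "see you at practice tomorrow",        # benign
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(train_texts, train_labels)

def flag_message(text: str, threshold: float = 0.7) -> bool:
    """Flag a message for hiding or parental review before display."""
    return model.predict_proba([text])[0][1] >= threshold
```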

Limitations

There are also some known problems and limitations with such systems. Firstly, we could see adversarial attacks designed to disable them: an explicit image imperceptibly modified so that it bypasses the on-device blocker. A number of effective defence mechanisms have already been developed, however, and these will continue to evolve in the coming years.

Example adversarial attack from Explaining and Harnessing Adversarial Examples
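
The fast gradient sign method (FGSM) from that paper illustrates how cheap such an attack can be; a minimal sketch, assuming any differentiable image classifier such as the NSFW model above:

```python
# Sketch of FGSM: nudge each pixel by +/-epsilon along the sign of the
# loss gradient, producing an imperceptible change that can flip the
# classifier's output. `model` is any differentiable image classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.007) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` ([1, 3, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```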

The social aspects of such blocking also have to be studied carefully. In some cases, blocking a victim’s access may only exacerbate the situation.

And it’s also important to acknowledge that not all mobile devices are capable of handling these workloads. Disadvantaged children and children in poorer regions of the world might not benefit from this protection immediately.

Other, more minor problems include reduced battery life and some impact on device performance. But these are teething troubles that should dissipate over time, as AI and related technology become more sophisticated and more deeply ingrained in the technology landscape.

Conclusion

The director of vulnerabilities at the National Crime Agency (NCA), Will Kerr, recently told the UK parliament that “there are thousands of children being unnecessarily exploited and abused because the tech sector has significant responsibility and the ability to stop far more [abuse] at source”. It is clear that the main “source” is the device itself, and today’s high-end devices already have the capacity to protect children.

Such child protection systems would need to integrate deeply with devices’ operating systems, so it will be difficult to implement any such protection without close involvement from the tech giants. Apple, Google and Microsoft supply the operating systems for 97% of all devices worldwide and must be engaged in this process. These companies also have a moral obligation to explore this, or similar solutions, as a matter of priority.

The statistics suggest, sadly, that this struggle will continue indefinitely. But today we can already build the technology to provide parents with better control mechanisms over the fourth major influence — the internet.
