Apple Requires Telegram to Filter Child Porn in Chat Channels

Jeremy Malcolm
Feb 7, 2018


The private messaging app Telegram was briefly removed from Apple’s App Store last week. Although no explanation was given at the time, it now transpires that the removal was prompted by the discovery that the app was being used to share child abuse images. The app was returned to the App Store after Telegram was updated with new measures to prevent such images from being shared.

Although details of what these measures were are sparse (I have written to the Telegram developers to seek further details), there is good reason to assume that Telegram is now scanning content posted to channels (public chat groups) against a set of hashes of known child abuse images. I think this is what happened because one-to-one and group conversations in Telegram are encrypted, and so — unless Telegram suddenly introduced a backdoor into its product — it would not be possible for it to scan images sent to those conversations.

I am wary of the use of automated filtering of content, in part due to the high risk of false positives and false negatives. An example of a false positive is when an AI attempting to identify nudity mistakenly flags a picture of a desert whose tones resemble human skin. An example of a false negative is when a system that works from hashes of known images, such as Microsoft’s PhotoDNA, fails (by design) to identify newly-produced images, which can create a false sense of security about the effectiveness of the product.
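To make that limitation concrete, here is a minimal sketch, in Python with a dummy hash set standing in for a real database, of the exact-match logic that hash-based scanning relies on. Real systems such as PhotoDNA use proprietary perceptual hashes rather than SHA-256, so this illustrates only the matching principle, not any vendor’s actual implementation.

```python
import hashlib
from pathlib import Path

# Placeholder set of SHA-256 digests standing in for a database of known,
# previously classified images. Real deployments use perceptual hashes,
# which tolerate re-encoding; a cryptographic hash like SHA-256 only
# matches byte-for-byte identical files.
KNOWN_IMAGE_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # dummy entry
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_image(path: Path) -> bool:
    """True only if the upload is an exact match of a listed file.

    By design this catches nothing new: a re-encoded, cropped, or freshly
    produced image yields a different digest, i.e. a false negative.
    """
    return file_digest(path) in KNOWN_IMAGE_HASHES
```

Even a single changed byte defeats an exact-match check, which is why hash-based filtering can only ever catch material that has already been identified and classified.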

But despite my hesitancy to endorse automatic scanning in general, there is an argument that it could be an effective and proportionate measure in cases where a file is an exact match of a file already found to be unlawful, provided that avenues remain for users to challenge the takedown of the content. The universally illegal and abhorrent nature of child abuse imagery may justify the use of automatic filtering of such content by cloud platform providers in some cases. But here are some criteria that I think should apply:

1. Criteria for Selecting Images Should be Transparent

When individual Internet platforms carelessly apply their own blanket rules about child nudity, the result can be censorship of material with legitimate historical, artistic, scientific, or cultural value. Examples include Facebook’s censorship of the iconic Vietnam war photo of napalm victim Phan Thi Kim Phuc, and Instagram’s removal of family photos of partially nude toddlers. That’s why many platforms rely on independent professional classification of child abuse imagery, to reduce the likelihood of such mistakes being made.

There are two main databases of child abuse images that I know to be used for automatic scanning by Internet platforms: one contains only the “worst of the worst” such images, identified by the National Center for Missing and Exploited Children (NCMEC); the other contains a broader set of images identified by the UK’s Internet Watch Foundation (IWF), which bases its classification on UK law and sentencing guidelines [PDF]. Many of the NCMEC images are gathered from police investigations of those prosecuted and convicted for possessing them, so at least a large subset of these images is definitively unlawful.

This is not true to the same extent of the IWF list, which there is reason to think may be over-broad. In 2008, British ISPs that subscribed to the IWF list blocked Wikipedia because it hosted a Scorpions album cover depicting a naked young girl with her genitals obscured — an image that was certainly tasteless, but that would not have qualified for blocking under the “worst of the worst” criteria applied by the NCMEC. The IWF itself rated the image a 1 on a 1 to 5 scale of offensiveness, and it ultimately removed the image from its blacklist.

Due to problems like this, it is best if automatic filtering of images is limited to a “worst of the worst” list of unlawful images, such as the list maintained by the NCMEC. This is not to say that platforms can’t remove less offensive images of child nudity that are believed to be unlawful and that are determined to infringe their terms of service. However, these more questionable or borderline images should be assessed manually, and users should have the right to have these content moderation decisions reviewed.

2. The Scope of Restriction of Content Should be Limited

In the case of the censorship of the Scorpions album cover that affected Wikipedia, it wasn’t simply the offensive album cover image that was listed on the IWF blocklist, but the page on which that image appeared, including the full text of the article, which there were no grounds to block. Worse, the way that the blocklist operated meant that Wikipedia was forced to deny anonymous editing rights to 95% of residential British Internet users.

For this reason I suggest that in cases where automated filtering is used to censor serious child abuse imagery, this should only apply to the actual images identified. If a broader scope of removal is required, for example extending to an entire forum in which such images are regularly shared, automatic blocking or filtering is not an appropriate mechanism to achieve this.

3. Users Should be Informed

Telegram is not alone in scanning images against a child pornography database. Other cloud platforms such as Google, Facebook, Twitter and Dropbox do the same thing. But they tend not to be very open about doing so, and Telegram is no exception. In its privacy policy, Telegram asserts “We never share your data with anyone,” and that “All data is stored heavily encrypted.” Its FAQ states, “All Telegram chats and group chats are private amongst their participants. We do not process any requests related to them.”

This is difficult to reconcile with the fact that messages sent to Telegram channels are not encrypted at all. That fact can be obliquely discerned from the Telegram FAQ where it says “channels, and bots on Telegram are publicly available,” but even this does not indicate that automatic scanning of messages will take place. Telegram, and other companies that perform automatic scanning of users’ content, ought to be upfront about this by clearly stating that fact in their terms of service. And Telegram in particular, which touts itself as being a private messaging app, ought to be clearer about the fact that messages sent to channels are not encrypted.

4. No Backdoors

Outside of public channels, as far as we know Telegram has not introduced a backdoor into the other forms of messaging that the app supports: ordinary chats (client-server encrypted) and “secret chats” (end-to-end encrypted). But governments around the world, including the United Kingdom, Russia, Australia, and the United States, would like to see messaging app developers do exactly that, and the use of such apps by child abusers (along with terrorists) is often used to justify these demands. In the interests of the many law-abiding individuals who use encrypted messaging apps to communicate securely, and whose right to do so has been recognized at the United Nations, these demands must be firmly resisted.

There should also be a way for users to verify that there are no backdoors in the messaging apps they use. Although Telegram is an open source project, the last updates to the app’s code repository on GitHub date back to December 2017. Hopefully Telegram will soon post the latest version of its code, including the changes that were introduced following its removal from, and reinstatement to, the App Store.
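As a small illustration of what that kind of public scrutiny looks like in practice, the sketch below (in Python, with a placeholder repository path rather than Telegram’s actual repository) asks the GitHub API for the date of the most recently published commit. Comparing that date with the release date of the shipping app is only a partial check: it shows whether the published source is current, not whether the shipped binary was built from it. But it is the sort of verification that open source makes possible at all.

```python
import json
import urllib.request

# Placeholder repository path; substitute the repository you want to audit.
REPO = "OWNER/REPO"
URL = f"https://api.github.com/repos/{REPO}/commits?per_page=1"

# The GitHub commits API returns the most recent commits first, so the
# first item tells us when the published source was last updated.
with urllib.request.urlopen(URL) as response:
    latest = json.load(response)[0]

print("Latest published commit:", latest["commit"]["committer"]["date"])
```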

5. No Filtering Mandate

Finally, although there are circumstances in which applications and platforms are entitled to perform automatic scanning and filtering of content for child abuse material, I don’t think that they should be under a legal obligation to do so. Such a mandate does not exist in the United States, and it has been considered and rejected in Europe, but mandatory filtering of such material does exist in some other countries, including South Korea. The existence of such a mandate lends weight to the demands of those seeking the imposition of content filtering obligations for other purposes, including the suppression of copyright infringement, as Europe is currently considering, and the surveillance of dissidents, as in China.

I would go further and say that not only should there be no legal mandate for content filtering, there are also concerns about such mandates being imposed by major intermediaries such as Apple as a condition of the acceptance of apps into its App Store. Apple’s removal of the Telegram apps from the App Store on the basis of “illegal content, specifically child pornography, in the apps,” sets a worrying precedent in this regard, since there is no sensible way to describe messages sent by users as being contained in the app. If Apple were to apply this policy more broadly, it would create significant new barriers for any messaging app, and indeed for many other categories of app that allow access to content created or transmitted by users.

Does this mean that we should accept the ongoing sharing of child pornography over encrypted communications apps? Absolutely not. But there are other ways to deal with this scourge aside from weakening encryption. For example, child protection groups such as Stop It Now (from the United Kingdom), Project Dunkelfeld (from Germany), and the Prevention Project (from the USA) focus on the demand side of the equation, assisting pedophiles to obtain the support that they need to avoid using child pornography, and otherwise to lead non-offending lives.

In comparison, the establishment of a norm of automatic filtering of child pornography in Internet applications and services would give us the worst of all possible worlds — overblocking of legal content, underblocking of unlawful content, and the establishment of an infrastructure for censorship that could be redeployed against protestors, dissidents, and ordinary law-abiding citizens.
