Balancing Child Protection and Digital Rights

Image credit: Two Prostitutes by Fung Chin Pang

Protecting children from physical and sexual abuse is an objective that enjoys near-universal agreement across society. But because the public is predisposed to strongly support laws and policies that aim to protect children, such measures tend not to receive the scrutiny they deserve. The result is that child protection has been used as a pretext for poorly thought-through policy proposals, by both governments and private companies, with the potential to decimate our digital rights.

A current example is the U.S. Senate’s proposed Stop Enabling Sex Traffickers Act (“SESTA”) and its companion House bill, which would chip away at the safe harbor protections for the platforms that form a vital foundation of our open and innovative Internet. I am not an expert on sex trafficking and can’t say what the solution to this horrible problem may be (though experts in the field argue that many statistics about it are inaccurate, and that weakening safe harbor protections would do victims more harm than good).

What can be said, however, is that SESTA would cause significant harm to those who have nothing to do with sex trafficking, impacting free speech and innovation across the Internet. And this is only the latest in a long line of measures taken in the name of child protection, by both governments and private companies, that actually have much more sweeping ramifications for users. So much so that it’s worth taking a bigger-picture look at how child protection and digital rights often seem to be at odds, and why they don’t have to be.

Criminal Justice

Passing and enforcing laws to protect children from abuse is a legitimate and important function of government. Those who use the Internet to perpetrate crimes against children such as the production and distribution of child pornography deserve to be punished and to forfeit some of their rights at the prison gates.

But as a matter of principle, those accused of such crimes also deserve the same due process protections as any other criminal defendants, and this hasn’t always been the case. A recent example is the FBI’s misuse of a single warrant as authority to hack into the computers of over 1,000 Tor browser users suspected of accessing a child pornography website. In doing so, the FBI exploited a vulnerability that placed many innocent Tor users at risk.

Laws criminalizing child abuse are also often overbroad. For example, the Protecting Against Child Exploitation Act of 2017, which passed the House of Representatives in May, is targeted at producers of child pornography but would also criminalize “sexting” between teens, slamming them with a mandatory minimum prison term of 15 years and lifetime registration as sex offenders.

Other laws impose unreasonable conditions on registered sex offenders even after they have paid their debt to society, including by limiting their ability to use the Internet. A North Carolina law that sought to bar registered sex offenders from using social media was struck down by the Supreme Court earlier this year as an overbroad assault on the ability of Americans to use the Internet for legitimate purposes. However, an Illinois law requiring registered sex offenders to report all their Internet activity to the government was upheld on appeal.

Virtual Child Porn

When it comes to censorship, despite my belief in freedom of expression, I agree that videos and images of children undergoing sexual abuse should be removed at the source, and that all countries should have laws to enforce Internet companies’ compliance with removal requests, where it is technically possible to do so without also removing lawful speech.

But even here there are grey areas, including the case of “virtual” child porn, which harms no real child in its production. In the 2002 case of Ashcroft v. Free Speech Coalition, the Supreme Court struck down a law that treated sexual images of virtual minors as child pornography. However, possession of such images can still be prosecuted under obscenity laws, as in the 2010 prosecution of a collector of Japanese manga (comic books). And in 2001, a U.S. man was convicted under a state obscenity statute merely for writing sexually explicit stories about children, even though he never showed them to anyone else.

With developments in robotics, virtual reality (VR), and artificial intelligence (AI), the permissible bounds of fantasy are about to get a whole lot more complicated. Earlier this year, a paper by the Foundation for Responsible Robotics, Our Sexual Future With Robots, took an early stand against the use of “underage” sex robots by pedophiles. That stand has been questioned by sexologists, who argue that more evidence is needed about whether the use of such virtual representations of children would lead to real-life offending, or could have the opposite effect.

Even though this debate is only just getting started, people are already being prosecuted. A man who ordered a more primitive version of an underage robot for his own use has been jailed in the United Kingdom, and a Canadian stands charged with the same offense, on the theory that a childlike sex doll is a form of child pornography. For most of us, envisioning the use of sex robots or virtual reality environments by pedophiles is difficult to stomach. But if it might help to divert them from harming an actual child, should their use of these technologies be banned?

If it should be, what other uses of technology to commit “virtual” offenses should be criminalized? Should online sexual “age play” between consenting adults who merely role-play as children, or discussions of sexual fantasies involving children, also be criminal offenses? While we haven’t yet reached this point in the United States, in 2012 a man in England was indeed convicted for the thoughtcrime of conducting a private fantasy chat online with other adults.

Private Censorship

I readily accept that such chats and discussions are a form of content that many find disgusting and abhorrent, and that Internet intermediaries should be entitled to disallow such content from their platforms. The terms of service of many mainstream online platforms, such as Reddit and Tumblr, allow users to share lawful content relating to children and sex in either a fictional or non-fictional context; for example, discussion of the novel Lolita, or sexual fan fiction set in the Harry Potter universe. I don’t take the position that all platforms should allow such content; it’s really a matter for them to decide, and there would be nothing wrong with a platform deciding to provide an online space where such content is not allowed.

However, platforms that do allow the discussion of such topics ought to comply with the Manila Principles on Intermediary Liability, which include setting out their policies clearly and giving users a hearing before their content is taken down. Because of the stigma surrounding the use of the Internet by pedophiles, and the conflation of their condition with active child abuse, this doesn’t always happen.

For example, earlier this year an admitted pedophile using the pseudonym Ender Wiggin had his Twitter account suspended and finally terminated, along with its history of 30,000 tweets and over 500 followers. This would be understandable if he had used his account to promote child abuse, but he used it for exactly the opposite purpose: to argue that those who have an unchosen and unwanted sexual attraction to children can and should avoid offending. Unable to obtain an explanation of which Twitter policy he had infringed, Mr. Wiggin has since created a new account, which remains online as of this writing.

Until now, a good solution for those wishing to engage in unpopular speech, without the risk of private censorship by a major platform, has been to host that speech themselves. Thus, one impetus for the creation of the independently owned fan fiction site Archive of Our Own (AO3) was the earlier blogging site LiveJournal’s 2007 mass purge of accounts dealing with underage sex and incest. (Importantly, AO3 also tags such content so that those who don’t want to come across it can easily avoid it.) Similarly, non-offending pedophiles such as Mr. Wiggin have a website, Virtuous Pedophiles, aimed at helping its members avoid ever committing abuse or accessing child pornography. Certainly, a lot of people would like to see this website disappear from the Internet, just as they would like to see pedophiles (even non-offending ones) disappear from the face of the Earth. But from its formation in 2012 until now, the site has remained secure against legal and technical attack.

On the other hand, until last week the same was true of the Daily Stormer, the neo-Nazi website whose domain registrars and content delivery network finally bent to public pressure and terminated their services, resulting in the site’s banishment from the open web. Virtuous Pedophiles uses the same content delivery network, Cloudflare, that the Daily Stormer did to protect itself against Distributed Denial of Service (DDoS) attacks. In the wake of Cloudflare’s reversal of its position on serving the Daily Stormer, the company now stands weakened against future calls to deny service to other sites on the basis of their content.

Such third-party pressures have also resulted in the removal from the web of other legal content related to underage sex. For example, earlier this year the Electronic Frontier Foundation (EFF) reported that the adult website FetLife had been pressured by payment intermediaries to disallow fantasy depictions or descriptions of underage sex and incest. In 2012, EFF successfully fought similar private censorship by a payment processor of the Nifty Archives, an online erotic fiction site that included some stories with underage characters.

Even if you never intend to read such stories or engage in such fantasies, and even if you find them disgusting, such decisions set a dangerous precedent for the private censorship of other kinds of unpopular but legal content. This month’s private censorship of the Daily Stormer has only further fractured the fragile consensus that private operators such as domain name registrars and content delivery networks should remain neutral about the content that their services help to make available online.

How Child Protection, Terrorism, and Hate Speech Are All Linked

The way that we treat unpopular speech, and the most hated criminal defendants, tells us a lot about how digital rights for the rest of us are likely to fall under attack, whether in the future or in countries that don’t have a First Amendment or have less regard for the rule of law.

Some people may not like the idea that members of the Virtuous Pedophiles forum should have a right to privacy, such as the ability to use the Internet pseudonymously. But if they are denied it, how is that different from denying the same privacy rights to visitors to the DisruptJ20 online forum, whose details were sought under warrant by the Department of Justice this month (the DOJ subsequently backed down)?

Some people also may not feel comfortable about pedophiles being able to discuss their sexual attractions and fantasies on the web. But if pedophiles are disallowed from doing so, then the freedom of non-pedophiles, from fetishists to fanfic writers, illustrators, and abuse survivors, on forums such as Tumblr, Reddit, FetLife, AO3, and Nifty Archives, may be the next to come under fire.

Hopefully, most people deplore the misuse of the Tor network and secure messaging applications by those who spread illegal child pornography, and I wish that there were a way to stop them from doing so (there isn’t). But if those applications are banned or restricted, we also harm the many innocent users, such as democracy and human rights activists and journalists, who use Tor for lawful purposes: avoiding tracking, breaking through local censorship restrictions, and communicating with sources and dissidents.

These are not just hypotheticals. PhotoDNA, the content-scanning technology that ISPs and platforms initially used only to check uploads against a database of known child pornography, has since been extended to scanning for terrorist and violent content, and has been proposed for use in copyright enforcement. Child porn filters such as the United Kingdom’s Cleanfeed have also been put forward as a model for blocking copyright-infringing websites. And although SESTA is being sold as a child protection measure in the United States, the fact that it widens the liability of Internet intermediaries for user-generated content makes it a close analog to proposals in France, the United Kingdom, and Germany to make platforms liable for hate speech uploaded by users.
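
The technical reason this mission creep is so easy is that hash-list filtering is content-neutral: the matching code never knows why a hash is on the list. The minimal Python sketch below is a hypothetical illustration of that point, not PhotoDNA’s actual (proprietary) algorithm; real systems use perceptual hashes that survive resizing and re-encoding, while a plain SHA-256 digest stands in here just to make the matching step concrete. The names BLOCKLIST and is_blocked are invented for the example.

```python
import hashlib

# Hypothetical hash-list filter, for illustration only. Real systems
# like PhotoDNA use proprietary perceptual hashes that tolerate
# resizing and re-encoding; SHA-256 stands in here to show the logic.
BLOCKLIST = {
    # Digests are supplied by whoever maintains the list; the filter
    # itself never knows what category of content they represent.
    "9f2feb0f1ef425b292f2f94bf8bbe1511b7b6f4780c8e0dd1a0a8dce45fbe1cc",
}

def is_blocked(upload: bytes) -> bool:
    """Return True if this upload's digest appears on the block list."""
    return hashlib.sha256(upload).hexdigest() in BLOCKLIST
```

Swapping in a terrorist-content or copyright hash list requires no change to this code at all, only a different set of digests, which is exactly why a filter built for one purpose is so readily repurposed for others.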

In other words, the responses of platforms and policymakers to all of these unpopular, subversive, and disturbing uses of the Internet and technology are interlinked. Moreover, the division between those uses that are lawful and acceptable, and those that aren’t, is neither as black and white nor as immutable as it might first appear.

Balancing child protection and digital rights is not an easy task, and certainly not a comfortable one. The easiest part of it is this: nothing should ever be allowed to derogate from our society’s strong condemnation of child sexual abuse.

But neither should we allow the freedom of expression and privacy rights of even the most hated Internet users to be curtailed when they are not breaking the law or causing harm. The history of child protection being misused as a justification for overreaching and repressive Internet laws and policies shows these risks very clearly. Online regulations targeting child abusers very rarely hit their mark without causing collateral damage to the digital rights of us all. Those in government and in companies need to be thoughtful about the policies they propose, and to ensure that those policies are as narrowly tailored as possible, to avoid disrupting the free speech and privacy rights of all Internet users.