On Online Harms and Folk Devils: Careful Now

Vic Baines
Jun 24, 2019


I have been asked by a number of friends and colleagues what I think about the UK government’s recent White Paper on Online Harms. It’s a long read (over 100 pages in total), and I have read it carefully. I’m glad I waited a few weeks to comment. It has given me the chance to observe some of the other responses that should perhaps serve as warnings to policy makers and legislators. My critique here is inevitably more of a linguistic one than some others you will read, but I hope it will be illuminating for that.

My PhD is in classical rhetoric, so I always have one eye on how someone argues a case: the language they use, and the evidence they provide. When legislation is under consideration, it’s particularly important that any response is based on robust assessment of the threat or harm we’re seeking to reduce. Now, the White Paper released by the government just over a month ago does not explicitly claim to be that assessment. Nevertheless, any such document should be scrutinised for its analytical quality and argumentation. We would expect it to be authoritative, with a sound evidence base. My reading of the White Paper is that it is lacking in both respects. I have identified a number of points where responses appear to have been proposed based on dangerous assumptions that do not stand up to scientific examination. Before I respond to the paper itself, I should share with you some information on my professional background, and the context of the UK government’s proposed regulatory action.

I spent a decade working in national and international law enforcement, analysing cybercrime, child sexual abuse and fraud for Europol and SOCA, the predecessor of the UK National Crime Agency. I then spent four years working for Facebook as their liaison officer for law enforcement in Europe, the Middle East and Africa. In that time, I and my colleagues helped police in countries all over the world find missing children, recover others from offline sexual abuse, and prevent terrorist attacks.

You don’t hear about this in the media for a number of reasons. Firstly, global tech companies stopped talking about their work with law enforcement on safety issues following Edward Snowden’s revelations about PRISM. The fear was that any cooperation with government authorities would be interpreted as complicity in (to the best of my knowledge fictitious, or at least misrepresented) mass surveillance. The absence of easily accessible and publicly available information on the subject has left considerable room for governments and the media alike to make the assumption that tech companies do nothing and care nothing for keeping people safe on their platforms.

Secondly, whether coincidentally or by design, governments saw this as an opportunity to go on the offensive. This played out against, and perhaps thanks to, a series of terrorist attacks conducted in the name of Islamic State from 2015 onwards. I was on the receiving end of government criticism at the time, and I saw the same news cycle play out again and again in France, Belgium, Germany and the UK. An attack occurred, after which the media asked how the government could have allowed this to happen. Concerns were expressed over possible intelligence failures; in response the government blamed tech companies — either for enabling radicalisation, or for not identifying warning signs, or for not sharing data relevant to an investigation.

Now, the companies themselves would admit they had work to do on the first of these. On the second, it’s a much wider question of whether we the people accept proactive monitoring of our private messages, and opinions differ on that. The transfer of personal data from one jurisdiction to another is a matter of international law — mutual legal assistance (MLA) to be precise, not companies’ unwillingness to help. But consistently, you will see it presented as the latter.

At least since the Intelligence and Security Committee’s 2014 report into the terrorist murder of Lee Rigby, we have become accepting of the idea that social media companies kill people. We are routinely exposed to headlines such as “Facebook ‘could have prevented Lee Rigby murder’” and, more recently, “Instagram ‘helped kill my daughter’”. While there is no evidence for these claims as they have been presented by the media, the implication that tech firms bear sole responsibility for online safety issues has taken root. The dominant political and mainstream media tactic appears to be to lay these problems squarely at the doors of the companies — ignoring the important roles to be played by central government, law enforcement, local authorities, educators, civil society groups, parents and carers (the list goes on). It’s important that we draw attention to the extent to which this assumption and others are the founding premises for the measures suggested in the white paper on online harms.

A question of prevalence

We have a reasonable expectation that responses to crime and safety problems will not only be based on evidence, but also be proportionate to the size of the problem. It stands to reason that policy makers therefore need to know how prevalent various online harms are before they can combat them effectively. This statement in the executive summary of the White Paper grabbed my attention:

“Given the prevalence of illegal and harmful content online, and the level of public concern about online harms, not just in the UK but worldwide, we believe that the digital economy urgently needs a new regulatory framework to improve our citizens’ safety online.”

Later on in the document, the authors elaborate further:

“Illegal and unacceptable content and activity is widespread online, and UK users are concerned about what they see and experience on the internet. The prevalence of the most serious illegal content and activity, which threatens our national security or the physical safety of children, is unacceptable.” (2)

This implies that the authors know the prevalence of illegal and harmful content online [I’ll deal with the level of public concern later]. They cannot possibly know this, because that data set doesn’t exist.

We know a certain amount about the volume of child sexual abuse material reported to hotlines established to receive reports from members of the public, such as the Internet Watch Foundation in the UK, and other hotlines that are members of the INHOPE network [full disclosure: I serve on the Advisory Board of INHOPE]. Both organisations publish statistics on the reports they receive and, usefully, on the geographical location where child sexual abuse material is hosted. Not every country has a hotline, so these numbers reflect only those instances in which a member of the public has been minded to report material to the hotlines that exist. At the time of writing, INHOPE has 46 member hotlines in 41 countries.

What we don’t know is how much child sexual abuse material has not been found, or how much is seen but not reported, either because the viewer has not been minded to report it, or because there is no national hotline to receive the report. Project Arachnid, a crawler developed by the Canadian Centre for Child Protection (C3P), detected 5.1 million unique web pages hosting child sexual abuse material in a six week period. Facebook, meanwhile, reported having removed 8.7 million items of child sexual abuse material in a three month period. Viewed simply in terms of numbers, we can be reasonably confident that this is a big problem worthy of government attention.

We have some prevalence data on child sexual abuse material because there has been a concerted effort to collect it in the last ten years. To the best of my knowledge, we do not have similar data for terrorist content online (although I would love to be corrected on that one). In fact, I’d go so far as to say that I’m pretty certain the UK government does not have prevalence data on the absolute amount of terrorist content on the web. They may well have a figure for the pieces of content identified by the Met Police’s Counter-Terrorism Internet Referral Unit within any given time period. As Home Secretary in 2015, Theresa May stated that the CTIRU was “taking down about 1,000 pieces of terrorist material per week from the internet” [there are a number of things wrong with this statement, but I’ll leave them for another time]. In terms of numbers alone, it certainly doesn’t seem as prevalent as child sexual abuse material.

Viewing online harm in terms of numbers alone is of course crass in the extreme. We also need to assess impact, which I will discuss in greater detail below. The White Paper mentions the “level of public concern” as an impetus for a regulatory response to online harm. At several points, it refers to Ofcom’s reporting on citizens’ exposure to and concerns about a variety of harms online. Keep that word “variety” in mind — we will need it later.

The latest edition of Ofcom’s Online Nation report is useful in that it allows us to compare exposure and concern in the same population. Some findings of note in the data presented in the interactive report:

  • People surveyed aged 16+ appear to be more concerned about having their data stolen than they are about child sexual abuse images.

This of course raises the question of how one measures a level of concern. When we say that something is of greater concern to us, do we mean that it troubles us more often, or that we consider it to be more serious? If the latter, then I must confess to being troubled by the fact that adults in the UK appear to be more concerned about financial loss than the rape of children. I have already emphasised elsewhere that we all have urgent work to do to tackle online child sexual exploitation as a public health issue. The figures above do make me wonder to what extent persistent sensational media coverage of the issue may in fact have desensitised the public, to such a degree that two thirds of the population do not express concern over it. It would be interesting to test that theory.

The next question we must answer is, what is the threshold for acting on public concern? If we apply ‘referendum logic’, wouldn’t we need more than 50% of the population to be concerned about something before the government could be said to have a mandate to act upon it? Among the key indicators of this report, only one concern affects more than half the population. It is the concern expressed by 55% of participants over 16 that children will experience bullying, abusive behaviour and threats.

  • The data also reminds us that we should not confuse concern with prevalence.

According to the data, more people have received spam emails (34%) than are concerned about them (28%). That in itself sounds reasonable, although I doubt the inference that 66% of people have not come across spam in the last twelve months. Who doesn’t receive spam these days?

Encouragingly, just 2% of people surveyed in the UK have encountered CSAM, which suggests that efforts to remove material from public online spaces are proving successful. We should not assume that this figure is representative of other countries, or of the internet as a whole.

What is clear from Ofcom’s data is that there is a mismatch between people’s concerns about online harms and their actual experience of them. It’s not the first time we have seen such a discrepancy in the UK. Remember the British Crime Survey? This has demonstrated consistently that fear of crime is greater than recorded crime. Recognising that many crimes go unreported, and that recorded crime can therefore never be a truly accurate reflection of total incidence, both the British Crime Survey and Ofcom’s Online Nation report should discourage policy makers from delivering responses that are based purely or primarily on levels of public concern. An evidence base demonstrating actual or recorded prevalence and impact should also be taken into consideration and, as I mentioned above, this is something we don’t have for many of the harms identified in the government’s White Paper.
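To make that mismatch concrete, here is a minimal Python sketch of the kind of comparison I have in mind. Only the spam figures (34% experienced, 28% concerned) come from the discussion above; the other entries are invented placeholders, not Ofcom’s actual numbers.

# Compare reported experience of an online harm with reported concern about it.
# Spam figures are taken from the discussion above; the remaining entries are
# hypothetical placeholders, not Ofcom data.
survey = {
    # harm: (percent_experienced, percent_concerned)
    "spam email": (34, 28),
    "hypothetical harm A": (10, 45),
    "hypothetical harm B": (5, 40),
}

for harm, (experienced, concerned) in survey.items():
    gap = concerned - experienced
    direction = "concern exceeds experience" if gap > 0 else "experience exceeds concern"
    print(f"{harm}: experienced {experienced}%, concerned {concerned}% ({direction} by {abs(gap)} points)")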

The differences highlighted should also prompt a public debate about our tolerance of harm, and should lead us to consider whether our expectations of experiencing harm online are out of kilter with those offline. None of us go through life expecting that our home will never be burgled or that we will never witness violence in the street. By this token, we may have a reasonable expectation of exposure to spam, violent online content or unwanted contact at some point. As far as I know, there is no agreed tolerance level for experience of online harm. When the government states that “voluntary efforts [by tech companies] have not led to adequate or consistent steps to protect British citizens online” (2.15), we are entitled to ask what the government considers to be adequate. That is not explained in the White Paper. Rather, we increasingly see governments and policy makers having no tolerance of harm whatsoever. Its very existence is presented as a measure of failure on the part of online service providers.

When the House of Lords’ Select Committee on Communications reports that “Public opinion is growing increasingly intolerant of the abuses which big tech companies have failed to eliminate”, it applies an expectation of elimination that is inconsistent with the offline world. Even in the cybersecurity industry it is generally accepted that there is no such thing as absolute security. Companies take out cyber risk insurance on that very basis, expecting to be compromised at some point, and covering financial and reputational loss on condition that they have taken some steps to reduce the risk of a breach.

That bad people do bad things to other people is a constant. We are on a hiding to nothing — and will arguably misuse resources in the attempt — if we chase an ideal of complete freedom from abuse and harm that is at odds with human behaviour. The efforts of tech companies to reduce or minimise harm are doomed to failure, and our further censure, if governments set the bar unrealistically high. Zero tolerance is a useful political concept, but it is operationally unhelpful.

Not all harms are equal

When talking about tolerance of harm, it’s clear that not all harms are equal. We are naturally more prepared to accept that we may receive the odd spam email than we are to accept the misuse of online platforms to stream the live sexual abuse of children across the world. The response to any harm should be proportionate not only to its prevalence but also to its impact. Rescuing children from ongoing sexual abuse, and preventing offenders from misusing communications technology in this way, should clearly be priorities for online service providers. There is international consensus on this: child sexual exploitation and abuse is outlawed in international legal instruments, to which the majority of countries adhere.

When the authors of the White Paper state that “A key element of the regulator’s approach will be the principle of proportionality. Companies will be required to take action proportionate to the severity and scale of the harm in question” (3.4), also that “the regulatory approach will impose more specific and stringent requirements for those harms which are clearly illegal, than for those harms which may be legal but harmful, depending on the context” (3.5), they are acknowledging that the harms described in the document vary in severity. But by the very act of including activities such as sexting and transmission of material recorded in prisons in the same document as child sexual abuse, the authors suggest they are in some way comparable and that they allow for a similar level of debate on their acceptability. They are not and they do not. By presenting the government’s approach to online harms as somehow novel and innovative, the authors also negate significant international progress made in the fields of countering child sexual abuse and terrorism.

Progress has been made in those areas because governments, civil society groups and tech companies have been able to achieve consensus on the definition of harm and the appropriate response. But even in the area of counter-terrorism we lack an internationally agreed definition of what constitutes “online terrorist content”. The US government’s refusal to sign up to the recent Christchurch Call to Action against online extremism — signed by 17 countries and 8 companies — coincided with the White House’s launch of a campaign against online censorship, thereby indicating that the US government is now out of step with the largest US tech companies. Google, Facebook, Twitter and Microsoft all accept that content removal or restriction is required in the interests of public safety.

Lack of consensus also leaves room for countries to regulate according to their own definitions of what is acceptable online. This is not only an operational challenge for platforms with global user bases, but risks further balkanising internet services and restricting free access to information, particularly in countries where opposition to the dominant regime is constructed as terrorism.

If we can achieve international consensus, measures such as fines for individual members of tech company senior management, and direction from the government concerning child sexual abuse and terrorist material (19, 21) become less controversial. Without consensus, lack of international coordination may result in platforms being fined for non-removal of material that may be subject to active investigation by law enforcement in another country.

Let’s return to the issue of sexting, and specifically the sharing of sexual imagery by young people. About this the authors of the White Paper state: “Sharing sexual images can expose children and young people to bullying, humiliation, objectification and guilt. These images can be shared widely and appear on offender forums or adult pornography sites, or be used to extort further imagery. This puts children and young people in a vulnerable position and at risk of harm. It is a criminal offence to produce, possess or share sexual images of under 18 year olds.” (Box 10)

Inclusion of sexting as a harm is controversial. Academic research increasingly views risk-taking behaviours online such as the exchange of sexual messages and imagery as a natural part of a young person’s development, and advocates for distinctions between consensual and non-consensual sharing. Repressive approaches to these behaviours risk not only denying young people sexual agency and opportunities to navigate and manage risk, but also criminalising young people [for more on this, see Hasinoff’s excellent Sexting Panic]. While in practice UK law enforcement does not prosecute children for sharing sexual images they have produced of themselves, the lasting impression of the government statement above is that sexting is a criminal matter. This seems something of a backward step, which runs counter to the dominant trend in child protection in recent years. One hopes that this is simply a case of clumsy drafting by a non-specialist.

Beware mission creep

As noted above, the definition of content transmission from prisons as a harm is erroneous, and its inclusion in this document is arguably disrespectful to the victims of serious offences including child sexual abuse. When the authors of the White Paper state that “prisoners openly uploading content from prisons can also undermine public confidence in the prison service”, they are distracting attention from the real issue, namely that prison authorities are struggling to enforce bans on the use of unauthorised communications devices and controlled drugs by inmates. When these attempts fail, video evidence showing inmates under the influence of drugs or subjected to violent assault is a matter of public interest. Responsibility for policing this should lie with the government, not with tech companies. The harm is to the government’s reputation, and is not of the same order as serious criminal offences that result in physical or psychological injury to citizens.

The inclusion of this item in the White Paper should encourage us to scrutinise more closely some of the other harms listed, and the language used to describe them. Terrorism is quite reasonably included, but it is worth noting that at several points the more general phrase “national security” is used. For example:

“The regulator will not compel companies to undertake general monitoring of all communications on their online services, as this would be a disproportionate burden on companies and would raise concerns about user privacy. The government believes that there is however, a strong case for mandating specific monitoring that targets where there is a threat to national security or the physical safety of children, such as CSEA and terrorism.” (3.12)

This is further emphasised in the summary to Part 3 of the document: “Where there is a threat to national security or the physical safety of children, such as CSEA and terrorism, we will expect companies to go much further and demonstrate the steps taken to combat the dissemination of associated content and illegal behaviours. We will publish interim codes of practice providing guidance about tackling terrorist activity and online CSEA later this year”.

So, rather than having carve-outs specifically for child sexual abuse and terrorism, terrorism is simply one example of a case in which the government could mandate targeted monitoring by tech companies in the interest of national security. This reads as unnecessarily sloppy at best, at worst as consciously worded to enable the regulator to direct companies to respond to other national security concerns. That the latter may be the intention is perhaps indicated by frequent references in the document to “democratic values” and the British “way of life”, particularly in relation to the phenomenon of disinformation. For example:

“There is also a real danger that hostile actors use online disinformation to undermine our democratic values and principles. Social media platforms use algorithms which can lead to ‘echo chambers’ or ‘filter bubbles’, where a user is presented with only one type of content instead of seeing a range of voices and opinions. This can promote disinformation by ensuring that users do not see rebuttals or other sources that may disagree and can also mean that users perceive a story to be far more widely believed than it really is.” (4)

Malicious distribution of false information (disinformation) is not the only such activity within the government’s regulatory scope:

“Threats to our way of life

1.22 The UK’s reputation and influence across the globe is founded upon our values and principles. Our society is built on confidence in public institutions, trust in electoral processes, a robust, lively and plural media, and hard-won democratic freedoms that allow different voices, views and opinions to freely and peacefully contribute to public discourse.

1.23 Inaccurate information, regardless of intent, can be harmful [my italics] — for example the spread of inaccurate anti-vaccination messaging online poses a risk to public health. The government is particularly worried about disinformation (information which is created or disseminated with the deliberate intent to mislead; this could be to cause harm, or for personal, political or financial gain).

1.24 Disinformation threatens these values and principles, and can threaten public safety, undermine national security, fracture community cohesion and reduce trust.

1.25 These concerns have been well set out in the wide-ranging inquiry led by the Digital, Culture, Media and Sport (DCMS) Select Committee report on fake news and disinformation, published on 18 February 2019. This White Paper has benefited greatly from this analysis and takes forward a number of the recommendations. The government will be responding to the DCMS Select Committee report in full in due course. We also note the recent papers from the Electoral Commission and Information Commissioner’s Office on this and wider issues, and are considering these closely.”

I’ve reproduced this section in full because it demonstrates apparent confusion in the government’s approach to false information. Again, I hope this is just sloppiness rather than something deliberate. A large part of the anti-vaccination information in circulation is not targeted or state-sponsored activity. It should be stated clearly that the responses to deliberate/targeted and passive/unwitting distribution should be different. Reference to the DCMS Select Committee report in the excerpt above reminds me of their very broad definition of “fake news” in their interim report from last year:

▪ Fabricated content: completely false content;

▪ Manipulated content: distortion of genuine information or imagery, for example a headline that is made more sensationalist, often popularised by ‘clickbait’;

▪ Imposter content: impersonation of genuine sources, for example by using the branding of an established news agency;

▪ Misleading content: misleading use of information, for example by presenting comment as fact;

▪ False context of connection: factually accurate content that is shared with false contextual information, for example when a headline of an article does not reflect the content;

▪ Satire and parody: presenting humorous but false stories as if they are true. Although not usually categorised as fake news, this may unintentionally fool readers.

Satire was the focus of my doctorate, so its inclusion in this list naturally worries me. Here too, intention is dispensed with as a measure, leaving open the possibility that not only targeted activity by nation states will be regulated. The scope is too broad as it stands. It needs to be more clearly specified.

Finally, as if to prove that the government’s thinking on regulation is very much in flux, the harm of “Interference with legal proceedings” suddenly appears towards the end of the White Paper, at 7.38. It has not previously been identified as a harm, and it is not included in the table of “Harms in Scope” (p.31). So, once more we are led to question whether this is a well-meaning oversight. The overall impression is of a document written by a number of authors who did not check each other’s work, and therefore did not ensure consistency of content or argumentation. The document’s statements on privacy prompt similar concerns.

The tension between privacy and security is real

In discussing the activities of the proposed regulator, the authors of the White Paper state that “reflecting the importance of privacy, any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels” (33). The large US tech companies already scan private messages for child sexual abuse material using PhotoDNA and other technologies, so it seems strange for the UK government to step back from practices already in use. This section of the document reads as if the government is glossing over the very real tension between privacy and security when it comes to public safety online. I would wager that as a society we have moved beyond the expectation that people can exchange child sexual abuse material privately, so long as they don’t broadcast it.

Equally, when the authors state that “the regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, taking particular care not to infringe privacy or freedom of expression. We are clear that the regulator will not be responsible for policing truth and accuracy online” (36), this seems at odds with the very general description of what constitutes a national security issue within the remit of the regulator, and broad definitions of false information earlier in the document.

Indeed, the government’s own vision highlights some of these key tensions that need to be teased out with concerted public consultation:

“12. Our vision is for:

- A free, open and secure internet.

- Freedom of expression online.

- An online environment where companies take effective steps to keep their users safe, and where criminal, terrorist and hostile foreign state activity is not left to contaminate the online space.

- Rules and norms for the internet that discourage harmful behaviour.

- The UK as a thriving digital economy, with a prosperous ecosystem of companies developing innovation in online safety.

- Citizens who understand the risks of online activity, challenge unacceptable behaviours and know how to access help if they experience harm online, with children receiving extra protection.

- A global coalition of countries all taking coordinated steps to keep their citizens safe online.

- Renewed public confidence and trust in online companies and services.”

This vision is admirable but unachievable in its current state. Where content and use of services is restricted for some, there is inevitably less freedom and openness. In order for this vision to succeed, therefore, thresholds of acceptable behaviour need to be agreed. There also needs to be a recognition that tackling unacceptable behaviour entails a limit to freedom and privacy.

It’s worth noting that a number of the key components of this vision are largely missing from this White Paper. Establishing rules and norms, discouraging harmful behaviour and improving citizen understanding are crucial and laudable, but almost entirely absent. By focusing almost exclusively on the “big tech must do more” angle, the government has missed an important opportunity to tackle these social problems comprehensively with a public health approach. Taking self-injury and suicidal ideation as just one example, it cannot be the responsibility solely of tech companies to provide solutions to the woeful under-resourcing of mental health provision for young people; for illegal activity, we should also give due consideration to the impact of continued cuts to frontline policing in the UK.

The unintended consequences of emotional policy responses

One of the first things you learn as a threat analyst — indeed as any kind of credible researcher — is to leave your emotions at home. Yours is the objective assessment of behaviour that is often criminal, distasteful or abhorrent. As someone who has analysed child sexual abuse for a number of years, I see it as my job to provide factual examinations of offending and victimisation, devoid of sensation, the better to help policy makers respond effectively and proportionately. I look for the same in others. You could say that dispassionate policy making is one of the things I’m passionate about.

So when I see that an evidence base is lacking, I naturally suspect that responses may be emotionally driven. This is what I think has happened in the White Paper on online harms, and it is characteristic of the wider debate on online safety at present. Where robust evidence is lacking, beliefs and anecdotes can be taken to be representative. The much-exploited case of the death of Molly Russell serves to illustrate this. Molly’s father believes that viewing images of self-injury and suicidal ideation contributed to her taking her own life. That may well be so, but there is no publicly available data to demonstrate this. Until there is, it would be reasonable for society to suspend judgement, and refrain from making policy on the basis of one case where the link between social media usage and suicide has not been proved. Accordingly, the report of the All Party Parliamentary Group on social media and young people’s mental health has highlighted the urgent need for a robust evidence base on the topic. And yet, Instagram changed its content policy in response to ongoing media pressure, banning graphic images of self-injury.

As I pointed out at the time in a number of radio interviews, this could have the unintended consequence of discouraging young people in crisis — harming themselves right now — from sharing their distress and receiving assistance. When I worked at Facebook, I was personally involved in working cases where the authorities were able to safeguard a user based on a report from the company on their apparently suicidal content. Since then we have also heard how those recovering from self-injury feel they are prevented from sharing their stories of recovery. By this token, regulation concerning images of self-injury and suicidal ideation could perversely lead to more children dying. I have no evidence for this. I’m simply following the logic, and I find it hard to believe that a chilling effect on this issue is what we want for children in the long term.

Generalisation from one example is a rhetorical device of long standing well known to politicians and the media. So is the use of emotional language, designed to arouse pathos for the speaker’s cause. Breaking the rule of objectivity of threat analysis, the White Paper makes frequent use of emotional language, conveying to readers not only how the government feels about the activities they identify as online harms, but also by implication how citizens should feel about them. For example, this introductory paragraph is designed to persuade that the current situation is intolerable, and to justify government action [my italics]:

“The most appalling and horrifying illegal content and activity remains prevalent on an unacceptable scale. Existing efforts to tackle this activity have not delivered the necessary improvements, creating an urgent need for government to intervene to drive online services to step up their response.” (1.1.5)

Leaving aside those contentious words “prevalent” and “unacceptable”, what each of us finds appalling or horrifying is subjective. We may reasonably argue that it is not the government’s place to tell citizens by what they should be appalled or horrified, nor to assume that all citizens calibrate these in the same way. At worst, we may be forgiven for thinking that the government may be looking to impose its own standards of taste on British society. Now, illegal content such as child sexual abuse material is clearly appalling to the majority of citizens, and rightly so. But this can’t be said of all the activities listed as harms in the White Paper.

On terrorism, the authors of the White Paper state: “Terrorists also continue to use online services to spread their vile [my italics] propaganda and mobilise support (see Box 2). Terrorist content online threatens the UK’s national security and the safety of the public.” (1.1.8) The word “vile” in any context is subjective, sensationalist even. I have seen it used before in the context of online harm, in former Prime Minister David Cameron’s comments about cyberbullying in 2013.

The inclusion of emotional language in the headlines of the document is designed to rouse support but is unscientific. Elsewhere in the White Paper, the authors equate size with risk of harm: “…the regulator’s initial focus will be on those companies that pose the biggest and clearest risk of harm to users, either because of the scale of the platforms or because of known issues with serious harms.” (31)

The rhetorical equation of the platforms themselves with harm and distaste is not only inaccurate, it also establishes an ideological opposition between the government and technology companies which is counter-productive to the proposed regulator’s future work with online platforms, particularly its objective of “developing a culture of transparency, trust and accountability” (3.13).

The need for a robust evidence base

The authors of the White Paper acknowledge the requirement for further evidence of risks and threats related to online harms, stating, “The regulator will take a risk-based approach, prioritising action to tackle activity or content where there is the greatest evidence or threat of harm, or where children or other vulnerable users are at risk. To support this, the regulator will work closely with UK Research and Innovation (UKRI) and other partners to improve the evidence base” (35). This appears to suggest that further evidence is not required to prove the need to respond with regulation, which in turn prompts us to scrutinise more closely the material presented as evidence in this paper. I’ve selected a couple of examples that lead me to believe the authors may either have misinterpreted data — benignly or otherwise — or should have sought more robust evidence to which to refer. On online child sexual exploitation and abuse (OCSEA), the report states:

“There is a growing threat presented by online CSEA. In 2018 there were over 18.4 million referrals of child sexual abuse material by US tech companies to the National Center for Missing and Exploited Children (NCMEC). Of those, there were 113,948 UK-related referrals in 2018, up from 82,109 in 2017. In the third quarter of 2018, Facebook reported removing 8.7 million pieces of content globally for breaching policies on child nudity and sexual exploitation.” (1.1.6)

Now, when an analyst states that a threat is growing, it’s good practice to produce evidence that demonstrates an increase. In this case, the authors could have provided evidence that the number of web domains reported for CSAM, or the number of people convicted of OCSEA offences, had grown year on year. Citing data for the size of the problem now — as the authors have done here — does not demonstrate that the threat is growing. It can only demonstrate that there is a big problem. That kind of sloppiness arouses my suspicions, and these are confirmed by the evidence cited, all of which demonstrates something, but not what the authors are trying to argue here. I suspect in particular that I know more about NCMEC reports from US tech companies than the authors do, and here’s why.

US tech companies use Microsoft’s PhotoDNA to identify and block known CSAM from their platforms. Because all photos and videos uploaded to their sites are run through a filter before they are shared online, reports to NCMEC include attempted uploads that were unsuccessful. In this respect, US companies are doing exactly what they should, ensuring that this material does not appear on their platforms. These reports can be a valuable source of intelligence to law enforcement who may not previously have been aware of offending by these individuals and, more often than not, evidence that companies are preventing circulation of CSAM. There are recognised challenges around identifying previously unseen material, such as new sexual images produced by young people themselves, which some companies are now working on.
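For readers unfamiliar with how this filtering works, here is a minimal Python sketch of the general pattern only: each upload is hashed and checked against a list of hashes of known material before it is published. This is an illustration under stated assumptions, not a description of any company’s actual system; PhotoDNA itself is a proprietary perceptual hashing technology that tolerates resizing and re-encoding, the SHA-256 hash used here is just a stand-in, and the hash value and function names are hypothetical.

import hashlib

# Hypothetical set of hashes of known child sexual abuse material, supplied by
# a trusted clearing house. PhotoDNA uses perceptual hashes; SHA-256 here is
# only a stand-in for the sake of a runnable example.
KNOWN_HASHES = {
    "0" * 64,  # placeholder hash value, not a real entry
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the uploaded bytes match a known hash."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def handle_upload(image_bytes: bytes) -> str:
    # The check runs before publication, which is why blocked attempts can
    # still generate a report to NCMEC even though nothing ever goes live.
    if matches_known_material(image_bytes):
        return "blocked and reported"
    return "published"

print(handle_upload(b"example image bytes"))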

The numbers also include attempts to share some viral material by people who don’t have a sexual interest in children, and who don’t realise that they are committing an offence by sharing material out of misguided humour. So, as an advocate for the public health approach to OCSEA, here is what the NCMEC report data tells me:

• There are a large number of people trying to commit OCSEA offences. Some of them will have a sexual interest in children, and that needs to be managed as a social issue. Tech companies can’t fix that.

• There is an opportunity for governments, civil society organisations and online services to remind people of the laws concerning OCSEA. Effective awareness campaigns could reduce the number of reports made to NCMEC.

Meanwhile, the numbers released by Facebook concerning content removals not only echo the points above, but also rest on a wider definition based on the platform’s terms of service, which prohibit all forms of nudity, not merely that which is deemed to be CSAM. Remember the outrage in Norway when Facebook removed the so-called Napalm Girl photo? They did so because they were applying their policy prohibiting child nudity. Cases like this, and other “innocent” images of children, will be included in the 8.7 million cited, because Facebook’s terms of service are stricter than the law. So the figure should not be taken to be representative of the scale or growth of the threat of OCSEA specifically.

I’m taking it as a given that tech companies must do their absolute utmost to combat OCSEA on their platforms. But I’m also recognising and highlighting that the onus the White Paper places on tech companies does not do justice to the problem of OCSEA or its victims. By focusing on the companies’ efforts, an opportunity has been missed to deliver a more comprehensive response that engages all the relevant stakeholders in society.

My second example is rather brief, and it is one of circular referencing. When seeking to demonstrate the role of online communications in terrorism, the authors of the White Paper state, “All five terrorist attacks in the UK during 2017 had an online element, and online terrorist content remains a feature of contemporary radicalisation.” (1.9) The evidence cited for this is “Speech at Digital Forum, San Francisco by the Rt Hon Amber Rudd, 13 February 2018”. A speech by a politician does not an evidence base make. What’s needed is a reference to the data behind this statement, and this takes us back to the issue of transparency.

Greater transparency is required, and not only from companies

Transparency has been one of the dominant trends in regulation in recent years. It is, for example, one of the key principles of the General Data Protection Regulation (GDPR). Moreover, it’s my belief that one of the reasons why the tech companies are facing regulation now is that both governments and the public have been lacking access to data about both the prevalence of the listed harms on online platforms and what companies are doing to combat them.

The authors of the White Paper are absolutely right to demand greater transparency from tech companies, and they do so at a number of points in the document (for example, 2.12, 47). They also draw attention to the extent to which users may not be aware of how companies process their data and shape their news consumption (47). So, now I’m going to ask you a couple of questions. I’m aware that I may have a somewhat specialist readership. If you are an online rights advocate, law enforcement officer, child protection expert or similar, I would ask that you think yourself into the brain of an “average citizen” for a moment.

• How much do you know about how your government fights terrorism, child abuse or fake news?

• And how much do you know about the effectiveness of those responses?

It seems to me that we may have double standards at play, and that it is within governments’ power to improve transparency and communicate their own efforts more clearly. This would not only be a gesture of good will towards tech companies; it would also ensure greater accountability and quality control.

A few months ago, archive.org provided us with an illustration of how governments don’t always get things right when it comes to content restriction. In a blog, they outlined how they had received 500 mistaken “takedown” requests from France’s Internet Referral Unit concerning alleged terrorist propaganda. Pages for academic research, US government produced broadcasts and reports, and major collections of TV news were on this list. The URLs also allegedly included live recordings of the Grateful Dead and pages belonging to the Project Gutenberg book archive. In my professional experience I have seen that, like any other industry, law enforcement requires quality standards and minimum thresholds to which to adhere, and should be held accountable when these are not met. In this case we might never have known about the slip in quality had not the receiving company gone public.

We need to take stock before moving any further forward

There is ample evidence that real harm is experienced online. That is not in doubt. What I have questioned is whether governments are sufficiently informed to be able to make decisions about what are the appropriate, proportionate and most effective responses. In order for them to be so, a large part of the data on prevalence and existing mitigation measures needs to come from the platforms.

As a former government threat analyst, I can’t help but be concerned when I see glaring gaps in evidence, misinterpretation and misrepresentation of data, generalisations from specific cases, inconsistency between the harms listed in different sections and emotional language on subjects about which we absolutely need to be as objective as is humanly possible. In addition, there is very little apparent engagement in the document with existing academic research on frameworks for understanding which online activities might be harmful — for example, Oxford’s taxonomy of cyber harms, or indeed the 2017 literature review of children’s online activities, risks and safety, produced by the UK Council for Child Internet Safety (UKCCIS) Evidence Group.

The latter contains findings that are important for our understanding of children and young people’s experience of harm online, among them that “Most research is on children’s exposure to risk, with too little on which children come to harm and why, or what the long-term consequences are” (p.3). It arguably should serve as the core evidence for any UK regulatory action on online child safety, but is referenced just twice: once in relation to the positive impact of the internet on young people (Box 7), and once in relation to sexting prevalence (Box 10).

As it stands, the White Paper on online harms is not an evidence base but a political moment. For now, the UK government is simply insufficiently informed to regulate effectively. There remains an opportunity — albeit a fairly small window — for tech companies to share more data with researchers in this space, so that policy responses can draw on more robust evidence. Here’s what else I would like to see:

• Evidence on prevalence and impact to be strengthened before regulation.

• Policy measures that draw on an independently validated taxonomy of online harms.

• Government authorities subject to quality control in their demands to tech companies, to ensure they are correctly calibrated to the nature of the threat, and to prevent mission creep.

• A cooperative approach to tackling online harms, on the model of the We Protect Global Alliance to End Child Sexual Exploitation Online, and the Christchurch Call to Action on terrorism.

I worked in big tech for some years and in law enforcement for a decade. I’m also an advisor to some of the services providing front line support in relation to child sexual exploitation and abuse. I’m better placed than most to see the bigger picture on debates around online harm. The White Paper on online harms makes mention of the We Protect Global Alliance (WPGA) as follows: “The success of the UK government funded WPGA is that it has brought together government, law enforcement, industry and civil society to take a stand against online child sexual exploitation.” (Box 21)

I couldn’t agree more, and am happy to bear witness to the alliance’s ability to deliver cooperative rather than punitive responses to an agreed set of harms. In passing, the White Paper provides the government with its own solution. If only they could see it.


Vic Baines

Former Facebooker and Europol officer. Purveyor of future oriented musings on cybercrime, cybersecurity and internet diplomacy.