Hate, Home Affairs, and Social Networks

Thoughts on the Home Affairs Committee’s “Hate crime: abuse, hate and extremism online” press release & report

The press release begins thusly; I’ll respond in-line with the thoughts and questions it provokes. The report is much longer and will (hopefully) be dealt with in a follow-up posting.

HEADLINE: Biggest, richest social media companies “shamefully far” from tackling illegal and dangerous content

Really? How far should they go, and how should we measure that distance?

This is a question we shall explore, below.

BULLET: Government should consult on stronger law and system of fines for companies that fail to remove illegal content

More, below.

BULLET: Social media companies that fail to proactively search for and remove illegal material should pay towards costs of the police doing so instead

This would be enforced how?

A license fee for social media companies?

A fee for every independent blog, perhaps?

Only the big ones? Then how would the small ones be policed? At what point would a small one be “captured”?

BULLET: Social media companies should publish regular reports on their safeguarding activity including the number of staff, complaints and action taken

How many “staff” will equate to a four-thousand-node cluster of computers running machine-learning / artificial intelligence software?

What if a safety system became entirely automated / “artificially intelligent” and the number of human “staff” dropped to one, or zero?

Would that lead to parliamentary criticism, or praise?

Would a “more effective” system need to be state-approved in order to avoid criticism (or fines?) being applied for deploying “fewer staff”?

How would this regulation chill implementation of better security systems, leading to the unintended consequence of regulation yielding less safety?

What if the complaints are orchestrated by pressure groups, attempting to artificially inflate metrics to seek leverage over a platform?

More, below.

INTRO: In a short report published today, Monday 1 May 2017, the Home Affairs Select Committee has strongly criticised social media companies for failing to take down and take sufficiently seriously illegal content — saying they are “shamefully far” from taking sufficient action to tackle hate and dangerous content on their sites.

To say “shamefully far” is to suggest that there is some finish line, some set of goal posts; where would that goal be, precisely?

Would it involve social networks policing all speech in the United Kingdom, lest the Police have to do it for themselves?

Should the social networks be mantled with the role of the Police?

Would it not be more reasonable to build a society where hate speech was not an issue?

The Committee recommends the Government should assess whether failure to remove illegal material is in itself a crime and, if not, how the law should be strengthened. They recommend that the Government also consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.

Does the committee therefore agree that (say) accidental removal of content which is not illegal is also a liability which users may contest in a court of law?

It seems reasonable to believe that — faced with criminal liability — social networks will tend towards self-protection and implement “over-blocking”: excessive removal of even slightly questionable material, with a consequent chilling of free speech.

This may also lead to economic loss (e.g. lost newspaper click-throughs); thus there must be a stick to inhibit excessive blocking, as well as a carrot.

Government should, according to the report, consult on proposals requiring companies who fail to properly search for illegal material to pay towards the cost of policing and enforcement activities on their platforms.

Thus, literally, we see the committee calling for Social Networks to adopt the role of law enforcement in the United Kingdom.

Given their immense size, resources and global reach, the Committee considers it “completely irresponsible” that social media companies are failing to tackle illegal and dangerous content and to implement even their own community standards.

The large social networks already have extensive teams dedicated to the protection of the people who use them, not least because they are for-profit companies. Why, then, are the small, independent, unknown social networks, where “extremism” may flourish unhindered, not a point for consideration? Perhaps because they are not fit to bear fines?

Also, as an aside: when acquired by Facebook, the company “WhatsApp” had 420 million users worldwide, but only 55 employees; does this constitute “immense size & resources” from the committee’s perspective?

The Committee criticises the “unacceptable” refusal by companies to reveal the number of people they employ to safeguard users, or the amount they spend on public safety. Quarterly, transparent reports which cover safeguarding, enforcement of standards, as well as the number of staff working on safety should be published.

Again, if such figures are published, how are they to be interpreted?

Computers are meant for automation of tasks, and there are calls for the wholesale replacement of safety-people with algorithms. If and when this happens, what value is there in a metric of “number of staff”?

More, below.

The Committee also criticised social media companies for putting profit before safety — noting quick action is taken to remove content found to infringe copyright rules, but that the same prompt action is not taken when the material involves hateful or illegal content. The Committee recommends that the same expertise and technology should be applied for illegal and abusive content.

This paragraph is critical, and is expanded in boldface paragraph 30 of the final report, which reads as follows, included here for context; note also the implicit call for censorship of search queries and results:

30. Social media companies must be held accountable for removing extremist and terrorist propaganda hosted on their networks. The weakness and delays in Google’s response to our reports of illegal neo-Nazi propaganda on YouTube were dreadful. Despite us consistently reporting the presence of videos promoting National Action, a proscribed far-right group, examples of this material can still be found simply by searching for the name of that organisation. So too can similar videos with different names. As well as probably being illegal, we regard it as completely irresponsible and indefensible. If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar content to stop extremists re-posting or sharing illegal material under a different name. We believe that the Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.

“Copyright” enforcement is far simpler than the policing of free speech.

It is possible for software to identify that “this JPEG image appears similar to a pre-existing copyrighted photograph” or “this video seems to contain background music from a pre-existing copyrighted music album”.

The issue with “hateful and abusive content” is that it is often entirely novel, not pre-existing to be matched, and so must be judged from perspectives of “mens rea” and “actus reus” — perspectives of guilty intent and guilty action. Such judgement currently requires human actors, or human reporting.

Further: one would not want to censor criticism of the so-called Islamic State where such criticism “republished” keywords or fragmentary text from one of the ISIS press releases; but such errors would be possible, even frequent, with such a system. Computers are not good at recognising human intent from context, leading to “over-blocking” of a kind for which Facebook were recently criticised, censoring the “Napalm Girl” image as full-frontal nudity of a prepubescent girl.
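To make that asymmetry concrete, here is a minimal, illustrative sketch of the fingerprint-matching approach on which copyright enforcement rests. It assumes the third-party Python libraries Pillow and imagehash; the filenames and the matching threshold are hypothetical, chosen purely for illustration. The point is that such a system can only answer “does this resemble something already in my database?”; it cannot judge the intent or context of wholly novel material.

```python
# Illustrative sketch only: perceptual hashing of the kind used for
# copyright matching. Assumes the third-party "Pillow" and "imagehash"
# packages; the filenames below are hypothetical.
from PIL import Image
import imagehash

# Fingerprint of a known, pre-existing copyrighted photograph.
known_hash = imagehash.phash(Image.open("copyrighted_original.jpg"))

# Fingerprint of a newly uploaded image.
upload_hash = imagehash.phash(Image.open("new_upload.jpg"))

# Hamming distance between the two fingerprints; a small distance
# means "visually near-identical", even after resizing or re-compression.
distance = known_hash - upload_hash

if distance <= 8:  # threshold chosen purely for illustration
    print("Probable match against known content: flag for review")
else:
    print("No match, but this says nothing about the image's meaning")
```

Matching against a library of known works is a well-bounded engineering problem; judging the mens rea behind a never-before-seen post is not.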

The Committee recognises the effort that has been made to tackle abuse on social media, such as publishing clear community guidelines, building new technologies and promoting online safety for example for schools and young people but it’s very clear from the evidence received that nowhere near enough is being done. Social media companies’ enforcement of their own community standards is weak, haphazard and inadequate. Often smaller companies have even lower standards and are making less effort.

This smacks of “something must be done”; other than gut speculation, where is the evidence that “enforcement…is weak” when the committee demonstrate elsewhere that there is no extant public metric for “successful” or “proper” blocking and takedowns?

Individual cases, shorn of context, are regrettable, but they do not demonstrate the systemic failure that the committee seeks to present.

The Committee says Government should now conduct a review of the entire legal framework around online hate speech, abuse and extremism and ensure the law is up to date. Enforcement needs to be much stronger. What is illegal offline should be illegal — and enforced — online.

This is laudable, so long as the differences between the online and offline worlds are understood and respected; but they are not.

See “stadiums”, below.

The Committee found:
- repeated examples of illegal material not being taken down after they had been reported, including: [deletia]

The committee cite the examples which they found, but not how many examples they did NOT find; viz: either they expect a 100% perfection rate in blocking, takedowns and censorship, or else they are presenting only “failures” without presenting “successes”. This is no way to construct evidence-based policy.

The Committee says:
• Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in itself already a breach of the law, and how the law could be strengthened in this area.

More, on law, below.

• Football teams are obliged to pay for policing in their stadiums and immediate surrounding areas on match days. Government should now consult on adopting similar principles online — for example, requiring social media companies to contribute to the Metropolitan Police’s CTIRU for the costs of enforcement activities which should rightfully be carried out by the companies themselves

The Internet is a domain of speech, not of fist fights and broken pint glasses being shoved into people’s faces; even with recovery of costs from football teams as cited, the Government could make quite a saving on policing by banning attendance at football matches and requiring fans to stay at home watching the match on BBC iPlayer instead — thereby neatly delineating the true difference between the online and real worlds.

Perhaps one solution, therefore, is to not conflate hate speech in social media with the hate-speech-plus-considerable-real-and-historical-physical-thuggery risks of football matches.

• Social media companies should publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received on prohibited content, how the companies responded to reports, and what action is being taken to eliminate such content in the future. Transparent performance reports, published regularly, would be an effective method to radically drive up standards and encourage competition between platforms to find innovative solutions to these persistent problems. If they refuse to do this voluntarily, the Government should consult on forcing them to do so.

And how will these metrics be normalised for:

  • number of people who use the platform?
  • number of messages sent using the platform?
  • types of message content on the platform?
  • content which is sent/received end-to-end encrypted?
  • whether the enforcement mechanisms are automated?
  • whether the enforcement / content-review teams are outsourced globally?
  • how the amount of content (and outsource reviewers) ebbs and flows with traffic/demand?

There are so many variables as to make even the major social networks not meaningfully comparable, especially given the differing texture of each network: on Twitter all messages are public by default; Facebook has closed groups and pages; YouTube is video (relatively time-consuming to create) plus per-video commentary.

How can these platforms — or how can differing forms of abuse on these platforms — be meaningfully compared?
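As a sketch of the normalisation problem, consider two platforms reporting identical headline takedown counts; every number below is invented, purely for illustration.

```python
# Illustrative sketch with invented numbers: why raw takedown counts
# are not comparable across platforms of different sizes and textures.
platforms = {
    # name: (reported takedowns per quarter, messages posted per quarter)
    "Platform A": (50_000, 10_000_000_000),  # huge, mostly public posts
    "Platform B": (50_000, 200_000_000),     # small, mostly closed groups
}

for name, (takedowns, messages) in platforms.items():
    per_million = takedowns / (messages / 1_000_000)
    print(f"{name}: {takedowns:,} takedowns, "
          f"{per_million:.1f} per million messages")

# Identical headline figures; wildly different rates. And even the
# normalised rate says nothing about encrypted traffic, automation,
# outsourced review, or orchestrated reporting campaigns.
```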

• The interpretation and implementation of community standards in practice is too often slow and haphazard. Social media companies should review with the utmost urgency their community standards and the way in which they are being interpreted and implemented, including the training and seniority of those who are making decisions on content moderation, and the way in which the context of the material is examined.

This statement stands upon emotive assumptions: “too often slow and haphazard” — how often is “too often”?

How would the Government count a single “blocking” event which “takes down” 10,000 “hate” images after 2 hours of deliberation regarding the intent/mens rea of anyone sharing the image?

Would that be too fast, or too slow?

What if (a-la “Napalm Girl”) that 10,000-image-deleting database query accidentally deleted appropriate Guardian, Times or Telegraph criticism of so-called Islamic State? Or what if it blocked access to a Metropolitan Police warning about extremism?

Would that 2 hours still be “too slow”?

Aside from all of these assumptions, what about the fact that such content review is globally outsourced?

How do they expect context to be reviewed for more than 1 billion pieces of content, every day? [ Article: https://medium.com/@alecmuffett/a-billion-grains-of-rice-91202220e10e ]
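For a sense of scale, here is a back-of-envelope calculation; the ten-seconds-per-item review time and the single daily shift per reviewer are assumptions made purely for illustration.

```python
# Back-of-envelope arithmetic, with assumed figures, for the scale
# problem of human context review.
ITEMS_PER_DAY = 1_000_000_000   # "more than 1 billion pieces of content"
SECONDS_PER_ITEM = 10           # assumption: 10 seconds of human attention each
SHIFT_SECONDS = 8 * 60 * 60     # assumption: one 8-hour shift per reviewer per day

items_per_reviewer_per_day = SHIFT_SECONDS / SECONDS_PER_ITEM  # 2,880 items
reviewers_needed = ITEMS_PER_DAY / items_per_reviewer_per_day

print(f"Arrival rate: {ITEMS_PER_DAY / 86_400:,.0f} items per second")
print(f"Reviewers needed: {reviewers_needed:,.0f}")  # roughly 347,000 people, every day
```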

Also: What if the communities are moderated by volunteers, internationally, such as in the case of Reddit? Would the moderators require state-approved training or vetting?

• Most legal provisions in this field predate the era of mass social media use and some predate the internet itself. The Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in democracy are maintained — but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism.

…and in pursuit of restricting hate speech, we must not impinge upon the rights of those we are trying to protect, nor upon their freedoms to create forums in which they may share; and yet, from the above, it may be that that is precisely what is proposed.

At length:

Chair’s comment
Yvette Cooper MP, Chair of the Committee, said:
“Social media companies’ failure to deal with illegal and dangerous material online is a disgrace. They have been asked repeatedly to come up with better systems to remove illegal material such as terrorist recruitment or online child abuse. Yet repeatedly they have failed to do so. It is shameful. These are among the biggest, richest and cleverest companies in the world, and their services have become a crucial part of people’s lives. This isn’t beyond them to solve, yet they are failing to do so. They continue to operate as platforms for hatred and extremism without even taking basic steps to make sure they can quickly stop illegal material, properly enforce their own community standards, or keep people safe.
“In this inquiry, it has been far too easy to find examples of illegal content from proscribed organisations — like National Action or jihadist groups — left online. And while we know that companies for the most part take action when Select Committees or newspapers raise issues, it should not take MPs and journalists to get involved for urgent changes to be made. They have been far too slow in dealing with complaints from their users — and it is blindingly obvious that they have a responsibility to proactively search their platforms for illegal content, particularly when it comes to terrorist organisations. Given their continued failure to sort this, we need a new system including fines and penalties if they don’t swiftly remove illegal content.

The chair’s comment recaps the same assertions, without context or metrics, from which the rest of the report suffers: emotive language (“They have been asked repeatedly to come up with better systems…Yet repeatedly they have failed to do so” — how would they know?) and urgency, without evidence.

Finally, Yvette Cooper writes:

“Social media companies need to start being transparent about what they do. The idea that they can’t tell us what resources they put into public safety for commercial reasons is clearly ludicrous.

What if what they told you were truthful facts that would not be meaningful to anyone outside that company?

What if these “transparency” metrics were effectively sui generis, i.e. numbers that would not be meaningful to anyone, anywhere else?

What hue and cry would result then? Because that is precisely what the numbers will be.

“The government should also review the law and its enforcement to ensure it is fit for purpose for the 21st century. No longer can we afford to turn a blind eye.”

The law is Parliament’s business; but it’s well for the law to be informed by more than mere calls that “something must be done!”

Press release follows: