Crackdown? Er, no — the sensible reality of the intimidation report

The Committee on Standards in Public Life report into intimidation in public life received a substantial amount of attention before it was even published; it says something about the report's importance that a leak made the front page of The Times (alongside similarly-themed stories on the front pages of The Sun and Daily Mail). What was reported was all about a 'crackdown' on social media companies, which, if you actually read the report, is not quite what the committee said. Instead it was speculation on a leaked recommendation, which was then matched by violent disagreement based on the implications of what someone may or may not have been about to say. Ironic! This was a clear attempt to set the context of the debate before the Committee had a chance to. It attenuated sensible debate in the media, but thankfully it didn't destroy the chance to make real progress on the issues.

Anyway, the most important take-aways from the report are two things:

- The issue of intimidation in public life through social media is real, serious, and threatens the functioning of our democracy

- Everyone has a role to play in setting a new standard of behaviour, and supporting each other as we put that standard into practice

This is evidently the case. At BCS, as we've looked at this topic with Demos, we've come to the same broad conclusion. We're looking to play our role as an independent public-interest organisation to help bring people together, and have had constructive and positive discussions with Facebook, Twitter, and the main political parties about how that might work.

The report, and the tone of voice of the Committee, suggest a frustration with the lack of engagement; the Committee made it clear that the response from the social media companies didn't match its expectations of what a socially responsible organisation does. Technologists and the organisations they work for must take responsibility for the role they play in society. How people respond in these circumstances is a matter of ethics and values; either you espouse positive ones and live up to them, or it's better to admit you don't care. Hypocrisy is what everyone despises.

Yet it’s not quite that simple; the speed of change is also a factor. I think that what has emerged has, believe it or not, taken the social media companies by surprise. That may sound ridiculous, but there is a fundamental misunderstanding. Social media companies are not established corporate institutions in mature markets, and that has implications. While they are smart and strategic, on some level they don’t know what they are doing, how it all works and what the outcome is, any more than the rest of us. We are all grappling with what’s happening as a consequence of social technology. Initial reactions from them have understandably had a bit of a ‘corporate risk management’ flavour, and they now need a clear and unequivocal ‘socially responsible’ flavour.

There is cause for hope. The people working in those organisations are still people, and they generally do want to work ethically, and they need to be given the opportunity to demonstrate that and act on it. What I hear from inside those organisations, and the posture they’ve had towards us, reflects that growing realisation and a genuine desire to play a positive and constructive role.

In fact, at the launch of the report, Simon Milner, speaking for Facebook, said much the same thing publicly. There was a positive acknowledgement that this is a serious issue and that Facebook has a duty to participate in the fix. That's excellent progress. Nick Pickles from Twitter has been very constructive in discussions with us, eager to make progress together. This is good news.

I'm not an apologist for the social media platforms, but I am looking to recognise constructive behaviour and to encourage more of it. If we all turn on the social media companies, and there are plenty who want to do so, it becomes substantially harder for them to engage and less beneficial if they do. This is one reason we are looking to provide a safe space and support for them to collaborate with political parties to address some of these issues. Another good reason is that this is simply how our own values play out: collaboration, setting an expectation of ethical behaviour, creating a space for dialogue.

The job of work, once people get together, is to find innovative approaches to alter the ecosystem of intimidation. Not enough is known about how to practically reduce intimidation and its impact. It is therefore extremely important that the social media companies are involved in unearthing what works, as they are the only ones with the data and the means to analyse it. They can't, and shouldn't, attempt it alone, but they must participate.

Taking a moment to walk through how this might go…

Imagine if a poorly-thought-through requirement to take down content were thrust on social media companies. That could damage our freedom of speech whilst having no positive impact. If the content stays up long enough to be harmful, but the eventual take-down is brutal and captures legitimate speech, that would be the worst of both worlds. Just as likely, the misconceived attempt would fail as it became clear it was unworkable and undesirable, and the momentum towards solutions would evaporate. Obviously that is not what the Committee's report is calling for, nor what helps the social media companies, nor what serves the public!

Another hypothetical approach — and I stray dangerously towards solution-mode, but bear with me — is as follows. If, on investigation, it turns out that 80% of intimidatory content is written or re-posted with a knee-jerk reaction, then perhaps an algorithm could suggest a cooling-off period. “Our system has detected what might be an abusive post. If you still want to post this, click ‘post’ in 2 minutes. In the meantime watch this helpful video on the impact of abuse”. Something like that would have no real impact on free speech but could have a dramatic impact on abuse. Maybe. Hypothetically.
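To make the hypothetical concrete, here is a minimal sketch of such a cooling-off mechanism in Python. Everything in it is an assumption for illustration: the flag-word list stands in for a real abuse classifier, the two-minute delay mirrors the example above, and the class and method names (`CoolingOffQueue`, `submit`, `confirm`) are invented for this sketch, not any platform's actual API.

```python
import time

# Toy stand-in for a real abuse classifier (hypothetical word list).
FLAG_WORDS = {"idiot", "traitor", "scum"}


def looks_abusive(text: str) -> bool:
    """Crude check: flag the post if any word matches the toy list."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAG_WORDS)


class CoolingOffQueue:
    """Holds flagged posts for a cooling-off period before publication.

    The clock is injectable so the delay can be tested without waiting.
    """

    def __init__(self, delay_seconds: float = 120.0, clock=time.monotonic):
        self.delay = delay_seconds
        self.clock = clock
        self.pending = {}  # post_id -> (text, time the post was flagged)

    def submit(self, post_id: str, text: str) -> str:
        """Publish clean posts immediately; queue flagged ones."""
        if not looks_abusive(text):
            return "published"
        self.pending[post_id] = (text, self.clock())
        # This is where the user would see the "still want to post?" prompt.
        return "cooling-off"

    def confirm(self, post_id: str) -> str:
        """Allow a queued post through only after the delay has elapsed."""
        text, flagged_at = self.pending[post_id]
        if self.clock() - flagged_at < self.delay:
            return "please wait"
        del self.pending[post_id]
        return "published"
```

The point of the sketch is the shape of the intervention, not the classifier: nothing is deleted and nothing is blocked outright, the user simply has to come back after the delay and actively confirm, which is exactly the friction the hypothetical 80% knee-jerk case is meant to absorb.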

I am not for a second suggesting that is the answer; I am suggesting that we need to think creatively, investigate and innovate. With ideas, evidence and space to test, alongside the report's recommendations for other participants, perhaps we could see a real positive impact. Very smart people work for these companies, and they should be given scope to address the issues in ways that make sense for the platform, the users and the full range of concerns (like free speech), and that ultimately, of course, ADDRESS THE PROBLEM. It's too early to move straight to implementation on the most contentious aspects. On the other hand, it's too late to hope it will all sort itself out on its own. There are also some recommendations in the report, such as collaborative work on conduct standards and guidance, that we just need to get on with.

So for anyone who cares to approach this issue from an ethical standpoint, I think there is a good way to respond and a bad way. The bad way is easy — throw muck, settle grudges, delight in attacking; do what the report explicitly stands against. The good way is that if you don’t like a solution, come up with a better one…but whatever you do, take a bit of responsibility for solving this, don’t just pour petrol on the issue.

This is, I'm sure, the intent of the authors of this report. What I personally like most about the Committee's report is the incredibly serious and thoughtful ethic of those who prepared it. It's an informative and clear tour of a dizzying array of issues, from machine learning through to electoral law, and for anyone who actually reads it rather than just reacting to what someone else said about it, there is a lot to take in and consider. The Committee live up to their name. I don't know that in hindsight all of it will turn out to be right, but I do know that this is a moment in history where the debate has the chance to move forward, and I for one want to ensure it does.

Tech, Politics, change for good, FBCS FIoD FRSA — views my own