TikTok, Social Media Hate And National Security — Time To Bring In The Auditors

Max Beverton-Palmer
Published in Digital Diplomacy
Jul 31, 2020 · 5 min read

We can’t make social media safe from the Chinese government or from anti-Semites without proper information and regulation. We need to open up the bonnet of social media to independent scrutiny and expert audit.

We open in an Essex garden. A woman exclaims “what a lovely evening — wish the girls were here for a beveragino”, the cue for more Essex women to jump out of the bushes asking “did someone say beveragino?”. Simple, irresistibly joyful, and viral.

Spend an hour on TikTok watching this, swimming pool pranks, and dance videos, and it’s difficult to imagine that the platform is considered a national security risk so serious that it could be banned.

After the UK government followed the US and announced that Huawei is to be removed from UK 5G networks by 2027, attention has turned to TikTok. Mike Pompeo has said that the US is considering a ban, India has already banned the app, and Japan’s lawmakers are considering restrictions. But the risk that TikTok presents to the West is not the same as the one presented by Huawei. Banning TikTok would not be the same as removing telecoms equipment from core communications networks.

The concerns about TikTok and its relationship with the Chinese government centre on the sharing of personal data, political censorship, surveillance, subtle manipulation of audiences, and poor moderation standards around harmful content. But apart from the question of outright censorship of Chinese political content, these are classic concerns levelled constantly at US-owned social media companies.

The heart of the problem is not only geopolitics, but a damaging information asymmetry between society and social media platforms. Governments, regulators, and civil society simply don’t know enough about the platforms, the way they moderate, the way they recommend and prioritize content, and a host of other processes that are kept opaque to the outside world. In the absence of knowledge, it’s not surprising that speculation about malicious intent rushes to fill the gap.

TikTok’s risk profile is even more unknowable than most platforms’. The app is entirely driven by a recommendation algorithm that is a generation ahead of Facebook, Twitter, and Instagram’s scrolling news feeds. This “rogue chaotic algorithm” is what makes the app so engrossing, and it is a commercial secret. Users have seen TikToks about the Hong Kong protests and the Uighur human rights abuses removed, but TikTok denies censorship. The risk of this, at the extreme, would be the promotion of content that is specifically disruptive to elections or political institutions, but there is little way to assess that risk beyond conjecture.

Is TikTok a national security risk? Is Facebook doing enough to prevent the spread of misinformation and conspiracy? Do Twitter and Instagram have robust processes in place to take down anti-Semitism?

The answer to these questions is probably no, but users, civil society and governments have little way of coming to a good answer. So much of the operation and decision-making, both human and machine, within social media is shielded from view, which is a potentially huge risk both to society and to the platforms themselves, as talk of bans and advertiser boycotts has demonstrated.

Transparency reports and oversight boards are the solutions pushed by social media platforms. While informative about the efforts taken to remove content, they do not shine a light on the true anatomy of online harms or the processes to address them.

In the shadow of the House Judiciary antitrust hearings, the new TikTok CEO announced this week that TikTok would be opening up its algorithm to regulatory scrutiny. This is a welcome step, but the terms of that openness will be critical.

A better solution would be to create a new tier of properly independent expert auditors, charged with scrutinizing the processes and actions of platforms to tackle online harms. The auditors would be accountable to an independent regulator, like the one proposed in the UK, where new legislation is likely to impose a duty of care on social media platforms. This regulator would set the standard for the audit and could take action off the back of the findings.

Effective scrutiny means asking the right questions, and having the technical skills to keep up to date with highly innovative, ever-changing multinational organisations. Periodic audit would ensure there is an alternate source of informed expertise on how social media works, beyond the companies themselves.

Facebook’s recent Civil Rights Audit demonstrates this can be done effectively when there is enough access and support within the company. The author of the report, Laura W. Murphy, had “a partially dedicated team of 15+ employees across product, policy, and other functions”. This ought not to be a one-off but a regular process, and a cost of doing business if you are a large social network.

This would be good for the platforms. It would allow them to demonstrate the steps they have taken, have them independently verified, and put a free-speech firewall between them and direct government management of their services. For advertisers, it would give reassurance that their ad dollars aren’t funding hate. For the public, it would offer protection by creating the right incentives for platforms to have proper processes in place to reduce harm. For civil society groups, it would give them information and data on where the risks are. For governments, it would allow them to properly assess national security risks from domestic and foreign threats.

There should not necessarily be radical transparency to the public. Online platforms are engines of both society and the consumer economy, so bad actors and businesses alike have an incentive to game the algorithms to get an edge. But there should be an independent third party with access to confidential business information, which can verify and assess the public statements of action taken by social media companies, incentivize the creation of new, robust processes, and reveal blind spots from a position of knowledge.

The 13th-century Persian-Tajik poet Ibn Yamin described four types of men with different levels of awareness of the world. In a world of social media, we are all his fourth kind, the “one who does not know and does not know that he does not know”, who Yamin says “will be eternally lost in his hopeless oblivion”. We don’t know enough about TikTok or other social media companies to know whether we’re in hopeless oblivion. We do know that TikTok is already enhancing Western culture and bringing joy to millions, as Facebook and Twitter did and still do. To get back on the right path, we need proper, well-considered regulation and independent scrutiny.

— — —

Full policy reports and analysis available here:
