China Weaponizes Algorithms for Censorship

Claire Talpey
Published in Geek Culture · 3 min read · Dec 9, 2022
Photo: Brady Bellini

Ask anyone whose job depends on social media what they think about algorithms and you’ll likely get a solid variety of negative answers. Want to promote your stream or YouTube channel and put a link into a tweet? It gets throttled and hidden from people’s feeds. Drew an interesting piece of art? It gets cropped and de-prioritized, only showing up if you scroll hard enough. But these grievances, legitimate as they are, don’t capture the real danger of letting algorithms decide which content we get to see.

As you may know, China has recently seen a massive wave of protests across the country, with citizens gathering to demand that COVID restrictions be eased. The protests gained widespread coverage and considerable support worldwide, sparking plenty of discussion across social media platforms. Any political topic or major event will draw spam and troll comments, but algorithms usually filter such things out. What happened with discussions of these protests is more interesting and more insidious. Let’s take a closer look.

Smoke Bombing the Debate

Photo: Bernard Hermant

Most people on the internet are familiar with bot farms: groups of fake accounts employed by organizations and governments to steer discussion away from negativity and topics that might damage their image or profits. They usually operate by disrupting legitimate conversations, riling up the real commenters or dropping in disinformation, which then undermines the legitimacy of the whole debate.

In China’s case, though, a rather blunt tactic was used instead. Any post bearing the relevant hashtags or the names of Chinese cities where the protests were at their peak was brigaded with dozens of comments, all of them either spamming hashtags and links or stringing together nonsensical English word chains. That flood of interactions made the algorithms take notice of the original posts, while at the same time marking them as hotbeds of spam and bot activity, which made the same algorithms suppress them. A toy sketch of that double bind follows below.
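To make the mechanism concrete, here is a minimal, purely hypothetical sketch in Python. The data structures, scoring rules, and thresholds are my own inventions rather than anything a real platform has published; the point is only to show how a ranking rule that rewards raw interaction and a filter that buries apparent spam hotbeds can be turned against the same post by a brigade of junk replies.

```python
# Hypothetical sketch: none of these names or thresholds come from a real
# platform. They only illustrate how engagement-driven ranking plus a
# spam-ratio filter can be gamed by flooding a post with junk replies.

from dataclasses import dataclass, field


@dataclass
class Post:
    replies: list = field(default_factory=list)  # each reply: (text, looks_like_spam)
    likes: int = 0


def engagement_score(post: Post) -> float:
    """More interactions -> higher placement in feeds."""
    return post.likes + 2.0 * len(post.replies)


def spam_ratio(post: Post) -> float:
    """Share of replies that look like spam (hashtag/link floods, word salad)."""
    if not post.replies:
        return 0.0
    flagged = sum(1 for _, looks_like_spam in post.replies if looks_like_spam)
    return flagged / len(post.replies)


def feed_rank(post: Post, spam_threshold: float = 0.6) -> float:
    """Toy ranking rule: promote engaging posts, but bury apparent spam hotbeds."""
    if spam_ratio(post) > spam_threshold:
        return 0.0  # suppressed: exactly what the brigade wants
    return engagement_score(post)


# A protest-related post picks up a handful of genuine replies...
post = Post(replies=[("eyewitness video from the march", False)] * 5, likes=40)
print(feed_rank(post))  # ranked normally

# ...then gets brigaded with dozens of hashtag/link spam comments.
post.replies += [("#### follow-for-follow http://...", True)] * 60
print(feed_rank(post))  # engagement is higher, yet the post is now buried
```

Under these assumed rules, the brigade doesn’t need to out-argue anyone: it only has to push the spam ratio past whatever threshold the platform uses, and the post drops out of feeds on its own.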

It’s not exactly a new style of attack, but it’s still one that platforms don’t handle well. Automatic moderation on Facebook, Twitter and Reddit is more likely to simply block the post in question from showing up, or to hide the replies, leaving users to wonder why the reply count kept rising while no discussion was visible, fueling conspiracy theories and, again, distracting from the original topic.

What makes this semi-successful campaign truly troubling is that it can be confidently traced back to the party that ordered it: the Chinese government.

Algorithm vs. Country

Photo: Li Yang

Recorded Future, a cybersecurity firm specializing in threat intelligence (in simpler terms, the study of cyberattacks such as these), has published a report that ties these spamming attacks back to the CCP. That’s not surprising, and it’s hardly the first attack of its kind to be tentatively connected to a state actor.

Rather, it’s the combination of factors that makes this disturbing: the attack was ordered by a very powerful entity; it was rather successful for its scale; it gamed the algorithms in ways that platforms are still not protected against; and it proved yet again that China’s capacity for censorship goes beyond anything we’re used to in the Western parts of the internet.

Moreover, the likelihood that this method will now be adopted by other parties to disrupt conversations they find uncomfortable is pretty high. The only way to make sure it doesn’t happen again is to completely change the algorithms that rule social media, which isn’t something platforms are interested in doing. It’s a tough situation, and it will be interesting to watch this stand-off develop, because it may well impact issues that we all care about.
