Tech regulation may be coming, but that’s not the end of the story. Campaigners must keep up the pressure

Reset Narratives · Published in Inter-Narratives · Aug 4, 2022 · 5 min read

By Imran Ahmed, CEO and Founder of the Center for Countering Digital Hate

Over the past two decades, digital communications technologies have transformed our lives, allowing us to communicate unimaginably quickly and cheaply, at an enormous scale.

In the process, a small coterie of Silicon Valley billionaires have become de facto gatekeepers of how we share information, create community, establish norms of attitude and behaviour, negotiate our society's values, and even decide what counts as "facts".

These billionaires not only own and administer those platforms; they alone set and enforce the rules. But what is truly telling is not so much the rules themselves, which are designed to reassure users and appease critics, as their enforcement.

Try posting a commercially copyrighted music video on YouTube and it will be taken down incredibly quickly by technology that searches constantly for patterns in audio and video. Try posting a nipple on Facebook and image-recognition algorithms will identify and censor it.

But while you can’t find a lot of nipples on Facebook, you sure can find a lot of Nazis.

I founded and launched CCDH in 2018 in response to an unprecedented rise in online hate, extremism, disinformation, and conspiracy theories, which have driven heightened polarisation and social division. Since then, we've witnessed events that would once have been unimaginable, including the assassination of sitting politicians and attempts to overturn free and fair elections.

Hate, abuse, and disinformation have proliferated on platforms because their owners turn a blind eye. Mass shootings, white supremacy, social injustice, anti-science misinformation, and normalised climate change denial are the results of enforcement choices made by Big Tech. Disinformation and hate are now more visible, present, and normalised in our lives, dividing us, degrading our information ecosystem, and undermining democracy, human progress, and collective action.

We know it, and, as Facebook whistleblower Frances Haugen revealed, Big Tech does too. Indeed, her greatest revelation was the extent to which the company had sought to conceal harms rather than reduce or mitigate them, and had failed to deal with known offenders.

Without rules being enforced, social media is a phenomenally potent medium for those disseminating hate and lies, allowing them to reach billions of people at zero marginal cost for each additional user and message.

Last year, we published The Disinformation Dozen, showing that 12 sophisticated and highly active superspreaders were responsible for 65% of anti-vaccine misinformation circulating on Facebook and Twitter at the time.

Snake-oil salesmen have been around forever. But social media gives them unprecedented access to billions of users, allowing them to flood platforms with sensationalist clickbait designed to tempt the susceptible into marketing funnels.

The most effective and efficient way to stop malignant activity, such as the dissemination of disinformation, is to impose consequences: removing offending content, for example, and banning repeat offenders. So why don't platforms do this, even when their own internal research confirms that a small number of bad actors wreak a disproportionate amount of harm?

It comes down to economic incentives. Enforcing the rules well costs money, and actively removing content is antithetical to a business model that seeks to amplify and monetise the most controversial and "engaging" content possible.

Social media isn't a level playing field. Platforms have one job: keep users on the platform for as long as possible so they can serve as many ads as possible. They give us personalised, artificial "timelines" that highlight the most engaging content above all else. And, as numerous whistleblowers have revealed, platforms know that controversial, nasty, negative, harmful content tends to attract the most attention, playing on the emotions of those it aims to win over while inducing reactions from opponents who signal their disgust and anger.

Across our research, we see this phenomenon repeated again and again. The systematic advantaging of hate and misinformation is how extremists draw disaffected young men down misogynist rabbit holes of hate and lies, and how millions of Americans were radicalised into believing the 2020 presidential election had been "stolen". It's why tens of thousands of people died in hospitals, begging their doctors for a vaccine they had feared until it was too late.

When it is politically or economically to their advantage, Big Tech companies are all too happy to express sympathy with those who suffer the burden of their negligence. They claim to be Green. They claim to reject hate. They claim to support fundamental human and civil rights.

But years of research show the hollowness of their claims.

Our researchers have shown that climate denial is given carte blanche despite platforms' own rules. So is pro-Putin propaganda. We showed that Instagram turns a blind eye to hateful misogyny sent to women by Direct Message nine times out of ten. The same was true across all platforms of antisemitism (84% of reported posts ignored), anti-Muslim hatred (89%), and racist abuse targeted at England's Black footballers after the Euro 2020 final (94%). Our researchers have even shown how the next generation of social media technology, the so-called Metaverse, is already a haven for child abusers and groomers.

Platforms may publicly welcome scrutiny, but behind the scenes they fight it at every turn, impeding researchers and spending big to resist regulation.

In one respect, what Big Tech does is predictable and understandable. It is their fiduciary duty to shareholders to evade responsibility and maximise earnings, no matter the broader impact on our societies. But we too can insist on safety, transparency, accountability and responsibility.

Legislation and regulation designed to curb harms are part of the solution, but they are not enough on their own. All of civil society must keep up the pressure: social media affects vast swathes of our lives, so all of us have a duty and a right to push back.

It is not dissimilar to the struggle against climate change. Technological progress comes with costs. The burden of those costs must be borne, at least in part, by those polluting our physical and information ecosystems, in order to disincentivise the production of harms and encourage less harmful alternatives.

We know that, while we can research and advocate for reshaping digital forces, it’s only by working together that we can finally socialise social media.

Imran Ahmed is the founding Chief Executive of the Center for Countering Digital Hate. He is a recognised authority on the social and psychological dynamics of social media and on what goes wrong in those spaces: trolling, identity-based hate, misinformation, conspiracy theories, modern extremism, and fake news. Imran lives in Washington, DC, and tweets at @imi_ahmed
