The Online Harms White Paper: Tensions and Omissions

The UK government yesterday published its new Online Harms white paper. There is a long way to go before any legislation actually appears, but this is potentially the most significant development in the history of internet regulation in the UK.

The white paper is an extremely broad miscellany of problems, diagnoses, and recommendations. It’s difficult to see all of the proposals surviving intact by the time the government attempts to translate them into law.

The proposals exhibit a central tension between what the government terms “rules” and “norms.” The government says it wants to develop both “rules and norms” for the internet (p. 6) or, even more controversially, “the right norms and rules for the internet” (p. 25, my emphasis).

For governments, “rules” are usually more straightforward than “norms.” But the awkward concatenation in the white paper is a symptom of the challenge of dealing with what makes the internet so different from traditional print and broadcast media. The formulation “rules and norms” reveals a deep uncertainty about how far any government in a liberal democracy is prepared to go in regulating the behaviour of large numbers of its ordinary citizens, in contrast to the older, but still thorny, problem of how to regulate content produced by a small number of editorial gatekeepers in traditional media organizations. The formulation also masks the origins of many of the online harms that are currently tugging at the fabric of liberal democracy.

While rules are created by, and enforced through, law, the norms that matter online emerge and evolve through people’s actual behaviour when they interact with each other and the affordances of platforms. Governments might seek to introduce rules to encourage the emergence of certain norms, but there is no guarantee of success. This is because some of the most powerful norms contributing to the present crisis of public communication are shaped by the design of social media platforms. And these designs have been determined by the business models of the social media and online service companies themselves.

The white paper says that online norms should “discourage harmful behaviour” and that “citizens” should “understand the risks of online activity” and “challenge unacceptable behaviours.” But how will such changes in norms be achieved without altering the business models of the major online platforms and encouraging the development of alternative platforms based on different models?

The government defines the scope of online harms as involving “companies that allow users to share or discover user-generated content or interact with each other online” (p. 8). In an extraordinarily broad phrase, the government admits that these services “are offered by a very wide range of companies of all sizes, including social media platforms, file hosting sites, public discussion forums, messaging services and search engines.” But while there are some brief mentions of the companies with the largest shares of these markets, it is odd that the white paper does not focus more clearly on the obvious problem: the concentration of too much power in too few hands.

Finally, though it is still early days, the white paper takes an unnecessarily narrow view of the origins of disinformation. At many points, it reads as if disinformation appears online spontaneously or is solely the product of foreign espionage.

In the UK context, we have a highly partisan mainstream press that skews right and has now adapted to the rhythms of social media. More generally, political parties, campaign groups, and the political marketing industry are rapidly embedding data-driven online micro-targeting in their campaigns, often under the expert guidance of the social media platforms themselves, which value the business. And much of the false and misleading information now in circulation travels through private, encrypted messaging services such as WhatsApp.

All of this means that false and misleading information is often introduced by political and media actors of various kinds, for a variety of strategic reasons, before being shared across social media and private messaging by a wide range of individuals and organizations.

In a recent large-scale survey of 2,005 UK social media users, representative of the adult population, conducted by the Online Civic Culture Centre (O3C) in collaboration with Opinium Research, more than half (57.7 percent) of British social media users told us that they had come across news on social media in the past month that they thought was not fully accurate. Among those who shared news on social media, 42.8 percent said that they had themselves shared inaccurate news in the past month, and a startling 17.3 percent said that they thought the news was made up when they shared it (Online Civic Culture Centre, Loughborough University, forthcoming).

The white paper’s mention of education and digital literacy programmes to combat disinformation is to be welcomed, but as they stand, the government’s proposals contain few clues about what kinds of evidence might inform the development of such programmes.

A key focus should be the complex origins of false and misleading information, and the equally complex factors that lead social media users to share such information online.