DIGITAL LITERACY VS THE ANTI-HUMAN MACHINE: A PROXY DEBATE FOR OUR TIMES?

Huw Davies
Jun 14, 2019


(For the VOX-Pol report that asks: is digital literacy a solution to hate speech, extremism, and associated activities online?)

INTRODUCTION

Sir Tim Berners-Lee, whose HTTP protocol enabled the Web, has recently been quoted as saying, “we have demonstrated that the web has failed”. Instead of “serving humanity”, he said, we now have a “large-scale emergent phenomenon, which is anti-human” (Brooker, 2018; n.p.). Given that he has previously argued that the Web was a humanistic artefact of the Enlightenment (Berners-Lee et al., 2006), a “social machine” (Berners-Lee & Fischetti, 1999; 172), we can assume that by ‘anti-human’ Berners-Lee means the Web has become a place where tolerance, rational thought, and scientific epistemologies that promote human progress have been overwhelmed by their binary opposites.

This volume is about extreme digital speech, but it is difficult to isolate its exponents from the rest of the anti-human machine. Extreme speech is often codified so that group insiders know what is being said but outsiders cannot identify it as illegal. This includes visual codes in memes and the co-opting of well-known brands to produce a whole iconographic subculture of extremist thought (Miller-Idriss, 2018). Extreme speech is also a product of conspiracy theories: people think their views are justified because they are fighting a malevolent hidden power. And extreme speech has to be contextualised within the whole ecology of digital media, in which tests of the boundaries of what is acceptable (questioning the number murdered in the Holocaust, for example) prepare the territory in advance for extreme speech to flourish. So while only some definitions of digital literacy address extreme speech specifically (such as Vaikutytė-Paškauskė, Vaičiukynaitė, & Pocius, 2018), the many techniques and strategies of the anti-human machine, such as disinformation campaigns, cannot easily be disaggregated from extreme speech and are therefore included here within the same equation.

The anti-human machine is a powerful combination of ‘the social’ (human actors) and ‘the technical’ (the affordances of digital technologies). It is placing demands on democracies to address ideologies and methods that are undermining their ability to function. In this essay I explain what the anti-human machine is, how it functions, and why responses to its effects are eluding the Web’s big platforms (Facebook, Twitter, Google, etc.) and confounding national governments. I then show that, although none of its advocates claim it is a panacea, digital literacy is being offered as an alternative or additional remedy to the anti-human machine (see, for example, European Commission (2019)). However, existing digital literacy solutions, particularly in the UK, are inadequate responses to the challenges our democracies now face. Calls for digital literacy have a long way to go before they catch up with political reality.

There are many ways that technologies have been exploited in the service of the anti-human machine. Photoshopped images; forged and redacted videos, including so-called ‘deep fakes’ (using simulation technology to create new synthetic content from existing material, for example, to confect a politician’s speech); the deliberate misreporting of events on partisan ‘news’ platforms; magazines and media outlets that cite pseudo-history, pseudo-science, and junk research (research published in low-quality journals that have no rigorous peer review process) and misrepresent genuine research; the use of ‘bots’ (both automated accounts and humans behaving as bots) to overwhelm social media feeds via comments, shares, or replies; disinformation campaigns; personalised, targeted ads with undisclosed funders; recommender systems that disseminate prejudice and propaganda; and many other techniques used to leverage network effects to nourish ideologies that produce hate and extremist speech are all now in play. The people exploiting these technologies may be loose groups of individuals who show each other online that they share grievances or affiliations. They may participate in more organised or co-ordinated groups that have a history offline and have been rejuvenated by the Web, such as Stormfront. They may be paid or motivated by state actors and/or may act within swarms that come together to respond to events, memes, postings or tweets, only to disperse before the next event (Ganesh, Chapter 2 in this volume). Or they can be part of imagined communities that involve temporary, contingent, and messy coalitions of all of the above. What can be done about this socio-technical threat?

HOW THE ANTI-HUMAN MACHINE IS RESISTING THE PLATFORM INTERVENTION

The major platforms have had to become quasi-governmental gatekeepers of public discourse. However, after a series of scandals (such as Facebook hosting the live feed of the recent gun massacre in Christchurch, New Zealand), their failure to protect their users from extremism has drawn critical attention worldwide. This criticism has intensified because the platforms’ gatekeeping systems lack transparency; when they do apply their corporate policies, their decisions to remove content or groups (such as Britain First) can often appear arbitrary. As Crosset (this volume) shows, platforms apply the rules inconsistently depending on the country within which they are operating, and even their own employees responsible for content moderation find the guidelines confusing and contradictory. Aside from their terms and conditions, there is no publicly available methodology to follow their adjudications about what content they find unacceptable and why; precedents are set then violated, and there is neither an effective way to challenge their decisions nor a process to hold them to account.

Platforms are also struggling to accommodate the wider political context within which hate or extremist speech can be codified and normalised by mainstream populist politicians. So, if they block an extremist, we may legitimately ask why they do not also block the mainstream politicians who are amplifying this extremist’s message. Given that the major platforms have billions of users continually adding content, even if they had a transparent and publicly accountable methodology for removing violations of their terms and conditions, they would still have to rely on algorithms to identify breaches. At such a scale of content generation, even a 1% failure rate lets through too many transgressions for platforms to be able to deal with manually (see Gallacher, this volume). Moreover, once it is possible to understand the logic of these algorithms, they can be gamed (to an algorithm, depictions of violence could, for example, be made to look like admissible video game footage).
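To make that failure-rate arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure is an illustrative assumption (no platform publishes its numbers in this form), but the orders of magnitude show why manual review cannot absorb the errors:

```python
# Back-of-the-envelope moderation arithmetic. All figures are assumed,
# illustrative values, not any platform's published statistics.
decisions_per_day = 2_000_000_000  # hypothetical automated moderation decisions per day
failure_rate = 0.01                # the 1% failure rate discussed above

# Items the algorithm gets wrong each day (wrongly kept or wrongly removed).
erroneous_decisions = decisions_per_day * failure_rate

# Assume a human moderator can carefully review roughly 500 items per day.
reviews_per_moderator_per_day = 500
moderators_needed = erroneous_decisions / reviews_per_moderator_per_day

print(f"Erroneous decisions per day: {erroneous_decisions:,.0f}")            # 20,000,000
print(f"Moderators needed just to re-check them: {moderators_needed:,.0f}")  # 40,000
```

Even under these conservative assumptions, re-checking the errors alone would require a workforce of tens of thousands, before any of the appeals, context judgements, or language coverage that moderation actually demands.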

HOW THE ANTI-HUMAN MACHINE IS DEFEATING GOVERNMENT INTERVENTION

The alternative to self-regulation by platforms is government intervention. Civil rights lobbyists are anxious about handing governments the power to decide what is acceptable on platforms because it can easily be abused. Libertarian groups argue that governments removing content is censorship that violates our right to free speech. If such decisions are handed over to the public, in today’s political climate, reaching a democratic consensus about what is acceptable to censor on platforms will be challenging if not impossible. If an agreement is accomplished, how do we prevent majoritarianism violating the rights of vulnerable or marginalised groups? And if platforms are unwilling or unable to follow guidelines that emerge from this consensus, how do governments enforce them?

The European Commission is currently formulating legislation to give its member states the power to compel platforms to remove extremist content and hate speech, and to fine them up to 4% of their global revenue if they fail to comply (a sum illustrated below). But it remains to be seen what happens if the platforms refuse to pay these fines, or divert some of their billion-dollar reserves and profits to financing legal teams that challenge any rulings in expensive and protracted court proceedings. Therefore, beyond government intervention, fixing the platforms’ users through educational programmes, to prevent digital technologies being used in anti-human ways, appears relatively attractive.
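To put the 4% ceiling in perspective, a one-line calculation suffices; the revenue figure below is a hypothetical assumption, since no specific company is named here:

```python
# The 4% ceiling comes from the proposed legislation; the revenue is assumed.
annual_global_revenue = 55_000_000_000      # hypothetical: $55bn per year
max_fine = 0.04 * annual_global_revenue

print(f"Maximum fine: ${max_fine:,.0f}")    # Maximum fine: $2,200,000,000
```

A fine in the low billions is significant, but for firms with billion-dollar reserves (as noted above) it may be worth contesting in court rather than paying.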

THE DIGITAL LITERACY SOLUTION

Digital literacy has a long and complicated genealogy that includes information, computer, and media literacy (see Nichols & Stornaiuolo (2019) for a full discussion). In England, digital literacy is currently delivered to school-aged children through the national curriculum in computer science (House of Commons Science and Technology Select Committee, 2016). The curriculum focuses on “up-skilling” target populations, equipping them for a “21st century jobs market”, making liberal use of verbs such as “thrive” and “participate” at a “level suitable for the future workplace and as active participants in a digital world” (Department for Education, 2013; n.p.).

However, recent discussions within government show many stakeholders believe this form of digital literacy is no longer adequate. For example, the 5Rights Framework, cited by the UK’s House of Lords Communications Select Committee (2017) on digital skills, states that, via schools, digital literacy should help children and young people “critically understand the structures and syntax of the digital world”, “to be confident in managing new social norms”, and understand “issues of data harvesting and impact of persuasive design strategies” (Kidron, Evans, & Afia, 2018; n.p.). And, following a report from the National Literacy Trust, the UK All-Party Parliamentary Group on Literacy recently stressed that children and young people need to be taught the “critical literacy skills” to identify “fake news” (National Literacy Trust, 2018; n.p.). After its investigation into disinformation and fake news, the UK House of Commons Digital, Culture, Media and Sport (DCMS) Committee went further, concluding that “digital literacy should be the fourth pillar of education, alongside reading, writing and maths” and delivered “as part of the Physical, Social, Health and Economic curriculum (PSHE)” (DCMS, 2019; n.p.).

There is another discussion we should be having about establishing an evidence base for digital literacy, including whom to target and why (especially within a profoundly unequal educational system) and what to do about people who are unwilling or unable to access formal education programmes. However, I am focussing here on the mismatch between the digital literacy solution as proposed in the policy circles described above and the challenge of the anti-human machine. While research into this area is nascent, those who use the Web and the Internet in anti-human ways are in many respects already digitally literate. They have an acute understanding of the “syntax of the digital world” and “persuasive design strategies”. Indeed, these strategies, together with effective reputation management, and, to further quote the 5Rights Framework, “the confidence to manage new social norms” (Kidron, Evans, & Afia, 2018; n.p.), have enabled extremists to reach and mobilise a wider, global audience online. Such malign actors have developed techniques of attention hacking to increase the visibility of their ideas through the strategic use of social media, memes, and automated bots, as well as by targeting journalists, bloggers, and influencers to help disseminate content (Marwick & Lewis, 2017).

As already mentioned, many extremists, who know their views are normatively transgressive, offensive, or illegal, have adapted by codifying their language and normalising their discourse, successfully crossing the boundaries between marginal and mainstream media, including effectively manipulating the affordances and weaknesses of the platforms. The Daily Stormer’s ‘style guide’, for example, is “particularly interested in ways to lend the site’s hyperbolic racial invective a facade of credibility and good faith” (Feinberg, 2017; n.p.). Anti-Semitism, white supremacism, Islamophobia, and misogyny are often perpetuated through irony and an intimate knowledge of Internet culture (ibid.). Jihadists such as Islamic State have also successfully exploited social media platforms, often by mimicking the production techniques and action tropes of Hollywood blockbusters, and they have even produced jihadist computer games (Atwan, 2019).

The antidote to this may be more critical thinking, but prominent members of the ‘intellectual dark web’ community have consciously co-opted the norms and language of academia into their strategies to create a parallel criticality. Critical thinking now means profoundly different things to different people. Adherents of the ‘intellectual dark web’ already believe they have reached the apogee of their form of critical thinking about the digital and broadcast media, politics, and science. It is, perhaps, telling that sociologist Bruno Latour has recently been reflecting ruefully on his critique of science (Vrieze, 2017) because, in its disingenuous application, it has empowered conspiracy theorists and climate change deniers who argue that the scientific research on climate change is politically motivated and compromised by its ‘biased’ sources of funding. The digital ecosystem within which all these ideologically affiliated users and groups operate, on websites such as Reddit, 4chan, and 8chan, provides learning opportunities for young people who are drawn to these figures and their ideas.

In response, Emejulu & McGregor (2016) argue that digital literacy must confront the politics of the anti-human machine head on. They therefore locate digital education within the “wider discursive and material struggles for equality and social justice” (ibid.; 3), so that it actively pushes back against reactionary ideologies. However, there is an obvious danger that such a form of digital literacy, which includes online activism, may produce even more polarised and mutually energising antithetical crowds fighting over what is permissible online. We need to ask: how does such a form of digital literacy become part of the solution and not simply another casualty of the so-called online culture wars (boyd, 2018)? Given that the platforms monetise engagement through advertising, and antagonism boosts engagement, they are the only winners in the culture wars. Politically engaged users are also sharing sensitive personal data about their views on public platforms for commercial and governmental surveillance agencies to capture.

There is nothing in any of the existing calls for digital literacy in the policy circles above that addresses the bigger picture here, which is the co-constitutive relationship between: psychological mechanisms such as confirmation bias, motivated reasoning, and cultural cognition; the heuristics and techniques we use to confirm our in-group status, including aggression towards the Other; the evolving social norms that define how we engage with each other online; the history and ideologies of racism and misogyny, including their theological origins; the tactics of populism and extremism; the ideologies and political economy of platform capitalism; and the deliberate exploitation of ignorance of all of the above.

This means the problem is much bigger than skills. Many of us are not interested in fact-checking or the effort of critical thinking if we are rewarded for being seen to be endorsing disinformation on social media. This produces correlations as people align their views across unrelated domains to conform to the prejudices of their ideological in-group. For example, high levels of racial resentment are strongly correlated with reduced agreement with the scientific consensus on climate change (Benegal, 2018). As a result, we live in a society that is often grossly misinformed, as very few people go away to check the facts on emotive issues. For example, we hugely over-estimate the proportion of Muslims in Britain: we think 21% of British people are Muslim when the actual figure is 5%, and we think 24% of the population are immigrants, nearly twice the real figure of 13% (Ipsos MORI, 2017).
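The scale of these misperceptions is easy to quantify; here is a minimal sketch that computes the overestimation factors from the Ipsos MORI figures quoted above:

```python
# Overestimation factors derived from the Ipsos MORI (2017) figures in the text.
figures = {
    "Muslim share of British population": {"perceived": 21, "actual": 5},
    "Immigrant share of population":      {"perceived": 24, "actual": 13},
}

for item, f in figures.items():
    factor = f["perceived"] / f["actual"]
    print(f"{item}: perceived {f['perceived']}%, actual {f['actual']}%, "
          f"overestimated by a factor of {factor:.1f}")
# Muslim share: factor 4.2; immigrant share: factor 1.8 ("nearly twice")
```

An overestimate of more than fourfold on an emotive issue illustrates how little fact-checking happens when a belief confirms in-group prejudices.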

Therefore, no digital literacy programme is ever likely to work unless it produces reflexive critical thinkers, motivated to challenge their own thinking and positionality: people who know and care when they are being sold a biased or racist view of history or pseudo-science, or when they are being manipulated. As boyd (2018) identifies, digital literacy needs to be about epistemology: how do we know what the facts are, and where do we go to find them? It also needs to be about the methods and methodologies that support our epistemologies, as the key to thinking for ourselves, understanding claims, and validating knowledge without having to rely on heuristics such as appeals to authority.

It is therefore easy to fall into the trap of digital literacy inflation, where we call for ever more sophisticated forms of digital literacy that eventually become a whole multidisciplinary curriculum that we call education. However, the values and practices that should be the foundation for such an education are now being framed, within the anti-human machine (and beyond), as those of an ideologically perverse, smug, self-serving, distant, liberal or left-wing academic elite unable or unwilling to address the concerns of ‘the people’, including what Kaufmann (2018) calls “white ethnic loss” (n.p.). Given how the Web, with the techniques described above, is being used to undermine social cohesion and our collective capacity to address climate breakdown, digital literacy has become the site of a proxy debate about one of the most important challenges of our time: how do we rescue knowledge from the anti-human machine?

REFERENCES

Atwan, A. B. (2019) Islamic State: The Digital Caliphate. University of California Press.

Benegal, S. D. (2018) The spillover of race and racial attitudes into public opinion about climate change, Environmental Politics, 27:4, 733–756, DOI: 10.1080/09644016.2018.1457287

Berners-Lee, T., Hall, W., Hendler, J. A., O’Hara, K., Shadbolt, N., & Weitzner, D. J. (2006). A Framework for Web Science. Foundations and Trends in Web Science, 1(1), 1–130.

Berners-Lee, T., & Fischetti, M. (1999). Weaving the Web. London: Orion Business.

boyd, d. (2018). You Think You Want Media Literacy… Do You? Retrieved from: https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2

Brooker, K. (2018). “I was devastated: Tim Berners-Lee, the man who created the Web, has some regrets.” Vanity Fair. Retrieved from: https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets

Emejulu, A., & McGregor, C. (2016). Towards a radical digital citizenship in digital education. Critical Studies in Education, 1–17. http://doi.org/10.1080/17508487.2016.1234494

Feinberg, A. (2017). This Is The Daily Stormer’s Playbook. The Huffington Post. Retrieved from: https://www.huffingtonpost.co.uk/entry/daily-stormer-nazi-style-guide_n_5a2ece19e4b0ce3b344492f2

Ganesh, B. (2019). Right-wing extremist and digital speech in Europe and North America. This volume.

Guess, A., Nagler, J. & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). Retrieved from: http://advances.sciencemag.org/content/5/1/eaau4586.

House of Commons Science and Technology Select Committee (2016). Digital skills in schools. Retrieved from: https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/270/27006.htm

House of Lords Communications Select Committee (2017). Interactions with the digital world. Retrieved from: https://publications.parliament.uk/pa/ld201617/ldselect/ldcomuni/130/13007.htm

Ipsos MORI (2017). Perils of Perception. Retrieved from: https://www.ipsos.com/ipsos-mori/en-uk/perils-perception-2017

Kaufmann, E. (2018). “White majorities feel threatened in an age of mass migration — and calling them racist won’t help.” New Statesman. Retrieved from: https://www.newstatesman.com/politics/uk/2018/10/white-majorities-feel-threatened-age-mass-migration-and-calling-them-racist-won

Kidron, B., Evans, A., & Afia, J. (2018). Disrupted Childhood: The Cost of Persuasive Design. Retrieved from: https://5rightsframework.com/static/5Rights-Disrupted-Childhood.pdf

Marwick, A., & Lewis, B. (2017). Media Manipulation and Disinformation Online. Data & Society. Retrieved from: https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf

Miller-Idriss, C. (2018). “What makes a symbol far right? Co-opted and missed meanings in far right iconography.” In M. Fielitz & N. Thurston (Eds.), Post-Digital Cultures of the Far Right (pp. 123–136). https://doi.org/10.14361/9783839446706-009

Nichols, T. P., & Stornaiuolo, A. (2019). Assembling “Digital Literacies”: Contingent Pasts, Possible Futures. Media and Communication, 7(2), 14–24. DOI: 10.17645/mac.v7i2.1946

Department for Education (2013). National Curriculum in England: Computer Science Programmes of Study. Retrieved from: https://www.gov.uk/government/publications/national-curriculum-in-england-science-programmes-of-study

House of Commons Digital, Culture, Media and Sport Committee (2019). Disinformation and ‘fake news’: Interim Report. Retrieved from: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36310.htm#_idTextAnchor061

European Commission (2019). European Media Literacy Event. Internetkunskap (Internet Knowledge) — Media Literacy and Digital Skills for Adult Citizens. Retrieved from: https://ec.europa.eu/futurium/en/european-media-literacy-events/internetkunskap-internet-knowledge-media-literacy-and-digital-skills

National Literacy Trust (2018). Fake news and critical literacy: final report. Retrieved from: https://literacytrust.org.uk/documents/1722/Fake_news_and_critical_literacy_-_final_report.pdf

Vaikutytė-Paškauskė, J., Vaičiukynaitė, J., & Pocius, D. (2018). Research for CULT Committee — Digital Skills in the 21st century. European Parliament, Policy Department for Structural and Cohesion Policies, Brussels. Retrieved from: http://www.europarl.europa.eu/RegData/etudes/STUD/2018/617495/IPOL_STU(2018)617495_EN.pdf

Vrieze, J. (2017). Bruno Latour, a veteran of the ‘science wars,’ has a new mission. Science. Retrieved from: http://www.sciencemag.org/news/2017/10/bruno-latour-veteran-science-wars-has-new-mission

Zuckerberg, M. (2016). Retrieved from: https://www.facebook.com/zuck/posts/10102830259184701


Huw Davies

#digitalsociology researcher at the Oxford Internet Institute