In the science-fiction film Minority Report, murder has been eradicated from society through a new kind of policing that can predict and prevent crimes before they happen. In a way, this powerful fantasy of controlling the future has always been present in government policies, which must be directed in part towards an uncertain future. This makes sense: one does not get into power by promising what already is, but by promising something better, something different, to come. Politics has always been entangled with different dreams and nightmares of the future.
When we look at debates on online extremism, we can find a similar political astrology at work. For instance, in his analysis of the policies adopted by the UK government to counter radicalisation (aptly titled “Prevent”), Martin (2014) writes that it is “an ambition of government that demands action at ever-greater temporal remove from the danger it seeks to mediate … Prevent targets the potential future terrorist” (62). What such mechanisms try to do is control the risks posed by an uncertain future or, at the least, “feign control over the uncontrollable — in politics, law, science, technology, economy and everyday life” (Beck 2002: 41). Aradau and van Munster (2008) similarly write that
what is new is not so much the advent of an uncontrollable “risk society” as the emergence of a “precautionary” element that has given birth to new rationalities of government that require that the catastrophic prospects of the future be tamed and managed. In conjunction with a neoliberal rationality of risk, the dispositif of precautionary risk creates convergent effects of depoliticization and dedemocratization … the identification of risk is not the same as recognizing the uncertainty of future events. On the contrary, the identification and management of risk is a way of organizing reality, taming the future, disciplining chance and rationalizing individual conduct (Hacking 1990). Identifying the future as bearing catastrophic risks is therefore linked with visions of order and ways to constitute and reproduce it (24–25).
Yet, as we have seen again and again, this remains an impossible task. The future is, and will remain, open. If the world’s intelligence and resources have long been dedicated to preventing unwanted events from taking place, and those events still happen, something else must be going on here.
It is never the car that you see that hits you when you cross the road.
It is the one you cannot see.
CRACKS IN THE CRYSTAL BALL
We are organising a panel on Extreme Speech and Global Digital Cultures at the European Association of Social Anthropologists conference in Milan in July. Since I will be speaking on this panel, I have been thinking about what I could talk about.
In a way, if you think about it, what researchers do is not that different from reading palms, gazing into crystal balls, rummaging through coffee grounds or interpreting fish entrails. Researching the “dark” side of digital cultures is always part theory, part astrology: what we hope to achieve is to understand the future significance of something we observe today, so that measures can be taken to prevent unwanted outcomes. Why this need to understand extremist speech online? Well, we really don’t want our youngsters to become radicalised and commit violent acts in the future, do we? Why the need to do something about the increase of racist hate speech on social media? Well, we don’t want our democracies to be steamrolled by political polarisation leading to the demise of deliberative democratic discussion, do we? My abstract for the conference reads as follows:
Debates on digital cultures are increasingly turning to their “dark side”: to the many risks associated with online and social media behaviour. The risks associated with the dark side of internet freedoms, however, mostly exist on the horizon of our unknowable futures: in the potential terrorist attack they might facilitate; in the youngsters they might radicalise; or in the polarisation of conflict that the proliferation of hate speech can bring about. A new “dispositif” of risk, scholars have argued, has thus emerged through which these imagined dangers are now contained and controlled. Indeed, many of the political, legal and technological mechanisms adopted have been designed to predict and prevent these future dangers: early warning systems, surveillance and censorship, predictive policing, monitoring of online radicalisation, and forcing internet intermediaries to remove speech that could be considered hateful and offensive.
This debate linking contemporary digital cultures with their imagined risks, however, has been largely inflected by a Euro-American discourse on the war on terror and its particular understanding of risk. This paper hopes to broaden the debate by providing a comparative perspective on extreme speech in global digital cultures and the methods proposed to counter it. Based on research on hate speech on social media in Ethiopia and the EU, the paper proposes a more situated perspective that takes into account the specific cultures of communication, political contexts, and media practices involved in the production of extreme speech, as well as the mechanisms proposed to counter and control it.
Two additional implications came out of all this gazing into the crystal ball. The first is that how we imagine this future is not a neutral process. Rather, it changes over time and is linked to different political constellations and cultural contexts in different parts of the world. As Aradau and van Munster write, this “dispositif of risk is subject to transformation and modification, depending on the knowledgeable representations of the problems and objects to be governed and on the available technologies to produce particular effects in the governed” (25). The research programmes we are involved with, then, cannot be seen as separate from the politics of how the future is imagined and potentially controlled. What role does the internet play in radicalisation? How could we create indicators for an early warning system through which the risk of a violent offence could be determined? There is a reason why big data is one of the most popular research methods aimed at understanding (or controlling) online behaviour.
The second implication, however, is that, despite all this gloom and doom, the future is still open. In other words, instead of seeing the future only as a risk that needs to be controlled, perhaps we should start seeing it once again as a horizon of possibility through which different kinds of futures can be envisioned. The field of futures studies, for instance, has developed many interesting ways of working with this unpredictability of the future. One of my favourite futurists, Ziauddin Sardar, writes about one of them, called backcasting:
The purpose of backcasting is to provide “policy makers and an interested general public with images of the future as a background for opinion forming and decisions” in the hope that “new knowledge and new ideas may lead to the identification of some entirely new options.” Unlike forecasting, which analyses trends and looks towards the future, backcasting uses normative future visions to provide the necessary strategy for its realisation … as such, backcasting allows us to move away from current trends and create desirable futures that are fundamentally different from the current conditions.
Backcasting as a method also involves creating debates among different groups about what the future we would like to see unfold could look like, and what steps would be needed to get there.
A Facebook group was recently set up in Finland where “anti-racists” and “anti-immigrants”, usually engaged in vitriolic abuse of each other, try to carry out a “civilised” conversation instead of trolling one another. A big part of this experiment is asking questions of the other group. There have, of course, been a few excesses on the page but, whatever its outcome, it got me thinking. Gazing into my crystal ball, maybe one thing we should now be doing about the dark side of internet freedoms is also to start having dialogues in which the kinds of future we would like to see can be debated?
Rather than complaining about what is wrong and worrying about the risks, perhaps we should once again start imagining the future we would actually like to end up in, and see how (and if) we could still get there.
Aradau, C. and van Munster, R. (2008) “Taming the Future: The Dispositif of Risk in the War on Terror”, in Amoore, L. and de Goede, M. (eds) Risk and the War on Terror. New York: Routledge.
Beck, U. (2002) “The Terrorist Threat: World Risk Society Revisited”, Theory, Culture & Society 19 (4): 39–55.
Martin, T. (2014) “Governing an Unknowable Future: The Politics of Britain’s Prevent Policy”, Critical Studies on Terrorism 7 (1): 62–78.
Sardar, Z. (2013) Future: All That Matters. London: Hodder & Stoughton.