AI after the pandemic

Daniel McQuillan
May 12, 2020


A reflection on necropolitical neural networks and the need to develop ‘knowing-caring’.

tech solutionism

The Covid-19 pandemic has been marked by computer modelling and tech solutionism as much as it has by a global lockdown. Tech solutionism is the term for proposals like proximity tracking apps and digital immunity passports: the substitution of advanced technology for the proper resourcing of epidemiological responses or open political debate about state priorities. Never mind that fluctuating Bluetooth signals, for example, are a very poor proxy for viral exposure. Technological innovation does the job of diverting attention from questions about underlying material and structural conditions.

The tools to hand for modern states are the infrastructures of surveillance and tracking that already pervade daily life, from smartphones to social media. The pandemic has transformed the tricky balance between commercial surveillance and customer unease. Where corporations previously tried to play down their data collection and Cambridge Analytica was a scandal, tech giants can now offer surveillance as a public service. Companies like Palantir, with dubious track records, are suddenly in open partnership with national health services, and facial recognition startups repurpose their tech to read your body temperature at a distance. And yet these same extractive data logics underpin the wider structures of outsourcing, privatisation and precarity that have left societies under-prepared for the pandemic itself.

The overall pandemic response is set within a logic of computational modelling and behavioural modification. The imperceptible multiplication of SARS-CoV-2 in our cells conspires with data science to produce anticipatory governance, where numerical projections of the future become the rationale for state actions in the present moment. It’s important, in the midst of grief for our losses, not to miss the significance of a governmentality based on algorithmic prediction and preemption. Like surveillance, it was already present prior to the pandemic and is set to become a dominating feature of post-pandemic society, in particular through AI.

AI, meaning the technology of machine learning and neural networks, will become predominant in a post-pandemic society, not least because its core operation is the prediction of risks at scale. All actual AI is a form of machine learning, a set of computational methods that learn from data; the more data there is, the better they get. The algorithms of machine learning adapt statistical methods for probabilistic pattern finding and classification. Their power is their generalisability; given the right supply of labelled data, they can be equally applied to predict which cell growth will become cancerous or which customer is likely to make a repeat purchase.
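As a minimal sketch of that generalisability (illustrative only, using Python with scikit-learn and invented stand-in data rather than anything from the text), the same off-the-shelf classifier can be fitted to whichever labelled records it is handed, whether the labels mean ‘cancerous’ or ‘repeat customer’:

```python
# Illustrative sketch only: the same generic classifier pipeline applied to two
# unrelated prediction problems, distinguished only by the labelled data it is fed.
# Assumes scikit-learn is installed; all features and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_and_score(features, labels, new_cases):
    """Fit a generic probabilistic classifier and return P(positive class) for unseen cases."""
    model = LogisticRegression()
    model.fit(features, labels)
    return model.predict_proba(new_cases)[:, 1]

rng = np.random.default_rng(0)

# Hypothetical cell-growth measurements labelled cancerous (1) or benign (0)
cell_features = rng.normal(size=(200, 5))
cell_labels = (cell_features[:, 0] + cell_features[:, 1] > 0).astype(int)

# Hypothetical customer features labelled repeat purchase (1) or not (0)
customer_features = rng.normal(size=(200, 5))
customer_labels = (customer_features[:, 2] > 0.5).astype(int)

print(fit_and_score(cell_features, cell_labels, rng.normal(size=(3, 5))))
print(fit_and_score(customer_features, customer_labels, rng.normal(size=(3, 5))))
```

Nothing in the code knows what the numbers stand for; that indifference is exactly the generalisability, and the interchangeability, being described.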

AI works through reductive abstraction and optimisation. Aspects of the world are transformed into vectors of numbers between zero and one, which are used to calculate a mathematical distance between the algorithm’s predictions and labelled target data. This so-called loss function is painstakingly minimised through a massive number of iterative calculations. The results can be uncanny; AI can recognise faces with greater accuracy than people and drive cars on the open road. But at heart it is mathematical pattern-guessing, achieved by rendering diverse aspects of the world commensurable such that they can be statistically traded against each other. Moreover, advanced AI is highly opaque exactly because of these complex calculations, and it is impossible to directly interpret its judgements in terms of human reasoning.
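To make the optimisation loop concrete, here is a toy sketch in plain NumPy (a hand-rolled illustration under simplifying assumptions, not any particular production framework): inputs already scaled to between zero and one, a squared-distance loss between predictions and labelled targets, and a parameter update repeated until that loss is driven down.

```python
# Toy illustration of loss minimisation: hand-rolled gradient descent in plain NumPy.
# Not a production framework; the data and the linear model are invented for clarity.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 3))            # features already mapped to values between 0 and 1
true_w = np.array([0.2, 0.5, 0.3])
y = X @ true_w                      # labelled target data for this toy problem

w = np.zeros(3)                     # parameters to be learned
learning_rate = 0.1

for step in range(2000):            # many iterative calculations, in miniature
    predictions = X @ w
    error = predictions - y
    loss = np.mean(error ** 2)      # the 'loss function': distance between predictions and targets
    gradient = 2 * X.T @ error / len(y)
    w -= learning_rate * gradient   # nudge the parameters to reduce the loss

print(f"final loss: {loss:.6f}, learned weights: {w.round(3)}")
```

A real neural network does the same thing with millions of parameters and a far more elaborate loss surface, but the underlying move is identical: render the world as commensurable numbers and trade them off against a single objective.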

At the same time that AI is high-tech hyper-abstraction, it is curiously dependent on invisible labour. The data sets it needs to learn from are typically labelled by poorly paid click-workers, who are frequently women from the global south. This workforce is itself assembled algorithmically, via online crowdsourcing platforms. AI is part of a global pattern of racialised, gendered and invisibilised labour practices.

The important point about AI is that it is not aimed at understanding but at intervening. Unlike ordinary science, it doesn’t produce probabilities as a way to test an underlying theory but as a way to enable preemption. The purpose of YouTube’s algorithm, for example, is to present you with a next video that you are most likely to click on, not to ask why there’s a high probability of you taking that action (let alone whether there might be a link to any factors like self-harm or growing radicalisation). The mathematical optimisations of AI are utilitarian and instrumentalist.

AI’s predictions become most problematic when applied to people and to social problems. They are inferential classifications based on ‘people like you’ — so not only do they reproduce data bias, but they are inherently a form of stereotyping. Applying these calculative logics across society will inevitably have an asymmetric impact, as forms of classification and ranking are inseparable from questions of power. The orderings of AI will become forms of segregation leading to continuous partial states of exception, whether that is the denial of cheap car insurance or being prevented from working based on predicted infection factors. These divisive operations of AI will act as an additional downward pressure on the existing social fractures that have been so starkly highlighted by Covid-19.
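A crude way to see what ‘people like you’ means computationally (an illustrative sketch with scikit-learn and invented data, not a description of any deployed system): a nearest-neighbour classifier answers a question about one person by reading off the recorded outcomes of the most similar past records, so whatever bias sits in those records is returned as if it were a fact about the individual.

```python
# Illustrative sketch of inference 'based on people like you' using a
# nearest-neighbour classifier. Assumes scikit-learn; the profiles, labels and
# their meanings are entirely invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
profiles = rng.random((500, 4))                      # stand-in features describing past individuals
past_outcomes = (profiles[:, 0] > 0.6).astype(int)   # historical labels, bias and all

model = KNeighborsClassifier(n_neighbors=5)
model.fit(profiles, past_outcomes)

new_person = rng.random((1, 4))
# The prediction is just the majority label among the five most similar past records
print(model.predict(new_person), model.predict_proba(new_person))
```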

The allegiance of post-pandemic states to anticipatory governance will only boost the hubris of AI. In the eyes of many, its number-crunching ability to convert any kind of data into optimised predictions has no limits. Even before the pandemic, ideas that deep learning could deliver better healthcare than most doctors or better cancer detection than most radiologists were already widely promoted. Machine learning was already being deployed to predict which job applicants would have a successful career or which parents would go on to abuse their children. This is despite its demonstrated fragility, where shifts in the underlying data produce unexpected failure modes and adversarial examples. Prior to Covid-19, the opaque predictions of AI were already being lined up for tricky social interventions and to manage austerity. A post-pandemic society of risk and debt will supercharge this automatising algorithmic solutionism, under the banner of continued neoliberal optimisation.

We are clearly not all in this pandemic together. Whereas one of the vectors for the rapid spread of the virus was international business flights, the most vulnerable include the very care workers who are holding the show together. It’s society’s most vulnerable, those who can least afford to isolate, who are hit hardest. In the UK the death toll in the most deprived areas is double that in the wealthiest, while for black and ethnic minorities it’s up to four times that of the white population. But AI and other technologies of computational prediction, which will be heralded as ways to manage post-pandemic society and the coming climate disruption, are also engines for intensifying those inequalities. They are made for targeting, and lend themselves more to rationing and scarcity than to raising up whole communities. AI’s algorithms are means of stratification, in the long lineage of bureaucratic and statistical methods deployed by institutionalised power.

AI’s function is to discriminate in a technical sense, which maps to a social role of distinguishing between the deserving and the undeserving. In post-pandemic society, therefore, AI becomes fully necropolitical. That is, part of a wider apparatus of governance that is involved in ‘letting die’, where that serves overall goals. In pre-pandemic neoliberalism it was refugees trying to cross the Mediterranean, or disabled people on benefits, who were subject to systemic neglect up to the point of death, and at the height of the initial Covid-19 outbreak it was older people in care homes and care workers themselves. In the ‘forever pandemic’ that will follow, the machinations of computational learning will continue to act both as political obfuscation and engines of systemic neglect.

people’s councils

With the coming of Covid-19, however, there’s been a collective realisation that our lives depend on low-income labour, from care assistants and nurses to warehouse workers and cleaners, much of it contracted under conditions of extreme precarity. And if there’s one thing that the pandemic has made clear, it’s the centrality of care work. Not only the paid care work, which is disproportionately done by immigrants and women of colour, but the care work at home, which becomes newly visible as, for many under lockdown, the home and the workplace become one and the same. These physical and affective labours are what feminists have identified for decades as the work of social reproduction: the unvalued activity that arises from our dependencies and vulnerabilities and has to be taken care of before any economic activity can take place. Social reproduction, as it turns out, really does supersede production.

Clearly a version of tech dystopia beckons us from beyond Covid-19. AI produces both thoughtlessness and carelessness. Thoughtlessness, because the ‘humans in the loop’ won’t be in a strong position to challenge the opaque but authoritative predictions of the systems. Carelessness, because the algorithms abstract away from the myriad knock-on effects that will ripple outwards from their optimising exclusions, especially amongst the most vulnerable and least visible. The question is how to reconstitute our technologies of knowing and doing as matters of care.

In normal times, care and social reproduction are overshadowed by the detachment and abstraction that are common to AI, bureaucracy and business. A techno-politics of care starts with attention to the exclusions and boundaries of a stratified society. Our first question for any AI should not be by what percentage it has improved its score on a dataset but how its application might increase the burden of care or amplify neglect.

In the task of transforming machine learning, our greatest resource is not lakes of surveillance data but situated knowledge. Whereas any failures of AI or tech solutionism are explained away by the need for more data, their actual failure is the promotion of a science-like onlooker consciousness, a perspective that somehow sits outside the situations it is actually influencing. Feminist and post-colonial thinkers have long cast doubt on cast-iron ideas about an empirical knowledge that claims to be free of social history. They suggest that objectivity is stronger when it recognises that knowing always has a standpoint, that all knowledge is situated knowledge. Starting from these overlooked understandings of the world can be a more rigorous approach than relying on AI’s claims to neutrality.

Situated knowledge gives us a way to interfere with the automated sedimentation of injustice in post-pandemic society, by finding ways to start from the perspectives of those at the edges. We can bring care into AI by putting the perspective of social reproduction at the centre. The aim is to challenge the erasure of lived experience by the ideology of efficiency and to generate a counter project to the algorithmic production of carelessness.

The approach proposed here is to introduce into machine learning and AI structures that “slow the universalizing process by unsettling existing assumptions, boundaries and patterns of political action”, in particular the people’s council. People’s councils are bottom-up, federated structures that act as direct democratic assemblies. The mutual encounters and consensus-making of people’s councils are themselves transformative in terms of creating different relationalities. The purpose of people’s councils is to become a mode of ‘presencing’, of forcing the consideration of the unconsidered, or, more fundamentally, of reordering the idea of AI such that its production of pairings of concepts and material effects iterates towards an actually different society.

The idea of people’s councils is rooted in the social histories of workplaces and communities. Introducing them into AI means that the automation that would otherwise exacerbate the power and wealth gap is subject to collective influence. With people’s councils, no labour is invisible. Instead of allowing transcendental knowledge claims that act from outside and above to enforce a post-pandemic ordering, people’s councils accept the limitation of seeing things from diverse points of view. Instead of passing off to machine learning the task of regulating behaviour, people’s and workers’ councils collectivise the task of learning together how to improve our mutual well-being.

knowing, caring

The need to collectively occupy our mechanisms of knowledge production is signalled not only by the necropolitical tendencies of machine learning but also by a brittleness in orthodox science that has been highlighted by the pandemic. The scientific method has been refined over centuries to filter out the bias of individual scientists but remains vulnerable to cultural bias and to the parts of the process that come prior to the scientific method itself, such as who decides the questions to be studied and why. While science is successful under the narrow conditions it sets itself (‘ceteris paribus’ — all other things being equal), it is not able to provide the answers when the evidence base is lacking and the stakes are high. “More data (even ‘reliable data’) and better predictive models cannot resolve the… arbitration of conflicts and dilemmas that appear at every scale”. The same applies to our technologies of knowing; refinements based on computational statistics are swamped by bigger sources of uncertainty. Witness the way Singapore’s highly-rated Bluetooth contact tracing app counted for little when it turned out the government had ignored the thousands of low-status immigrant workers packed tightly in their segregated hostels. We must be guided instead by shared value commitments. Scientific inputs can only be a part of the process that establishes our collective response. The legitimacy required for public agreement can’t be won by an appeal to higher authority but by widespread participation in the process.

The idea of post-normal science, which was first proposed in the early 1990s, deals with these dilemmas by radically extending the scientific method of peer review. The extended peer community is where “all those with an interest have a say, from the experts of various scientific disciplines, to stakeholders, whistle-blowers, investigative journalists, and the community at large.” In the version proposed here, for forms of predictive computation, the role of the extended peer community is filled by the people’s council. It seeks a robust position through different viewpoints and experiences, rather than the technocratic optimisation of disempowered people under assumption-laden models developed by the institutionally and epistemically privileged. With people’s councils as the extended peer community, the issue of behavioural modification no longer arises, because the grassroots participation of people themselves becomes core to a successful response.

Taking together the need to centre social reproduction and the need to extend our empirical methodologies, we can say that post-pandemic computational predictions need to be embedded in ways of knowing that are inseparable from caring. Instead of predicting-preempting, we need to develop an approach of knowing-caring. This is not a substitution of sentiment for the empirical, but an acceptance of the fact that all knowing is immersed, participatory and relational. Rather than seeking to minimise distances in abstract space, for example, it seeks insights in the differences of subjectivities and experiences. The contention is that this form of analysis will act differently in the world. Instead of approaching a problem as a matter of identifying the most risky entity, it uses reflexivity to seek transformations in the shared context. For example, rather than sinking resources into deep learning models that try to predict which members of society will become troublesome, it intervenes through changes that try to improve the situation across the board.

There should be no post-pandemic future for a technology that doesn’t start from the question of social justice. AI is a form of apparatus, one that produces both meanings and material consequences. Like any experimental apparatus, its actions can be described as forms of boundary drawing practice; delineating the distinctions between this and that as a way of marking how they should be acted on. Any post-pandemic boundary drawing practices must start from a concern with the impact of exclusions. Rather than inheriting established boundaries and hierarchies of being, it should concern itself with what these structures obscure and erase. The aim of transformed machine learning will be to open up questions about borders and relations rather than to engage in brute force calculations that reinforce them. This requires the abandonment of AI as an authoritative engine for social ordering.

Solidarity is seeking to know the situation of the other and acting on it on the basis of a shared and interdependent being. To actively approach the world through knowing-caring is a form of solidarity. Knowing-caring is a way of knowing that doesn’t start with a separation between the knower and the known, but with an acknowledgement of co-constitution. This is what Isabelle Stengers calls caring cosmopolitics; being attentive and responding to the multiples of being with which we are entangled and co-constituted. Solidarity is also the political stance most strongly linked to the historical emergence of workers’ and people’s councils.

Alongside putting care in the spotlight, the popular response to Covid-19 has also seen a revival of solidaristic activity at community level, in the form of self-organised mutual aid. Much of the discourse in mutual aid groups, in between organising support around food and housing, is about how to avoid returning to the social neglect of ‘business as usual’ or worse, as neoliberalism imposes another round of punishing austerity. This chapter warns of the danger that post-pandemic inequality will be supercharged by technologies like AI that claim to manage risks and solve problems. It proposes people’s councils as a way to interrupt this hegemony with views from the community and workplace that prioritise care. It also suggests that this is part of a wider project of refiguring our scientific and computational structures as forms of ‘knowing-caring’.

To start with, we can examine every situation where AI or its ilk are offered as solutions and ask instead how risks and resources can be dealt with through a radical commoning. As Donna Haraway reminds us, our intra-actions and interdependencies stretch across vast fields of biota and abiota. Nevertheless “the doings of situated, actual human beings matter. It matters with which ways of living and dying we cast our lot rather than others”. Change starts with collectives who are prepared to take on the necessary activities of repair and resistance. The modelling which needs to take priority is not that delivered from on high by vast structures of computation but the modelling to each other of forms of mutual aid. Reclaiming political agency from engines of abstraction means starting from a standpoint of solidarity.
