Making AI work for us Part 1: AI and the effort of labour

Published in Disruptive Voices · 5 min read · Sep 9, 2021

This blog post is part of the UCL Public Policy, UCL Grand Challenges and British Academy project on AI and the Future of Work. This project aims to identify current knowledge within this space, seek questions in need of further investigation, and make recommendations for policy. As part of this, UCL’s Helena Hollis has been exploring what “good” work means alongside AI.

Image credit: iStock.com/metamorworks

What does it mean for work to be “good”, and how might AI technologies affect that? This is something I’ve been thinking about since joining the UCL Public Policy team to work on the AI and the Future of Work project. My interdisciplinary academic journey, from philosophy into information studies, has brought me to this project, and I’m interested both in how we can give our thinking about these issues philosophical underpinnings and in how we can translate that thinking into practical and socially just action. I hope to capture a bit of both here.

In the first of two blog posts, I reflect on an interview with Professor Geraint Rees, UCL Pro-Vice-Provost for AI. We discussed ways of conceptualising “good” work, and delved into how AI has the capacity both to free us up for better work and to degrade our working experiences. We also explored what good regulation might look like — but more on that in part two.

What is good work?

From his experience in medicine, Rees notes that there are striking differences in how healthy people, physicians, and people with serious illness weigh what “quality” in life really means. Similarly, someone working in an Amazon warehouse is likely to emphasise different facets of “good” work than someone working in the corporate office. In this way, the question “what is good work?” is akin to the question of what a good life is, and we have the whole history of philosophy to demonstrate how many varied answers can be given.

One potentially useful philosophical framing is Hannah Arendt’s distinction between labour, work, and action. To summarise coarsely: labour is what we do to survive, and it leaves nothing behind; work is constructing things, physically or socially, that have the capacity to last; action is defined by freedom, enabling originality and taking place in the public sphere.

AI is often cast as a liberator from labour, taking away the tasks we do not want to do but perform out of necessity, to make rent, to survive. It also offers to enhance work, by helping improve the things we make and the processes by which we make them. In its most utopian framings, AI replaces all necessary and instrumental work, leaving us entirely free to pursue lofty action.

But, of course, AI replacing labour is only a good thing if another means of survival is provided. And AI is not only shifting the balance away from labour and towards more interesting work; in many cases it is doing the very opposite. Nor is AI progressing evenly, leaving open the question of who will have to continue to labour while others are liberated.

Furthermore, many jobs may be labour and work at the same time, slipping between the cracks of my appropriation of Arendt’s categories. For instance, the failure of robots to flip burgers suggests this form of labour is here to stay at least a little longer. But the minimum-wage workers doing the burger flipping may be labouring to survive and also making things: the burgers may not last long, but something more than the filling between buns is created in the diners’ experience and in our food culture.

I think Arendt’s categories show how useful philosophy can be in grappling with big, important issues (other philosophical framings are available). This type of classification can help us understand the realms where we do, and do not, want AI to intrude, and how it might make work more or less good in a broad sense. Where we find liminal spaces of work that evade these classifications, we should shift our focus to the individual, and to what makes work good or not on a more case-by-case basis.

AI in capitalism

Arendt’s distinction between work and labour also highlights the way freedom is limited or lost when we focus on survival rather than on creating something. But survival can perhaps be extended beyond Arendt’s meaning: in affluent countries, labour is arguably for more than the literal bodily needs of shelter and calories, as we treat a plethora of consumer goods as necessities.

In our conversation, Rees suggested that the biggest risk with AI lies in the context within which it is developed. In Western societies, this is a backdrop of capitalism, which values work as a means to the end of consuming. Work can thus become subsumed into labour: the pleasure and value of making something becomes merely constructing something to sell, so as to buy. AI imitates us, replicating the systems within which it is designed and from which it is fed examples. There is therefore a risk that AI will automate and intensify our consumption, making our working experiences more about the stuff we can buy than about the purpose of the work itself.

AI could perhaps be used to increase our access to Arendt’s action — supporting our engagement with the public and communal sphere, intellectual pursuits, the creative arts, discovery science, and more. AI projects supporting such activities exist, but not among the applications we encounter on a regular basis. Where we see AI making inroads into everyday work, it is largely about optimisation in areas such as assembly lines, logistics, and advertising. AI enables the making and distributing of goods, and enables hyper-targeting of individuals for consumption, often exploiting our attention spans. As Rees noted in our conversation, “we may be seeing the capabilities of AI married to the more maladaptive aspects of capitalism”, and we should consider what this means for our balance of labour, work, and action.

Regulation

So how do we steer AI development in work towards providing us with less labour, more meaningful work, and space for action?

If we want to decouple AI development from accelerating consumerism, it has become increasingly clear that this cannot be left to companies operating under capitalist incentives. Some external regulation is needed. But, as Rees pointed out to me, “nobody says I want to grow up and be a regulator.”

Yet this work has the potential to offer great satisfaction, producing worthwhile outcomes at the forefront of ethics and society. Regulation could be a form of Arendt’s action, creatively operating for social good. Rees argues we need to make regulators the vanguard of how our societies are reshaped, turning regulation into a highly prestigious job. To me, this sounds like we need to think about how we ensure “good” work for regulators, as a first step towards ensuring good work for all of us.

I’ll pick up this thread in my next post — coming soon.

_________________________________________________________

About the author

Helena Hollis is a UCL PhD researcher in Information Studies, and is also working on the UCL and British Academy project on AI and the Future of Work.
