Freedom in the Fourth Industrial Revolution

Luca Collalti
Published in FARSIGHT · Nov 11, 2020
Designing new technologies that aim at liberating human beings is first and foremost a matter of how to define and re-define human work and freedom in the age of automation.

Automation is arguably one of the most discussed and controversial topics of our time, and hardly a week passes without new and surprising applications of collaborative robots (cobots) and AI being covered by the media. One thing is certain about this new industrial revolution (the fourth one, also called Industry 4.0): it is a growing, global phenomenon, and it is not going to stop anytime soon. At the same time, though, the impact it will have on the job market and society at large is still unclear.

Among institutional actors and tech companies alike, there is a widespread expectation that, if the transition to Industry 4.0 is handled correctly, automation will create more jobs than it renders obsolete, and humans and machines will peacefully complement each other, just as the narrative about the previous industrial revolutions goes.

Both the marketing material of many cobot producers and the reports from actors such as Manpower and the World Economic Forum (WEF) present scenarios in which machines will increasingly take over repetitive, wearing, and dangerous tasks, and humans will be free to pursue more skilled and fulfilling jobs, all while productivity and efficiency increase.

Clear win-win.

Or not? Because not everyone shares this optimistic view. For instance, in his bestseller Rise of the Robots, technology expert Martin Ford argues that, unless significant structural changes are made to our societies, this new wave of automation will inevitably turn capitalism as we know it into a dystopian techno-feudal system in which the rich control both capital and labour (collapsed together into the machines) and most people will simply be left with no bargaining power in economic relations.

These two narratives, the optimistic and the dystopian, seem to be opposites, but they actually share a common starting point: namely, that the success or failure of the transition towards Industry 4.0 will depend on how society adapts to automation, rather than the other way around. Indeed, the WEF report referenced above includes recommendations for governments, for industries that will deploy robots, and even for workers on how to handle the transition to this new wave of automation. But what about the companies that develop these technologies to begin with? Do they not bear any responsibility?

It was with this question in mind that I recently conducted anthropological fieldwork in one of the most successful firms of the blossoming Odense Robotics cluster, located in Denmark.

During my research, I learned that, whereas the form of robots’ liberating potential appears clear in the minds of roboticists (i.e. taking care of undesirable tasks), the engineers’ vision of what this new freedom looks like (from the point of view of a worker whose job is automated) is more blurred and mostly limited to mentions of the possibility of re-skilling or up-skilling the workforce. This is not only because robots can be deployed in many different contexts, which makes it difficult to imagine a single way for such freedom to actually be practiced, to borrow philosopher Michel Foucault’s term. It is also because the responsibility of envisioning and realising such practices of freedom (basically, how to do freedom) is seen by engineers as external to them. Hence, the roboticists themselves share the idea that others, typically decision-makers and industrialists, will not only have to prevent the robots from causing unemployment but will also have to define the practices of freedom through which unemployment can be avoided to begin with.

The problem with this view (and with the popular narrative about previous industrial revolutions) is that historical analyses of the implementation of industrial machines show that down-skilling has often been an effect of automation, sometimes even an intentional one on the part of the industrialists. Likewise, contemporary anthropological research shows how the introduction of new technologies, in both the private and public sectors, is often an imposed, top-down process that can be a threat not only to employment but also to employees’ professional identities and job satisfaction.

Embedded in this widespread omission of the responsibility of tech companies lies the core assumption that technologies in themselves are neutral tools with no built-in politics or ethics, which is why the focus is often only on how they are used, rather than on how they are designed.

But this assumption is simply not true.

Since the early 1980s, scholars within Science and Technology Studies (STS) and the Philosophy of Technology have provided compelling evidence that technologies themselves are always political and, as such, always bear ethical implications. This is because designing any artefact is not just a matter of problem-solving, of the pragmatic “how do I get X to do Y” type of questions in which engineers usually frame their work; it also includes establishing what X and Y are to begin with, and what they ought to become.

In the case of automation, then, designing new technologies that aim at liberating human beings is first and foremost a matter of how to define and re-define human work and freedom, and of how to inscribe these views and values in the different technical features of the robots.

However, while at least some conversations about ethics and politics are happening in the field of AI, my fieldwork taught me that the same does not seem to be true for robotics. Indeed, concerns about the ethical and political implications of robots were often met with scepticism and perplexity by my informants, because they usually could not see the need for, nor imagine a place for, such concerns in the way technological development at large is currently structured, conceived, and incentivised. Therefore, in their eyes, the non-conformity of such considerations with the current system makes them unworthy of even being discussed.

If we have learned anything from the development of social media platforms, whose public perception largely shifted from democracy-enhancers to democracy-underminers in less than a decade, it is that the ethical and political implications of technologies must be addressed, no matter how difficult that might seem, and the sooner the better.

This is not to say that the task is easy – very much the opposite. That is why this responsibility cannot be placed solely on the shoulders of corporations or public institutions, nor on the engineers alone.

Rather, a truly successful and liberating version of Industry 4.0 requires a multidisciplinary effort that includes all of these actors plus, I believe, a particular contribution from the Social Sciences, and Techno-Anthropology in particular. This contribution consists not only of observing and reflecting on the different forms in which questions about the future of work and the role of technologies in society will present themselves in various contexts, but also of bringing attention to such questions and providing engineers and their companies with some of the tools to address them.

Originally published in Scenario Digest

Luca Collalti

I am a Techno-Anthropologist with a strong interest in Science and Technology Studies and the politics of techno-science.