Democracy and AI: integrating Society in the Loop
The introduction of Artificial Intelligence into various aspects of our lives, from work to the home, in the form of voice assistants and smart gadgets that adjust domestic lighting or heating, has brought a new wave of enthusiasm for these technological marvels. The market constantly craves brand-new, avant-garde products, and countries and companies compete to lead in unique innovations, contributing to the fast techno-scientific development that characterises our age.
However, in a previous article we saw how uncontrolled technological development, pursued only for the sake of excelling in the hi-tech field, neglects the main purpose of such development: supporting humankind in the first place. Indeed, the unrestrained race towards state-of-the-art innovation raises more than a few doubts about ethics and effectiveness. How can a robot, programmed according to a specific algorithm and precise code, know what is right to do, beyond performing the actions it has been told to perform?
Thus, in recent years, many efforts have been made to reintroduce human supervision into the robotic field. One of the goals of robotics theory has been the inclusion of humans in a cycle of constant evolution, through the concept of Human-in-the-Loop (HITL): much closer human supervision of robotic performance in a movement of reciprocal learning. In the field of supervisory control, the idea of strengthening the human component in mechanical processes had already been under way for decades. But it is only recently, with the evolution of human-computer interaction, that we have begun to understand the relevance of human presence in AI machine learning processes.
Integrating humans in the loop means emphasising greater control of a process in which both the AI and the human supervisors learn reciprocally from each other. An extremely simple example is the spam folder in one's email inbox: by labelling unwanted emails as "spam", the user teaches the computer which emails, from which addresses, it should automatically move to the spam folder. In this way, humans take part in the learning process of the machine's algorithm, which, in turn, learns how to serve humans best.
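To make this loop concrete, here is a minimal, purely illustrative sketch in Python. It is not how any real mail provider works; it simply shows the HITL pattern the paragraph describes: every human label immediately updates the model, and the model's future decisions reflect that feedback. All names here are hypothetical.

```python
from collections import defaultdict

class SpamFilter:
    """Toy spam filter that learns from human 'spam' labels (a HITL sketch)."""

    def __init__(self):
        # Counts of how often each word has appeared in spam vs. legitimate mail.
        self.spam_counts = defaultdict(int)
        self.ham_counts = defaultdict(int)

    def learn(self, email_text: str, is_spam: bool) -> None:
        """Called whenever a human labels a message; the model updates itself."""
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in email_text.lower().split():
            counts[word] += 1

    def looks_like_spam(self, email_text: str) -> bool:
        """Crude score: does the message share more vocabulary with spam?"""
        words = email_text.lower().split()
        spam_score = sum(self.spam_counts[w] for w in words)
        ham_score = sum(self.ham_counts[w] for w in words)
        return spam_score > ham_score

# The loop in action: the human corrects the machine, the machine adapts.
f = SpamFilter()
f.learn("win a free prize now", is_spam=True)          # human flags as spam
f.learn("meeting agenda for tomorrow", is_spam=False)  # human keeps this one
print(f.looks_like_spam("free prize inside"))          # True: it has learned
```

The point is the division of labour: the human supplies judgement (the labels), the machine supplies scale (applying that judgement to every future message).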
Furthermore, according to Iyad Rahwan (2017), professor at the MIT Media Lab, integrating humans in the loop serves two important functions: 1) humans can act as supervisors in cases of machine misbehaviour (such as the necessary oversight of military drones); 2) humans can be held legally accountable for law violations and damages to other humans, minimising the probability of misbehaviour at the expense of third parties.
Nonetheless, the European Union plans to draft a body of civil law that could make computers and bots legally accountable for their "actions". A motion was approved in 2016 aiming at the attribution of legal responsibility to computers and the establishment of a European agency for the regulation of hi-tech production in the Union, alongside several ethical considerations on the use of machines.
However, what happens when a computer is entrusted with a task with broader social impact, such as an algorithm that could influence mass political preferences or mediate resources and labour within society? Rahwan has therefore proposed an extension of the HITL system: if a HITL AI is based on the judgement of an individual human for single, precise computer tasks, an AI with broader social implications must be based on the judgement of the wider social context. He thus introduces the Society-in-the-Loop (SITL) system. The main difference is that a SITL system supervises not only the machine's performance and behaviour, but also the moral and ethical implications of its code and algorithms, safeguarding users' rights.
Society, though, is made up of individuals who do not always agree on what is right and what is not. It is therefore necessary to mediate between the different interests of the various social actors, in both the governing and the governed spheres. This problem arises the moment we lack a precise definition of the social contract regulating the society we live in. To solve the puzzle, we should take a step back in time and try to define the origin of the social order as we know it today.
A first step in social contract theory was taken by Thomas Hobbes, who, in his Leviathan, located the social order in the ability to invest a third party (the institutions) with the contractual power to regulate individuals' lives within the social system. Hobbes saw the origin of this order in the compromise individuals had to accept: entrusting part of their freedom to a central institution, embodied in the figure of the undisputed monarch (hence the Leviathan), in order to live peacefully in a lawful society. It was Rousseau who saw the central government as a regulating power legitimised by the citizens themselves, serving not only to ensure peace and order but also to express the general will of the people.
But how does this philosophical detour relate to our search for social supervision of the robotic world? Because, as Citron and Pasquale (2014) argue, in much the same fashion, whoever decides and programmes a machine's algorithms and functions has a significant influence on other people's lives: without democratic regulation of the hi-tech world, we could be heading towards "a new feudal order", in which a restricted circle of people decides for the many.
It is necessary, therefore, that the hi-tech world become a projection of the actual democratic social system. A SITL model (which Rahwan sums up in the equation "SITL = HITL + Social Contract") can extend decision-making power to more people, not only to a single supervisor. A HITL model does not cover two fundamental aspects that a SITL model does: 1) common-sense choices an AI cannot face on its own, since it does not possess the morality of a human mind (such as choosing between efficiency and safety, or favouring what is right to do even when it is not mathematically optimal); 2) understanding the social costs and benefits of introducing a new innovation to the market. Whether a self-driving car should give more weight to protecting the driver or a pedestrian, or the other way around, cannot be decided by the machine itself. It has to be programmed beforehand, as the sketch below illustrates.
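A minimal, purely illustrative Python sketch of this last point: the trade-off between occupant and pedestrian is not something the car's software discovers, it is a weight encoded beforehand. In a SITL model, that weight would be the outcome of a societal decision rather than a private engineering choice. Every name and number below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EthicsPolicy:
    """Hypothetical policy object: set by society/regulators, not by the car."""
    pedestrian_weight: float  # relative weight given to pedestrian risk
    occupant_weight: float    # relative weight given to occupant risk

def choose_manoeuvre(policy: EthicsPolicy, options: list[dict]) -> dict:
    """Pick the manoeuvre with the lowest policy-weighted expected harm.

    Each option carries estimated risks in [0, 1]; the weights come from
    the pre-programmed policy, not from the vehicle's own 'judgement'.
    """
    def weighted_harm(opt: dict) -> float:
        return (policy.pedestrian_weight * opt["pedestrian_risk"]
                + policy.occupant_weight * opt["occupant_risk"])
    return min(options, key=weighted_harm)

# The societal choice happens here, before the car ever drives:
policy = EthicsPolicy(pedestrian_weight=1.5, occupant_weight=1.0)

options = [
    {"name": "brake hard", "pedestrian_risk": 0.1, "occupant_risk": 0.3},
    {"name": "swerve",     "pedestrian_risk": 0.4, "occupant_risk": 0.1},
]
print(choose_manoeuvre(policy, options)["name"])  # -> "brake hard"
```

Change the two weights and the same situation yields a different decision, which is precisely why Rahwan argues those values belong to the social contract, not to the individual programmer.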
We should move towards a more democratic, responsible vision of the hi-tech world, integrating the various social interests in a loop of reciprocal learning. Human supervision is not enough. It’s necessary to supervise the humans behind the machines as well.
According to an article published last year in the MIT Technology Review, Italy has established itself as a hi-tech exporting country. The skills of Italian engineers and programmers are renowned worldwide and, at least until last year, Italy was Europe's second-largest manufacturer, with some of the most sustainable manufacturing on the continent; Europe's third-largest exporter of flexible manufacturing technologies, including robotics, with US$9.6 billion in exports to the United States alone; and one of only five nations worldwide with a manufacturing trade surplus exceeding US$100 billion.
Alan Advantage, with initiatives such as the Re:Humanism Art Prize, proposes a cultural change. Given Italy's exceptional potential, it encourages outstanding talent and a deeper interest in humanistic studies, which are extremely relevant to studying, shaping and inspiring a clearer definition of the social contract, as discussed above. Italy has the potential not only to propose itself as a new manufacturing reality, but also to lay the foundations of a new model, aimed at wider social inclusion in its innovations.
Bibliographic References
Citron, D. K., & Pasquale, F. A. (2014). The scored society: due process for automated predictions. Washington Law Review, 89, 1–33.
Rahwan, I. (2017). Society-in-the-Loop: programming the algorithmic social contract. Ethics and Information Technology. DOI 10.1007/s10676-017-9430-8