Forward with Virgilio: towards a more social, rather than artificial, intelligence

Alfredo Adamo
Published in Alan Advantage
Jan 4, 2019

In a previous article, we gave an overview of the notion of Society-in-the-Loop (SITL) and how it could support a more socially acceptable development of high technology. To sum it up, a SITL model extends a pre-existing paradigm, the Human-in-the-Loop (HITL) system, in which humans take an active role in supervising machines. However, recent technological developments have given machines capabilities whose social implications reach far beyond precise actions and well-determined calculations. Hence, we need supervision and decisional power far broader than what a single supervising individual could provide.

It is necessary, therefore, to create a loop of reciprocal supervision that also integrates society (that is, the rights and interests of users and third parties, alongside those of programmers and experts) in deciding what a machine, or a bot, is allowed to do and how it should be used. Integrating society into the loop, however, also means defining the social contract that unifies and regulates the interests of the various social actors. We must define what a “regulating society” should look like in the supervision of computer programs; in other words, we need to understand what would best satisfy the expectations of the various components of the social fabric.

To quote Iyad Rahwan (2017):

“We spent centuries taming Hobbes’s ‘Leviathan’, the all-powerful sovereign. We must now create and tame the new ‘Techno-Leviathan’.”

Below, we sum up the main reasons why a SITL system has not yet been implemented and outline some possible actions to make it happen, including a particular initiative of ours.

A fundamental problem is the disciplinary gap between machine programming and the legal and ethical values of the social sciences that are to be integrated into the loop. Although professionals and scholars in the various social and legal disciplines are able to identify possible machine misbehaviours, it is not always simple to articulate mathematically how a machine should behave so as not to inflict moral and ethical harm.

In brief, it is extremely complicated to quantify mathematically notions such as “fair”, “correct”, or “acceptable”, and to program algorithms so that they respect social actors’ expectations about machine behaviour. Furthermore, the human-computer relationship is characterised by reciprocal learning: human users’ digital skills are constantly evolving, changing what the wider social context deems acceptable.

Moreover, modern demography does not make social integration easier, since it involves millions of different interests in constant change. It is possible, though, to bring the social and technological realities closer together through various efforts. One of these is the use of crowdsourcing on the internet, namely collecting data and opinions via public surveys and statistics. This would create a database of qualitative data, gathered through machine-designed processes, for a precise evaluation of the different interests involved.

Rahwan provides an interesting example with a crowdsourcing experiment similar to a public survey, called Moral Machine. Participants are asked to answer ethical dilemmas such as: “if this self-driving car is doomed to crash, is it better that it kills a certain number of pedestrians (including, for instance, a pregnant woman) or its passengers (including other kinds of people)?”. Collecting these data can tell us a lot about how the wider social context thinks about such ethical and moral dilemmas.
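To make the idea of a crowdsourced “database of preferences” concrete, here is a minimal sketch in Python of how survey answers of this kind could be aggregated into preference shares. The record format, field names, and the preference_shares function are illustrative assumptions, not the actual Moral Machine data schema or pipeline.

```python
from collections import Counter

# Hypothetical survey records: each respondent picks which outcome the car
# should prefer in one dilemma (names and values are illustrative only).
responses = [
    {"dilemma": "pedestrians_vs_passengers", "choice": "spare_pedestrians"},
    {"dilemma": "pedestrians_vs_passengers", "choice": "spare_passengers"},
    {"dilemma": "pedestrians_vs_passengers", "choice": "spare_pedestrians"},
]

def preference_shares(records, dilemma):
    """Return the share of respondents behind each option of one dilemma."""
    counts = Counter(r["choice"] for r in records if r["dilemma"] == dilemma)
    total = sum(counts.values())
    return {choice: n / total for choice, n in counts.items()}

print(preference_shares(responses, "pedestrians_vs_passengers"))
# e.g. {'spare_pedestrians': 0.67, 'spare_passengers': 0.33}
```

Even such a simple aggregation hints at how collective preferences could be summarised and fed back into the design and supervision of a machine.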

Photo by Rohan Makhecha on Unsplash

However, analysing general social preferences is not enough for sustainable development. It is also necessary to control the behaviour of algorithms and of the programmers behind them. The possibility of a human audit has long been explored in the field of AI, starting with the HITL system. According to Rahwan, nevertheless, a human audit is not sufficient for constant supervision; he favours a more accurate algorithmic audit. In a paper by Amitai and Oren Etzioni (2016) we find a well-defined account of “oversight programs”: supervising programs that provide steady and precise control over other machines’ performance, not only during an initial simulation or in a virtual environment. According to them: “To ensure proper conduct by AI instruments, people will need to employ other AI systems.”
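As a deliberately simplified illustration of what an oversight program might check, here is a minimal sketch in Python of a monitor that audits another system’s decision log for disparate approval rates across groups. The log format, the flag_disparate_impact function, and the 80% threshold are assumptions chosen for the example, not a method prescribed by the Etzionis.

```python
def flag_disparate_impact(decisions, group_key="group", outcome_key="approved",
                          threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-treated group's rate (the classic four-fifths rule of thumb)."""
    tallies = {}
    for d in decisions:
        total, approved = tallies.get(d[group_key], (0, 0))
        tallies[d[group_key]] = (total + 1, approved + int(d[outcome_key]))
    rates = {g: approved / total for g, (total, approved) in tallies.items()}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical decision log produced by the audited system.
log = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
]
print(flag_disparate_impact(log))  # {'A': False, 'B': True} -> group B flagged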

Relying on computers to supervise other computers, however, is rather circular. It does not by any means solve the main puzzle of our research, namely the integration of the human component and the humanities into the hi-tech universe. Human supervision of computer algorithms remains necessary in any case.

To better identify the aspects that could ease the diffusion and adoption of a SITL approach, Alan Advantage has launched the research project “Virgilio” in collaboration with one of its main scientific partners. Holistic and innovative in both method and content, Virgilio is a methodological framework that moves beyond traditional analysis methods, allowing Alan Advantage to provide consulting services for the adoption of AI technologies in the business environment.

Three perspectives (Company View, Function View, and People View) allow us to build a framework that includes an accurate technical and financial analysis while also stressing the importance of human resources and of the company’s approach to new AI systems. Indeed, we believe the social aspects inherent in the SITL approach are indispensable in the business environment, for companies of every size.

Photo by Adam Winger on Unsplash

Like Virgilio (Virgil) in Dante’s Divine Comedy, our methodological framework serves as a guide through the strategic decisions involved in investing in AI technologies. It allows us to provide consulting services that balance new investments, both improving pre-existing strategies and laying the foundations of new systems within the company.

The implementation of such a framework, besides easing the decision-making process, reflects our vision: involving the wider social and business context makes it possible to exploit the power of Artificial Intelligence more effectively. Introducing new technologies does not necessarily mean sacrificing the human component; on the contrary, a movement of reciprocal learning can improve both parties involved. Virgilio is only one of the many initiatives Alan Advantage is putting forward, and more will follow.

Bibliographic References

Etzioni, A.; Etzioni, O. (2016). AI assisted ethics. Ethics and Information Technology, 18(2), 149–156.

Rahwan, I. (2017). Society-in-the-Loop: programming the algorithmic social contract. Ethics and Information Technology. DOI: 10.1007/s10676-017-9430-8
