Self-driving cars between hype and realities
By Sara Sbaragli (CNR-ISTC), Vincenzo Punzo (UNINA), Giulia Vannucci (CNR-ISTC) and Andrea Saltelli (CNR-ISTC)
One day in a not-too-distant future all traffic might be AI-driven, with humans reading the news, sleeping, or otherwise engaged in the comfort of their cars or, better, in the comfort of a shared vehicle. Important cultural, technological, and regulatory obstacles will have to be surmounted to achieve mandatory, AI-only traffic, with humans confined to drive in special resorts. In the meantime, the future will likely offer situations where AI and humans will coexist in the same driving environment.
Will this ever be possible? Human-driven and AI-driven cars on the same road? Everywhere in the world, from Berkeley to Naples?
The problem with this coexistence lies in the variety of driving cultures and styles, which remain obstinately local in spite of globalization. An AI-driven car is more likely to negotiate its way smoothly through the traffic of well-behaved Monaco than through that of Mumbai or Mexico City.
The main problem here is the reflexivity of human agents. Where aggressive driving styles prevail, a human driver may make life hard for automated vehicles by exploiting AI’s cautious behaviour, for example by executing sharp cut-in manoeuvres that push the automated car back into the queue. This would remain the case unless the AI were allowed to negotiate its road space with human drivers, accepting the resulting collision risk.
This is where things could get messy, with AI over-compensating by adopting reckless manoeuvres to ‘win’ against unruly humans, in sci-fi, Black Mirror-like scenarios of killer cars or lorries. Something of the sort happened a few years back, when a Microsoft chatbot took just a day to turn racist and had to be hurriedly taken offline. Of course, technology will kick in to prevent this from happening. Assuming, then, that car makers take up the challenge of human-AI coexistence, how will they manage an AI training process that needs to adapt to local scales, possibly at the level of the region rather than of nation states? In Italy, drivers in Bolzano do not resemble those in Milan, who in turn differ from those in Rome, Naples and Palermo.

The topic of sensible training and testing of AI in everyday situations, away from the ‘edge’ life-or-death settings so liked by ethicists engrossed with runaway trolleys, is tackled in a recent piece in the Proceedings of the National Academy of Sciences, a US-based journal. How can one test the “common sense” of an automated car? While noting that reciprocal customization between humans and automated drivers will likely be needed, the piece offers useful advice to would-be testers: do not take humans as a gold standard; consider the trade-off between human authority and machine autonomy in view of safe outcomes; and carefully consider the local conditions of culture, driving style and road morphology.
These calibrations will require AIs to interact with models of human drivers built especially for this purpose. The apparently insurmountable task of developing such models to test AIs has been taken up by a European project named i4Driving, whose objective is to develop models of human driving in naturalistic conditions for use by car makers, regulators, and academia in their testing, with the ultimate goal of accrediting AI systems for deployment on the market.
The vision of i4Driving is to lay the foundation for a new industry-standard methodology to establish a credible and realistic human road-safety baseline for the virtual assessment of cooperative, connected, and automated mobility systems (CCAM [1]). The project rests on two central ideas: a simulation library that combines existing and new models of human driving behaviour, and a methodology to account for the huge uncertainty in both human behaviours and use-case circumstances. Treating uncertainty is crucial to the success of the technology, and to this aim the project calls on experts from different disciplines to judge modelling assumptions and outcomes. The project team is simulating (near-)accidents in multi-driver scenarios, drawing on many data sources, advanced driving simulators, and field labs in an international network [2]. i4Driving offers a proposition for both the short and the longer term: a set of building blocks that pave the way for a driving licence for AVs.
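The role of uncertainty in such virtual safety assessments can be illustrated with a toy Monte Carlo sketch. The scenario and the distributions below are purely illustrative assumptions, not i4Driving’s actual models: a lead vehicle stops abruptly, and human parameters such as reaction time and braking capability are sampled from assumed distributions, turning a single scenario into a distribution of outcomes from which a crash probability can be estimated.

```python
import random

def crash_in_scenario(gap_m, speed_ms, rng):
    """One Monte Carlo draw: the lead vehicle stops abruptly and the
    follower reacts after a sampled reaction time, then brakes.
    Distributions are illustrative assumptions, not i4Driving data."""
    reaction_s = max(0.3, rng.gauss(1.0, 0.3))   # human reaction time (s)
    decel_ms2 = max(3.0, rng.gauss(6.5, 1.0))    # braking capability (m/s^2)
    stopping_m = speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)
    return stopping_m > gap_m                    # crash if we overrun the gap

def crash_probability(gap_m, speed_ms, n=20000, seed=42):
    """Estimate crash probability over n sampled human drivers."""
    rng = random.Random(seed)
    crashes = sum(crash_in_scenario(gap_m, speed_ms, rng) for _ in range(n))
    return crashes / n

# Tighter following gaps (e.g. aggressive local driving styles)
# raise the estimated risk at the same speed (25 m/s = 90 km/h).
print(crash_probability(gap_m=75, speed_ms=25))
print(crash_probability(gap_m=90, speed_ms=25))
```

Even this toy example shows why expert judgement on the assumed distributions matters: the estimated risk is driven as much by the tails of human behaviour as by the scenario itself.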
Uncharacteristically for a technology-oriented project such as i4Driving, the study will also look at the cultural dimension of the broader issue of autonomous driving, studying the project’s engineers at work in relation to their world-views, expectations, and possible biases. For this reason the technologists in the i4Driving team will be asked general questions such as: “In the likely event that this technology will require investments in public infrastructure, will society support them? A huge amount of big data will be collected about human drivers. How will these data be used by authorities, advertisers, hackers?”. These questions are not for i4Driving to solve, but the modellers’ vision on the topic is needed as part of a reflexive sociological component built into the project.
Certainly, in the current state of the technology, many criticisms are raised about the actual feasibility of self-driving cars. Despite a decade of advertising about their imminent commercialization, such vehicles are to date still deemed unsafe, as well as an obstacle to ordinary traffic or, for example, to emergency-vehicle operations. Tesla, which promised “full self-driving capability” with apparently ‘staged’ videos, had to argue in court that this marketing claim did not amount to fraud. The recent news that GM’s Cruise cars were supported by “a vast operations staff, with 1.5 workers per vehicle”, with operators intervening to assist the cars every 2.5 to 5 miles, is likely to stoke additional controversy.
It would seem, then, that conflicting visions battle over this issue, as over other new technologies linked to artificial intelligence. While technology has made significant progress towards viable automated vehicles, society will also need to play a vigilant role regarding the promises of manufacturers. As noted by a scholar, automated vehicles will not be defined by their supposed autonomy; they will be defined by how society negotiates political questions such as: Who wins? Who loses? Who decides? Who pays? By keeping both sides of the issue open to investigation, the i4Driving project offers a contribution.
References
[1] Cooperative, connected and automated mobility: In many respects, today’s vehicles are already connected devices. However, in the very near future they will also interact directly with each other as well as with the road infrastructure and other road users. This interaction is the domain of Cooperative Intelligent Transport Systems (C-ITS), which will allow road users to share information and use it to coordinate their actions. This cooperative element is expected to significantly improve road safety, traffic efficiency and comfort of driving, by helping the driver to take the right decisions and adapt to the traffic situation.
[2] The international network includes partners in the US (NADS facility), Australia (UQ advanced driving simulator and TRACSLab connected driving simulator facilities), China (Tongji Univ. 8-dof driving simulator and large-scale field lab) and Japan (NTSEL), as well as several project laboratories (universities and research labs, OEMs and Tier 1 suppliers, vehicle regulators, type-approval authorities, standardization institutes, and insurance companies).