Europe wants to decide who will pay for the damages of AI

Vincenzo Tiani
Published in DigitalExplained
Sep 14, 2020 · 5 min read

On 19 February, the European Commission published its white paper on Artificial Intelligence. A European strategy on the development of AI had been long awaited, having been announced as a priority of the new von der Leyen Commission. Attached to this policy paper is another, less discussed but equally crucial, report on the safety and liability implications of artificial intelligence, robotics and the Internet of Things (IoT).

For some time now, the question has been asked: whom should the victim of an accident caused by a self-driving car turn to, the company that produced the car or the company that supplied the artificial intelligence? The question, then, is whether the current rules on product liability and safety can adapt to the technological evolution we are experiencing, or whether they need to be, if not rewritten, at least updated. Clear rules are of paramount importance to ensure that neither citizens nor businesses are afraid to fully embrace technological innovation.

There are many critical issues to deal with. On the one hand, the opacity of the decisions taken by artificial intelligence and the lack of transparency of the algorithms behind it make it difficult to understand how to prevent potentially harmful decisions. On the other hand, there is the problem of cybersecurity, which will become a central area of investment as more and more devices are connected and communicate with each other constantly.

However, it should not be forgotten that the entry of artificial intelligence into the production process will also benefit the safety of products and services, thanks to the possibility of analyzing and exploiting large amounts of data on their performance. Interconnected autonomous vehicles will lead to a reduction in accidents, which are largely caused by distraction and human error. Tesla already updates the software of its cars like a smartphone; the same will be done for all other connected products, without having to go through the dealer, but comfortably from home.

New ways of looking at safety

The European Union already has a number of rules that have harmonized the quality standards of consumer goods. The General Product Safety Directive of 2001 ensures that goods circulating in the European market are safe and not harmful to health or the environment. There are also special rules for the automotive and transport sectors and for medical devices, which already take into account the technology integrated within them.

As we said, the autonomy that distinguishes the decisions of artificial intelligence can raise quite a few problems in terms of liability. The general rule is that the producer must take into account the risks the product may pose throughout its entire life cycle and must adequately warn the consumer of those risks. With artificial intelligence, which is dynamic and keeps changing as it learns, this risk calculation will have to be repeated throughout the product's life cycle. That is why vigilant human oversight will be essential to prevent unwanted and unforeseen effects.

Other new risks concern the mental health of those who interact with devices or robots that integrate artificial intelligence. Has thought been given to the long-term impact on the mental health of people who constantly use Alexa, Google Assistant or Siri? And to how that impact differs between younger and older users?

As mentioned at the beginning, further problems arise with the interconnection of intelligent objects, or even just with the integration of artificial intelligence into other products. In this case, the risk assessment must also take into account the plausible interactions that smart things will have with each other. In the transport sector, for example, the risk of each individual component and of the final product is assessed, as well as the product's interaction within the wider transport system.

If a software update changes the product substantially, a new risk assessment will be required. In the Tesla case mentioned earlier, an update allowed the car to reach 100 km/h in less time; an update of this kind may need to be reviewed in order to recalculate the risk to users.

Finally, the principle that the manufacturer who places the product on the market is responsible for the entire chain remains solid, though it may need updating and revision.

Liability

So let’s go back to the example of the car accident. Today the owner of the car is required by law to be insured; this obligation does not apply to the car manufacturer. In a future of autonomous cars, this situation may not change. If I have an accident, I will be compensated by the insurance company, which will in turn be able to claim back from the manufacturer of the car in the event of a defect. But if the introduction of artificial intelligence makes it harder to understand who is ultimately responsible, insurance companies may be less inclined to underwrite certain policies, with a consequent impact on the adoption of these technologies. For this reason, it is essential that legal liability always be traceable.

However, in the consumer sector, the report highlights that, given the technological complexity of these new devices, it may be too burdensome for the consumer to prove that the product is defective. That is why one proposal under consideration is to reverse the burden of proof: if the manufacturer has not complied with safety requirements, the product is presumed defective unless proven otherwise.

As far as cybersecurity is concerned, the software provider cannot predict every exploitable flaw but has a duty to issue updates to keep the product safe; in the future, its liability could be reduced if the user has failed to install those updates. This means that basic skills, such as the ability to update phones, tablets and laptops and their respective applications, will be increasingly necessary for users who do not want to see their compensation reduced.

Finally, there is the problem of the lack of transparency in algorithmic decisions. The Commission is considering whether it would be appropriate to apply the principle of strict liability to providers of artificial intelligence solutions. This principle normally applies to car owners in the event of an accident, or to those carrying out dangerous activities such as operating a nuclear power plant. In that case, the burden of proof is reversed in favour of the victim, since the AI provider may not want to disclose the data that could prove liability. An obligation to take out insurance, as already required for motor vehicles, might also be introduced.

We will see in the coming weeks whether and how the Commission will take account of suggestions from civil society and other stakeholders.

Originally published at https://www.linkedin.com.
