Mom, the Robot Ran over the Dog

Jessica Lis
Published in In The Ether
Jul 15, 2019

Artificial intelligence presents many fascinating ethical and regulatory challenges. The concept of liability has attracted a lot of discussion and philosophizing.

AI has not reached, let alone surpassed, human intelligence levels. Hence, it is inevitable that a system or device using AI will make mistakes and cause accidents. In the event this mistake or accident causes harm, who is held accountable? Which actor along the AI supply chain is responsible?

One system of regulation will likely not be effective for all types of AI and a combination of regulations will have to be implemented. For example, an autonomous vehicle that kills a pedestrian will require a different regulatory system to a bot that presents a biased decision in an automated job application.

In the context of AI, the subject of liability is expected to shift away from the user and towards the product itself, or the actors responsible for the product. These changes will create a new technological and economic landscape that will offer the opportunity to redesign regulatory and liability rules.

A nimble balance between innovation and safety will be necessary in order for the liability regime to enable the continuous development of new technologies. This post will examine the issue of liability in artificial intelligence. It will explore the question of where legal responsibility falls when an AI system causes harm.

Current Landscape

This issue is significant because it is heavily entwined with other technology policy uncertainties. Failing to hold artificial intelligence accountable under some type of liability regime could lead to the normalization of various associated problems, such as human rights violations resulting from racial bias, loss of life under a lax autonomous vehicle regulatory landscape, and severe health issues resulting from a misdiagnosis or incorrect treatment.

Liability is an important component of the larger issue of technology ethics, and more specifically AI ethics. As Google’s failed ethics advisory group illustrates, there is significant disagreement surrounding these issues, and reaching a comprehensive solution will be a difficult challenge.

The European Parliament adopted a resolution in February 2017 that contained recommendations for EU-wide legislation to regulate “sophisticated robots, bots, androids and other manifestations of artificial intelligence,” as well as to establish the relevant legislative instruments to handle the liability for their actions.

With the exception of the application of precedent, including the debate about whether a products liability or another tort liability framework is more applicable, there is no existing overarching regulatory framework addressing this issue. Law professionals, insurers and regulators all face challenges when presented with this dilemma. There is a gap in research regarding the treatment of artificial agents and hence, a lack of regulatory guidance in this area.

Most insurance policies in place today that protect consumers against liability do not address the unique risks that arise in the context of AI, such as systems acquiring new datasets in the future and evolving into different systems, or the inability to trace a decision back to the original algorithm due to the ‘black box’ effect. There are many relevant actor groups with an interest or involvement: governments and regulators, legal professionals, insurers, developers, manufacturers and end users.

This dynamic can quickly become complicated. If liability is placed on the developer, they may be held personally liable in criminal or civil court. In this case, causation and intent also become increasingly complicated to determine in the context of an AI system.

It is in the interest of insurers to adapt their policies in order to guarantee a high level of client protection. Many types of insurance policies are affected: liability, property, auto, health and life. The impact of new technologies on each of these traditional insurance categories shows that a new type of risk assessment system is needed, as the current one continually chases new uncertainties.

Along with a new risk assessment system, new models of liability coverage will also likely be adopted by insurers. There will likely be a significant shift away from the consumer and towards the manufacturers and developers. Implementing joint and several liability would mean that the actor with the largest amount of resources pays most of the damages, which is often the manufacturer. The three main liability regimes (strict liability, negligence and no-fault mandatory insurance) cannot fully cover AI systems as they currently exist.

Determining Harm

In the context of harm related to products, claims often pertain to physical injury, property damage and the loss of the product itself as a result of malfunction. This is applicable to AI-based programs and devices, but not comprehensive because it does not include the factor of intelligence, which has the capacity to widen the potential harms. Normal harm liability ceases to apply because the product is no longer just a physical or digital object, but becomes somewhat of an active subject.

AI-based products could lead to new types of harms that current laws, including tort and product liability laws, have not yet encountered. For example, this could include privacy-related harms such as an AI-based bot sharing sensitive personal data with a third party without your consent.

Another new type of harm that could be considered within an AI-based product context is autonomy-related harm. An example of this would be the loss of autonomy that could be experienced if an AI-based home safety system falsely believes a family member is a burglar and locks the person in the house. AI-based products can also cause economic harm in various ways.

Liability Options for AI

Liability in AI systems will likely take many different forms, each specialized to the needs of particular systems. BPE Solicitors, a UK law firm specializing in commercial law, have begun to outline several relevant liability options that could be applied to AI systems. Several options from other sources are also outlined here. These are presented in no particular order of significance and come from the US and UK legal systems.

(1) Data protection regulation offers consumers some protection against the way automated decision-making systems use their personal data. This is intended to provide transparency and reduce bias, as individuals can access information about decisions concerning them that are made solely on the basis of automated processing.

(2) Contractual liability may also be applicable to AI systems in certain cases. For example, if a consumer purchases a smart fridge and it malfunctions, damaging high-value food items, they may be able to claim compensation. This would be indicated in the terms and conditions in the product documentation and could also include breach of warranty.

(3) Another form of liability that may be applied to AI systems is negligence, which can likely be applied to the widespread use of AI products because “liability in negligence arises when there is a duty of care.” This, however, does not address the issue of who owes that duty, which is the most prominent question in this dilemma. Negligence can be applied to software products in general.

For example, when software is proven defective or causes injury, negligence would be applied rather than criminal liability, which is discussed in the next option. Three elements must exist for a negligence claim to prevail: “the defendant had a duty of care, the defendant breached that duty, and that breach caused an injury to the plaintiff.”

This can also apply to AI products because manufacturers clearly have a duty of care to the customer and could breach that duty in many ways. The third point is often debated: does the AI system recommend an action or actually take action? This has the potential to be adapted to AI-based systems.

(4) The extent to which an AI system could be given a legal personality could influence how vicarious liability and criminal liability are determined. Although it currently seems impossible to attribute agency or personhood to AI-based products, a person’s liability can be extended beyond their own actions. This could present the opportunity to hold the individuals or company in charge accountable for the system’s actions through vicarious liability, in a similar way that an employer may be responsible for the actions of an employee. In the UK, a company can be held liable for ‘corporate manslaughter’ under the Corporate Manslaughter and Corporate Homicide Act 2007.

(5) Insurance presents an appropriate avenue for AI products that would normally be insured. For example, the “Automated and Electric Vehicles Act 2018” outlines how insurance can be applied to autonomous vehicles.

(6) Another relevant form of liability is animal liability under the Animals Act 1971, which states that an animal’s owner is liable for resulting damage. This could be applied to physical forms of AI (for example, robots) and would be a form of strict liability, with no need to prove negligence or intent.

(7) Products liability presents an adequate starting point for addressing the liability of AI-based products. This legal regime has three main ‘triggers of liability’: manufacturing defects, design defects and failure to duly instruct or warn consumers. These would have to be significantly adapted in order to address the question of foreseeability in AI products.

(8) AI-based products could also potentially be considered under the ‘abnormally dangerous goods’ liability regime. This could be applicable to certain AI systems that are considered abnormally dangerous by law or when AI-based products are used in environments where potential harm is relatively high. This could include AI-based medical equipment treating a patient without human supervision.

(9) The perpetrator-via-another legal model states that if a crime is committed by a mentally deficient person or a child then the perpetrator is not held liable and is considered an innocent agent due to the lack of capacity to have a mental intent. The more relevant part of this is that if said perpetrator was instructed by another person, that person is held criminally liable. If this model is transferred to be applicable to AI systems, then the AI would be considered the innocent agent and the software developer or the user could be criminally liable for the offense.

(10) The natural-probable consequence legal model is used to prosecute accomplices to a crime if the accomplice was aware that criminal activity was taking place. In this model, users or developers could be held liable if they knew that a criminal offense was a possible consequence of their program’s structure or of the use of an application. To give an example this could be applied to: an employee of a motorcycle factory in Japan was killed by an AI robot working in his vicinity. The robot erroneously identified the employee as a threat and pushed him into a nearby operating machine. If the developer or user was aware that such an event could occur with the robot, they could be held criminally liable.

(11) The legal model of direct liability is a form of negligence (see above) and attributes criminal liability directly to the AI system.

It is important to note that this is a review of current policies that have the potential to be applied to AI systems. It does not include ideas for entirely new liability models, which will likely have to be developed as more and more AI products reach commercial maturity. There are still many questions to consider behind each of these options, and none of the approaches truly addresses the underlying question of who is responsible, regardless of the mechanism chosen to handle the incident and resulting claim.

Challenges

Significant tension exists between preserving innovation and the independence of technology companies on the one hand, and implementing effective liability regulations on the other. Although it is safe to assume that all relevant actors value safety, companies are concerned about the implementation of potentially stymieing regulations. For example, in the US, the laws most likely to apply to autonomous vehicles are “those that deal with products with a faulty design.”

However, this approach impedes the development of the technology because settlements for product design cases are generally much (almost ten times) higher than human negligence cases.

Basic US tort law states that: “a bad state of mind is neither necessary nor sufficient to show negligence; conduct is everything.” This approach deals with the issue as it would with a human driver and could offer more flexibility to the developers and manufacturers.

As preliminarily outlined above, placing the issue exclusively in the negligence category has its own challenges as well. In order to be held criminally liable in US law, there would have to be both an actus reus (an action) and a mens rea (a mental intent).

The issue with applying this to an AI system lies within the mens rea category. Whose mental intent matters in this instance: the developer’s, the manufacturer’s, the operator’s or the insurer’s? This decision would have to be made in order to effectively apply this legal regime to AI.

It is relatively straightforward to assign an actus reus to an AI system, but attributing a mens rea presents greater uncertainty. In some cases, for example in strict liability offenses, intent to commit the crime is not considered. This could be applied directly to hold an AI system criminally liable.

Speeding is considered such an offense, and criminal liability could be directly assigned to the AI system driving the car while speeding. However, even in this case, the AI cannot serve prison time and cannot pay a fine, so the issue of finding a physical perpetrator for the harm or crime continues. This becomes more complex in cases where intent is required.

Tort law rarely compensates for damages that cause purely economic harm. It generally limits itself to affording damages to property and personal injuries. This presents a challenge because AI-based programs and devices will eventually be integrated into most everyday activities and tort law could struggle to cover these types of economic losses.

Another challenge is that these issues are often multidimensional: liability overlaps with other ethical dilemmas concerning artificial intelligence. In addition to autonomous vehicles, AI-based chatbots also present a case for developing an adequate liability regime. Although no actual harm was caused in the examples below, the incidents display the potential ramifications these bots could have should they gain any influence.

For example, in March 2016, Microsoft discontinued its chatbot Tay after the bot posted a series of racist and misogynistic tweets. The chatbot was intended to adapt and learn conversational skills by analysing Twitter. Additionally, two bots launched by Facebook, which were supposed to develop autonomous bargaining skills, “began interacting in an unintelligible manner.”

Facebook was forced to shut down the bots. This illustrates the potential dangers around not being able to decipher how an algorithm made a decision, adding further challenge to determining liability. Additionally, this issue crosses the bounds of just liability and enters into the space of free speech.
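As a purely illustrative sketch of what being able to trace how an algorithm made a decision might require at the design stage, the Python snippet below logs each automated decision together with its inputs, model version and score so that it could later be audited. All names and fields here are assumptions for illustration, not any vendor’s actual API or the systems mentioned above.

```python
import json
import logging
from datetime import datetime, timezone

# A dedicated audit logger; in practice this would write to tamper-evident
# storage rather than a local file.
logging.basicConfig(filename="decision_audit.log", level=logging.INFO)
audit_log = logging.getLogger("decision_audit")

def record_decision(model_version: str, inputs: dict, output: str, score: float) -> None:
    """Append one automated decision to the audit trail.

    Keeping the inputs and model version alongside the output is what would
    later let an investigator trace a harmful outcome back to the algorithm
    and data that produced it.
    """
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "score": score,
    }))

# Hypothetical usage: a chatbot moderation decision.
record_decision(
    model_version="chatbot-2019-07",
    inputs={"message": "example user message"},
    output="reply_suppressed",
    score=0.97,
)
```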

Another considerable trade-off in attributing liability occurs at the preventative stage. In order to reduce the risk of harm, certain choices could be made at the design stage to make the AI’s behaviour more foreseeable.

It is possible to embed certain basic principles that could act as ground rules for the system. For example, a ‘do not kill’ rule could be implemented, but there would have to be some kind of limit to the scope and number of these rules, because too many would make every act foreseeable.

Thus, this would make the whole concept of implementing machine learning redundant. A certain degree of unpredictability has to be accepted, but balanced with basic preventative principles. Herein lies the challenge.
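As a rough illustration of how a small set of ground rules might sit alongside a learned policy, the Python sketch below vetoes proposed actions that break a few hard-coded principles while leaving everything else to the learned system. The action types, rule thresholds and names are hypothetical assumptions for illustration, not any product’s actual safety architecture.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    """A hypothetical action proposed by a learned policy (illustrative only)."""
    name: str
    estimated_harm_to_humans: float  # assumed scale: 0.0 (none) to 1.0 (severe)

# A deliberately small set of hard-coded ground rules. Too many such rules
# would make every act foreseeable and defeat the point of machine learning.
GROUND_RULES: List[Callable[[Action], bool]] = [
    lambda a: a.estimated_harm_to_humans < 0.9,      # a 'do not kill' style veto
    lambda a: a.name != "disable_safety_interlock",  # hypothetical forbidden action
]

def choose_action(policy_proposals: List[Action]) -> Optional[Action]:
    """Return the first proposed action that passes every ground rule.

    Everything outside these few vetoes is left to the learned policy,
    so a degree of unpredictability is accepted by design.
    """
    for action in policy_proposals:
        if all(rule(action) for rule in GROUND_RULES):
            return action
    return None  # no permissible action: fall back to a safe default or human review
```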

Beyond the challenges discussed above, it is also important to consider the limitations that affect both AI systems and human experts. This is a significant consideration in order not to over-regulate AI-based products and stymie technological development. Knowledge changes very rapidly, which requires the processor, in this case the human or the AI system, to have the most accurate information available to prevent mistakes or potential harm.

For example, if an automated vehicle has a broken sensor, it may make a mistake based on inaccurate information or if a user fails to install a software update to their home speaker after a data breach, their information may be stolen. In cases like this, it could be the responsibility of the manufacturer to provide a method for updating the system’s knowledge base frequently.

As this section highlights, there are many associated challenges with determining who is liable in the event an AI system or device causes harm. Given these challenges, it may be difficult to establish a new model for liability preventatively and it may end up being developed in a reactionary manner, after an AI-based program or device does actually cause some kind of harm.

In this instance, it will be important to develop the definition of harm to include the potential new characteristics described in the relevant section above.

Looking Forward

This post has examined several approaches to the legal issue of liability in the context of AI systems and devices, as well as the challenges that arise from them. Drawing on examples from the UK and US legal systems, it has addressed the significant challenges posed in developing a liability framework for AI-based systems and devices.

In order to effectively implement an appropriate and flexible liability regime, several important trade-offs will have to be considered by all relevant actors. A nimble balance between safety and innovation will be necessary in order for the liability regime to enable the continuous development of new technologies and the fiscal success of new companies entering the market.

An effective design will be necessary in order to implement the required policy changes and understand how these new forms and models of liability and risk will affect firms’ goals, strategies and future technological progress.

There will be an important role for legal professionals and regulators in the near future to create new models that will more adequately address the needs of the new paradigm, rather than just applying current quasi-relevant policies in the form of precedent.

Thanks for reading this publication’s first post!


Jessica Lis, In The Ether
London-based, passionate about all things tech & space policy