The Challenges of Artificial Intelligence as a Source of Liability
One of the objectives of the Open Insurance Initiative (OPIN) is the design of an API standard versatile enough to enable and advance interoperability in a world increasingly driven by IoT, AI and autonomous agents.
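To make that interoperability goal a little more concrete, below is a minimal sketch of what a machine-readable policy query by an autonomous agent could look like. The host, endpoint path and field names are hypothetical illustrations of the idea, not the published OPIN specification.

```python
# Minimal sketch of a machine-readable policy query, as an autonomous
# agent (e.g. a connected vehicle) might issue it. The host, path and
# field names below are hypothetical, NOT the OPIN specification.
import requests

BASE_URL = "https://api.example-insurer.com/open-insurance/v1"  # hypothetical host

def fetch_liability_covers(policy_id: str, token: str) -> list[dict]:
    """Fetch a policy's coverages so another system can reason over them."""
    response = requests.get(
        f"{BASE_URL}/policies/{policy_id}/coverages",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    coverages = response.json()
    # Keep only liability-type coverages; the "type" field is illustrative.
    return [c for c in coverages if c.get("type") == "liability"]

if __name__ == "__main__":
    print(fetch_liability_covers("POL-123456", token="demo-token"))
```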
As part of the ongoing research into advancing this initiative, one topic that might appear outside the scope of this work, but has continually aroused my curiosity, is the treatment of liability caused by rogue, AI-driven systems.
I believe that, despite the importance of the topic, it is relegated to the sidelines of publicity while we are all glorifying InsurTech or dashing to build the next insurance unicorn. Liability is one area that could genuinely stall entrepreneurs and slow innovation, and that could be economically disastrous.
The following represents my views (I am not a legal professional) on the issues arising; I therefore stand to be corrected by those with more studied legal opinions.
Let us first consider the challenges faced by insurers and law professionals in dealing with AI as a source of liability.
Technically, the main challenges stem from the fact that AI technologies are developing at a phenomenal pace, reflected, for instance, in increased autonomous capabilities and in the accelerated deployment and integration of these technologies in everyday gadgets, equipment and vehicles. AI will soon become so ubiquitous that much of the software and equipment we use will communicate independently, without human intervention.
A robotic or autonomous system might be referred to as an artificial computational agent. There is only a small body of legal research on the treatment of artificial agents, which explains the lack of regulatory guidance on the topic. For example, could existing legal theory be used to establish the legal liability of developers of highly autonomous systems? Could it establish criminal negligence or criminal intent to commit a crime? Could some of these advanced, highly autonomous robots qualify as legal persons and be punished for their unlawful actions? And how will the doctrine of strict liability apply in these instances?
It should be noted that some risks are becoming better understood in terms of the source of liability. Self-driving vehicles, for instance, are designed with safety first in mind, and legal precedents are already being established given the numerous claims and accidents that have accumulated over the past few years. However, there are many other instances where the cause of liability may be very difficult to establish.
For insurers, great effort is needed to establish the foreseeability of damage that could be caused by different autonomous systems. For example, do developers have knowledge of potential problems (misrepresentation/non-disclosure)? Could these systems use their acquired datasets to evolve into totally different systems with different intentions? And what level of protection do these systems have against highly concealed adversarial inputs and data poisoning attacks?
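To illustrate why that last question matters for foreseeability, here is a toy sketch, using synthetic data and a deliberately simple nearest-centroid classifier, showing how a handful of mislabelled, "poisoned" training points can silently flip a model's decision on a borderline case.

```python
# Toy illustration of a data poisoning attack on a nearest-centroid
# classifier. All data here is synthetic and purely illustrative.
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Classify x as the label of the closest class centroid."""
    labels = np.unique(y_train)
    centroids = {lbl: X_train[y_train == lbl].mean(axis=0) for lbl in labels}
    return min(labels, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

# Clean training data: class 0 clustered near (0, 0), class 1 near (4, 4).
X_clean = np.array([[0.0, 0.2], [0.1, -0.1], [-0.2, 0.0], [0.2, 0.1], [0.0, 0.0],
                    [4.0, 4.1], [3.9, 4.0], [4.2, 3.8], [4.1, 4.2], [4.0, 3.9]])
y_clean = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

probe = np.array([1.8, 1.8])  # a borderline case, slightly closer to class 0
print("clean model:", nearest_centroid_predict(X_clean, y_clean, probe))      # -> 0

# Poisoning: inject three points far from class 0's cluster but labelled 0.
X_poison = np.vstack([X_clean, [[-6.0, -6.0], [-6.2, -5.8], [-5.9, -6.1]]])
y_poison = np.append(y_clean, [0, 0, 0])

# The poisoned points drag class 0's centroid away from the probe,
# so the same borderline case now flips to class 1.
print("poisoned model:", nearest_centroid_predict(X_poison, y_poison, probe))  # -> 1
```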
Most present-day policies protecting against commercial general liability, auto liability, professional liability and products liability do not address these risks properly.
Is there potential for AI developers to be drawn into criminal or civil courts if their programs/platforms are deemed criminally liable/negligent?
Part of the challenge that AI introduces is establishing who is responsible for the damage or loss when an accident occurs. A prime example is autonomous vehicles, where issues could arise in apportioning the share of responsibility that the passenger/driver, the vehicle manufacturer and the system developers are each judged to assume.
If a developer is deemed criminally liable or negligent, then recourse may be pursued in both criminal and civil courts. It should be noted that causation, intent and responsibility become increasingly difficult to untangle when AI is involved.
The application of joint and several liability may mean that the party with the largest resources ends up footing much of the damages awarded, in this case the manufacturer.
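A toy calculation with entirely hypothetical figures shows the effect: under joint and several liability the claimant can recover the full award from any liable party, so the shortfall left by under-resourced co-defendants tends to land on the deepest pocket.

```python
# Toy illustration of joint and several liability. The fault shares,
# damages award and asset figures are all hypothetical.
damages_awarded = 1_000_000  # total award to the injured party

fault_shares = {             # apportionment decided by the court (hypothetical)
    "driver": 0.20,
    "software_developer": 0.30,
    "vehicle_manufacturer": 0.50,
}
recoverable_assets = {       # what each party can actually pay (hypothetical)
    "driver": 50_000,
    "software_developer": 100_000,
    "vehicle_manufacturer": 10_000_000,
}

# Proportionate (several-only) liability: each party owes only its own share,
# and anything an under-resourced party cannot pay goes unrecovered.
several_only = {p: min(damages_awarded * s, recoverable_assets[p])
                for p, s in fault_shares.items()}
print("several only:", several_only,
      "unrecovered:", damages_awarded - sum(several_only.values()))

# Joint and several liability: the claimant may pursue any party for the full
# award, so the shortfall is absorbed by whoever can pay -- the manufacturer.
paid = dict(several_only)
paid["vehicle_manufacturer"] += damages_awarded - sum(several_only.values())
print("joint and several:", paid)
```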
How is AI going to affect the insurance industry in the coming years?
Insurers are looking at adapting their insurance policies to protect clients. Liability, property, auto, health and life insurance will need to better address new exposures.
Risk assessment has to continually play catch-up with the uncertainties being introduced, such as the impact of these new technologies and the potential for catastrophic events.
The medical profession, for example, has introduced new complexities for underwriters of medical malpractice covers. Digital health deploying AI in disease recognition, genetic testing, virtual nursing and surgical robotics introduces risks of mismanaged care due to AI errors and a lack of human oversight.
The complexity AI introduces into applying strict liability means that new models of liability coverage will likely be adopted. These models will probably shift the focus heavily towards manufacturers and software developers and away from the consumer.
Areas where AI will particularly affect insurers
The insurance industry itself is evolving through increased use of ML, VR and AR. The industry sees efficiency and enhanced customer experience as competitive advantages gainable through the use of AI. Many companies are already deploying chatbots, robo-advisors and instant underwriting algorithms to refine underwriting, claims settlement, fraud detection, marketing and customer servicing.
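To give a flavour of what "instant underwriting" or automated claims triage can look like, here is a deliberately simplified scoring sketch; the features, weights and threshold are invented for illustration and do not reflect any insurer's actual model.

```python
# Deliberately simplified claims-triage score. Features, weights and the
# threshold are hypothetical; a production system would use a trained model
# and far richer data.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float                 # claimed amount
    days_since_policy_start: int  # very new policies can warrant a closer look
    prior_claims: int             # claims history on the policy
    police_report: bool           # corroborating documentation

def fraud_risk_score(claim: Claim) -> float:
    """Return a 0..1 heuristic risk score (illustrative weights only)."""
    score = 0.0
    score += 0.4 if claim.amount > 20_000 else 0.0
    score += 0.3 if claim.days_since_policy_start < 30 else 0.0
    score += 0.2 if claim.prior_claims >= 3 else 0.0
    score += 0.1 if not claim.police_report else 0.0
    return score

def route_claim(claim: Claim) -> str:
    """Auto-settle low-risk claims, send the rest to a human adjuster."""
    return "auto_settle" if fraud_risk_score(claim) < 0.5 else "manual_review"

print(route_claim(Claim(amount=3_000, days_since_policy_start=400,
                        prior_claims=0, police_report=True)))   # auto_settle
print(route_claim(Claim(amount=45_000, days_since_policy_start=10,
                        prior_claims=1, police_report=False)))  # manual_review
```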
Is legislation keeping up with AI development?
There is no doubt in my mind that there is a growing case for modified legal frameworks due to the shortcomings of existing philosophy and legal theory in accommodating incidents resulting from sophisticated AI agents.
A clear liability framework must be developed to adequately address human interests to ensure justice for those who are harmed by autonomous systems.
While some in the legal profession are conducting research and providing studied opinions on the treatment of the myriad issues relating to liability and independent agents, I believe initiatives could be set up and financed by stakeholders in the legal, manufacturing and insurance industries to work with policy makers in drafting an improved legal framework.
Thank you for reading my post.
I write about digital transformation and insurance. The white paper of The Open Insurance Initiative sets a new precedent for the insurance industry and is a must read. Please visit https://openinsurance.io to download the white paper.