AI Insurance Is Coming, Here’s Why

Anand Tamboli® · Published in tomorrow++ · May 14, 2021


AI is not only a powerful tool; it can also be a highly risky technology if used incorrectly. No matter how much care you take, there may be some lurking risk that is either unknown or uncontrollable, and that calls for a different approach to risk management.

If you perform a thorough pre-mortem analysis, ensure that training and testing are complete, and stress test all the systems with red teams’ help, you can be almost confident about the system’s performance. However, some risks, identified or not, cannot be anticipated or controlled. Such residual risk can be dealt with by way of transference.

Risk transference is a risk management and control strategy that involves the contractual shifting of a pure risk from one party to another. One example is the purchase of an insurance policy, by which the policyholder passes a specified risk of loss on to the insurer.

Currently available options and their limitations

If you have an IT solution, which is the closest existing equivalent to an AI solution, a few options are already available in the market for transferring some of the risks.

Information Technology Liability (IT Liability) insurance [1] covers claims arising from the failure of information technology products, services, and/or advice.[2]

The information technology (IT) industry has unique liability exposures due to the crossover between providing professional services and the supply of goods. Moreover, many service providers in this industry have a mix of both. It gets further complicated by the legal ambiguity around software advice and development and whether it is, in fact, the provision of a service or the sale of goods.

Traditional Professional Indemnity insurance policies often have onerous exclusions relating to the supply of goods. In contrast, traditional Public and Products Liability policies often contain exclusions relating to professional services provision.

Many insurers have developed a range of insurance options to address these issues, commonly referred to as IT Liability policies. These policies combine Professional Indemnity and Public and Products Liability insurance into one product. They were developed to minimize the prospect of a claim going uninsured because it “falls between the gaps” of the two traditional insurance products.

However, AI solutions add further complexity, driven by complex algorithms and the data they ingest or process. Over time, changes in data can significantly change a product’s or solution’s characteristics. Add cloud-based services to that, and the gap in existing options only widens; with it, the prospect of a claim falling through the gaps increases significantly.

Hence, AI insurance

Before we even think about AI insurance, let’s see if there is a need for it. This need can only become evident when there are multiple issues with significant complexity.

For the last half-century, there has been a steady stream of warnings to slow down and ensure we keep machines on a tight leash.[3]

Many thought leaders have asked critical questions such as who accepts the responsibility when AI goes wrong and the implications for the insurance industry when that happens.

Autonomous or driverless cars are among the most important considerations for the insurance industry. In June 2016, the British insurance company Adrian Flux started to offer the first policy specifically geared towards autonomous and partly automated vehicles.[4] This policy covers typical car insurance options, such as damage, fire, and theft. Additionally, the article stated, it covers accidents specific to AI: loss or damage as a result of malfunctions in the car’s driverless systems, interference from hackers, failure to install vehicle software updates and security patches, satellite failures or outages affecting navigation systems, and failure of the manufacturer’s vehicle operating system or other authorized software.

Volvo has said that it is responsible for what happens when one of its vehicles is in autonomous mode.[5]

I think this is an important step. Still, it fails to answer the question of who is liable for an accident. Who is at fault if the car malfunctions and runs over someone?

When autonomous machinery goes wrong in a factory and disrupts production, who is responsible? Is it the human operator, who has only thin-threaded control, or the management for buying the wrong system? Or should it be the manufacturer for not testing the autonomous machinery thoroughly enough?

We need to establish specific protections for potential victims of AI-related incidents, whether they are businesses or individuals, to give them confidence that they will have legal recourse if something goes wrong.

The most critical question from a customer’s standpoint would be: who foots the bill when a robot or an intelligent AI system makes a mistake, causes an accident or damage, or becomes corrupted? The manufacturer, the developer, the person controlling it, or the system itself? Or is it a matter of allocating and apportioning risk and liability?[6]

Drew Turney, a journalist, argues in one of his articles, “We don’t put the parents of murderers or embezzlers in jail. We assume everyone is responsible for his or her decisions based on the experience, memory, self-awareness, and free will accumulated throughout their lives.”[7]

Many complex situations have already occurred that beget the need for AI insurance.

AI loses an investor’s fortune

Early on, the Austria-based AI company 42.cx developed a supercomputer named K1. It would comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures.[8] Based on the data gathered and its analysis, it would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.

In May 2019, the Hong Kong real estate tycoon Samathur Li Kin-kan decided to sue the company that had used the trade-executing AI to manage his account, causing him to lose millions of dollars. The first-of-its-kind court case could help determine who should be held responsible when an AI stuffs up.[9]

While it is the first known instance of humans going to court over investment losses triggered by autonomous machines, it also highlights AI's black-box problem vividly. If people do not know how the AI is making decisions, who’s responsible when things go wrong?

The legal battle is a sign of what’s coming as AI gets incorporated into all facets of life, from self-driving cars to virtual assistants.

Karishma Paroha, a London-based lawyer who specializes in product liability, has an interesting view. She says, “What happens when companies use autonomous chatbots to sell products to customers? Even suing the salesperson may not be possible. Misrepresentation is about what a person said to you. What happens when we’re not being sold to by a human?”

Risky digital assistant for patients

In mid-2019, a large consulting firm started deploying voice-controlled digital assistants for hospital patients. The idea was that a digital device would be attached to the TV screen in the patient’s room, through which the patient could request assistance.

The firm wanted to replace the age-old call button service due to its inherent limitations. One significant limitation cited was that nurses do not have enough context to prioritize patients based on a call request. With the age-old call button system, it is possible that two patients have requested help, and one of them needs urgent attention while the other can wait. However, with just an indication that help was requested, nurses cannot determine who needs immediate attention. With voice-based digital assistants, the firm and the hospital anticipated that they could prioritize nurse visits using the added context from the command text.

The system was deployed with prioritization based on the words uttered by the patients. For example, if a patient asked for drinking water, that request was flagged as low priority, whereas if someone complained about pain in the chest, they were flagged with the highest priority. Various other utterances were prioritized accordingly. The idea behind these assigned priorities was that a patient needing water to drink could wait a few minutes, whereas a patient with chest pain may be at risk of a heart attack and needs immediate assistance. Generally speaking, this logic would work in most scenarios.
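To make that triage logic concrete, here is a minimal sketch of this kind of keyword-based prioritization. The keywords, priority levels, and default value are my own invented assumptions for illustration, not the rules of the deployed system.

```python
# Hypothetical keyword-to-priority mapping (1 = most urgent).
# Keywords and levels are invented for this sketch.
PRIORITY_RULES = {
    "chest pain": 1,   # possible cardiac event
    "breathing": 1,
    "pain": 2,
    "bathroom": 3,
    "water": 4,        # assumed to be a routine request
}

def triage(request_text: str) -> int:
    """Return a priority level for a patient's spoken request."""
    text = request_text.lower()
    for keyword, priority in PRIORITY_RULES.items():
        if keyword in text:
            return priority
    return 3  # unrecognized requests get a middling default

# The gap described below: with no access to the patient's condition,
# this request still ranks lowest even if the coughing signals danger.
print(triage("Can I get some water? I keep coughing"))  # -> 4
```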

However, this system was not linked to the patient information database. Besides the room number of the requester, the system knew nothing else. Most importantly, it did not understand the patient’s condition or why they were in the hospital.

Not knowing the patient’s condition or ailment could prove to be a severe impediment in some cases. These cases may lie at the long tail of the common scenarios, and I think that makes them more dangerous.

For example, if a patient requested water, the system would treat it as a low priority request and give other requests relatively higher priority. However, what if the patient was not feeling well and hence requested water? Not getting water in time could worsen their condition. They may have been choking or coughing and therefore asked for water, and continuous coughing may be an indicator of imminent danger. Without knowing the patient’s ailment and condition, it is tough to determine whether a simple water request is high priority or low priority. These are the scenarios where I see significant risk. One might call this system well-intentioned but poorly implemented.

Now the question is — in a scenario where a high priority request gets flagged as a low priority on account of the system’s limited information access, who is responsible? If the patient’s condition worsens due to this lapse, who would they hold accountable?

Who (or what) can be held accountable for such failures of service or performance is the major lurking question. Perhaps AI insurance could cover end users in such scenarios when the need for compensation arises.

Many regulators are swiftly gearing up

Since 2018, European lawmakers, legal experts, and manufacturers have been locked in a high-stakes debate: whether it is machines or human beings that should bear the ultimate responsibility for a machine’s actions.

This debate refers to a paragraph of text buried deep in a European Parliament report from early 2017. It suggests that self-learning robots could be granted “electronic personalities.” This status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

One of the paragraphs [10] says, “The European Parliament calls on the commission, when carrying out an impact assessment of its future legislative instrument, to explore, analyze and consider the implications of all possible legal solutions, such as establishing a compulsory insurance scheme where relevant and necessary for specific categories of robots whereby, similarly to what already happens with cars, producers, or owners of robots would be required to take out insurance cover for the damage potentially caused by their robots.”

Challenges for AI insurance

Although the market is tilting quickly towards justifying AI insurance products, there are still a few business challenges. These challenges are clear roadblocks for any one player looking to take the initiative and lead the way.

A common pool

As insurance fundamentally works on a common pool, the first challenge appears to be not having enough participants to start this pool. Several AI solution companies or adopters could come together and break this barrier by contributing to the common pool.

Equitability

The second challenge is making this common pool equitable enough. Given the nature of AI solutions and their customer bases, not every company will have equivalent revenue or pose a comparable risk. Equitability may not be mandatory in the beginning, but if not appropriately managed, its absence will soon become an impediment to growth.

Insurability of cloud-based AI

In the case of minors (children), parents are liable and responsible for their behavior in public; for any wrongdoing by the children, the parents pay. However, once they grow up and are legally adults, responsibility shifts entirely to them.

Similarly, the liability for AI going wrong will have to shift from the vendor to the users over time, affecting the annual assessment of premiums. Any update to the AI software may move this a few steps back towards the vendor, as there are new inputs to the AI. If the AI works continuously without any updates, liability will keep shifting gradually towards the end users.
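As a purely hypothetical illustration of this shifting share (the shift rate, the residual floor, and the reset-on-update behavior are my assumptions, not an actuarial model), the dynamic could be sketched like this:

```python
# Hypothetical sketch: the vendor's liability share decays while the AI
# runs unchanged, and a software update restarts the clock.
# All numbers are invented for illustration.

def vendor_liability_share(months_since_update: int,
                           monthly_shift: float = 0.02,
                           floor: float = 0.20) -> float:
    """Vendor's share of liability, gradually shifting towards users."""
    share = 1.0 - monthly_shift * months_since_update
    return max(share, floor)  # vendor keeps at least a residual share

print(vendor_liability_share(0))    # just updated -> 1.00
print(vendor_liability_share(12))   # a year later -> 0.76
print(vendor_liability_share(60))   # long-running -> 0.20 (floor)
```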

However, for cloud-based AI solutions, this shift may not happen, since the vendor always remains in full control.

If customers or AI users supply the training data on an ongoing basis, there would be shared liability from the solution and outcome perspective.

Attribution

Of all of these, however, attribution of failure might be the biggest challenge for AI insurance. Throughout this series, several cases have shown how challenging and ambiguous it can be to ascertain the factors contributing to a fault across the entire AI value chain.

AI typically uses training data to make decisions. When a customer buys an algorithm from one company but uses their own training data, or buys the data from a different company, and the system does not work as expected, how much of the fault lies with the algorithm and how much with the training data?

Without solving the attribution problem, the insurance proposition may not be possible.

I interviewed several insurance industry experts during the last year, and all of them insisted that a good insurance business model demands a correct risk profile and history.

Unfortunately, this history does not exist yet, and it never will if no one takes the initiative and gets the flywheel rotating. So the question now is: who will do it first?

What might make it work?

While being first is quite the norm in the tech industry, in risk-averse sectors like the financial industry it is just the opposite. So, until someone takes that first step, there might be an intermediate alternative.

How about insurers covering not the total risk but only the damage control costs? For example, if something goes wrong with your AI system and causes your process to come to a halt, you incur two financial burdens: one on account of revenue loss and the other to fix the system. While the revenue loss can be significant, the system-fixing cost may be relatively lower, and insurance companies may start by offering only this part of the cover.
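With purely hypothetical numbers (the figures below are invented only to show how small the insured part could be relative to the total impact):

```python
# Invented figures, purely illustrative of the partial-cover idea.
revenue_loss_per_day = 50_000   # lost output while the AI-driven process is halted
days_halted = 10
system_fix_cost = 80_000        # cost to diagnose and repair the system

total_impact = revenue_loss_per_day * days_halted + system_fix_cost  # 580,000
insured_part = system_fix_cost                                       # 80,000

print(f"Total impact:   {total_impact:,}")
print(f"Insurer covers: {insured_part:,} ({insured_part / total_impact:.0%})")  # ~14%
```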

Insurers may also explore parametric insurance, which pays out a pre-agreed amount when a predefined trigger event occurs, to cover some of the known or fixed-cost issues.

Aggregators can combine the risks of a cluster of several companies matching specific criteria. They can cover part of those risks themselves and transfer the rest to insurers.

Either way, it is not a complete deadlock to get this going.

AI insurance is coming. Here’s why

Implementing an AI solution means you can do things on a much larger scale. If you have been producing x widgets per day without AI, you may end up creating 1000x widgets with AI.

This type of massive scale also means that when things fail, they fail massively. The issues a 1000x failure can bring upon a business could be outrageous: revenue losses, the burden of fixing the systems, making alternate arrangements while they are being repaired, and so on.

This scale is dangerous, and therefore having AI insurance would make sense. With this option in place, people would also be more responsible in developing and implementing AI solutions, which would contribute towards the Responsible AI design and use paradigm.

More importantly, every human consumer or user will, at some point, want some level of compensation when an AI solution goes wrong. They will not accept “the AI is at fault” as an answer; being compensated is a fair expectation.

The question is, who will foot the compensation bill?

This question is the biggest reason why I believe AI insurance will be necessary. It may be a fuzzy concept for now but will soon be quite relevant.

Better risk management is the key

Getting AI insurance may be a good idea, but that should not be your objective. Instead, you must focus on a structured approach to development and deployment. By doing so, you can minimize risks to such an extent that the AI solution is safe, useful, and under control.

Following my three core principles of good problem solving, i.e., doing the right thing, doing it right, and covering all your bases, will help.

Given that you would have mitigated almost everything, or planned to do so, there would be hardly anything left that qualifies as residual risk. Managing residual risk is then like dealing with the remaining tail once the entire elephant has passed through the door.

If, however, there is any remaining uncertainty or unknown risk in your solution, transferring that risk should be easier, as you would already have completed the required due diligence.

It is always a good idea to deal with problems and risks while they are at their smallest. A stitch in time saves nine!

[1] https://www.cna.com/web/wcm/connect/b7bacbf0-b432-4e0c-97fa-ce8730b329d5/RC_Guide_RiskTransferStrategytoHelpProtectYou+Business_CNA.pdf?MOD=AJPERES

[2] https://www.bric.com.au/information-technology-liability-insurance.html

[3] https://channels.theinnovationenterprise.com/articles/paying-when-ai-goes-wrong

[4] https://www.theguardian.com/business/2016/jun/07/uk-driverless-car-insurance-policy-adrian-flux

[5] https://fortune.com/2015/10/07/volvo-liability-self-driving-cars/

[6] https://www.theaustralian.com.au/business/technology/who-is-liable-when-robots-and-ai-get-it-wrong/news-story/c58d5dbb37ae396f7dc68b152ec479b9

[7] https://bluenotes.anz.com/posts/2017/12/LONGREAD-whos-liable-if-AI-goes-wrong

[8] https://www.insurancejournal.com/news/national/2019/05/07/525762.htm/

[9] https://www.bloomberg.com/news/articles/2019-05-06/who-to-sue-when-a-robot-loses-your-fortune

[10] http://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html?redirect

Note: This article is part 6 of a 12-article series on AI. The series was first published by EFY magazine last year and is now also available on my website at https://www.anandtamboli.com/insights.


Anand Tamboli®
tomorrow++

Inspiring and enabling people for a sustainable and better future • Award-winning Author • Global Speaker • Futurist ⋆ https://www.anandtamboli.com