The Algorithm Made Me Do It, And Other Bad Defenses.

While it may seem obvious, you can’t use an algorithm to make unlawful life insurance underwriting decisions. A recent NY insurance circular spells this out. Moreover, not understanding how or why the algorithm uses data or makes a decision is likely not a defense. In fact, the “it’s a black box to us” argument is a good way to be found liable (in NY at least) for an unfair trade practice in the insurance underwriting context.

This is a quandary. Certain types of algorithms — machine learning and otherwise — make decisions based on logic or data analysis that a human can’t understand. We see the decision but we don’t know how the decision was made. On the one hand, we don’t want to get in the way of technology that can make our lives better, but on the other hand we have to recognize that “willful blindness isn’t a defense” and also make sure that in the quest to create a better mousetrap we don’t make one that kills more than mice.[1]

The New York Department of Financial Services frames the issue this way:

The Department fully supports innovation and the use of technology to improve access to financial services. Indeed, insurers’ use of external data sources has the potential to benefit insurers and consumers alike by simplifying and expediting life insurance sales and underwriting processes. External data sources also have the potential to result in more accurate underwriting and pricing of life insurance. At the same time, however, the accuracy and reliability of external data sources can vary greatly, and many external data sources are companies that are not subject to regulatory oversight and consumer protections, which raises significant concerns about the potential negative impact on consumers, insurers and the life insurance marketplace in New York.

As a practical matter, if you build an algorithm that does something that appears illegal, for reasons that you don’t understand, throwing your hands up and saying “it wasn’t our intent” may not get you far. To be defensible, you may need a trap door that allows you to see inside, or a front end and internal rubrics to assure that you haven’t unknowingly built a law-breaking piece of software. Again, the NYDFS cautions: “Where an insurer is using external data sources or predictive models, the reason or reasons for any declination, limitation, rate differential or other adverse underwriting decision provided to the insured or potential insured should include details about all information upon which the insurer based such decision, including the specific source of the information upon which the insurer based its adverse underwriting decision.”
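To make the point concrete, here is a minimal sketch (in Python, with hypothetical names and data — not any insurer’s actual system) of the kind of audit record the NYDFS language seems to demand: every adverse decision carries the specific factors behind it and the specific external source each factor came from, so the insurer can produce a disclosure rather than shrug at a black box.

```python
from dataclasses import dataclass, field

@dataclass
class UnderwritingDecision:
    """Hypothetical record tying an adverse decision to its inputs."""
    applicant_id: str
    outcome: str  # e.g. "declined", "rated", "limited"
    reasons: list = field(default_factory=list)

    def add_reason(self, detail: str, source: str) -> None:
        # Log each factor behind the decision together with the
        # specific external data source it came from.
        self.reasons.append({"detail": detail, "source": source})

    def disclosure(self) -> str:
        # Human-readable explanation of the kind an insurer could
        # include in the adverse-decision notice to the applicant.
        lines = [f"Decision: {self.outcome}"]
        for r in self.reasons:
            lines.append(f"- {r['detail']} (source: {r['source']})")
        return "\n".join(lines)

decision = UnderwritingDecision("A-123", "declined")
decision.add_reason("High-risk activity flag", "ExampleData Inc. lifestyle feed")
print(decision.disclosure())
```

The design choice is the regulatory point: if the model can’t populate a record like this — detail plus source for every adverse factor — the insurer likely can’t meet the disclosure standard the circular describes.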

Until software has its own legal personhood, which I guess will be a while, human makers, users and owners will be liable for black box unlawful behavior. Building algorithms that pass judgment on people will require that we understand the process by which that judgment is made and the data upon which it’s based. And throwing up your hands and saying “the algorithm made me do it” is probably not going to be a good defense.

These are my opinions only, and may not reflect the views of past, present, or future clients, employers or Palleys. I may change my mind — I contain multitudes. Picture credit: (Pixabay License: Free for commercial use without attribution).

[1] One way not to get too much in the way, perhaps, is to decide what needs to be ring-fenced and what doesn’t. While we want to be sure that an algo isn’t discriminating in underwriting decisions, perhaps we don’t need to know the exact process used to find cures for terrible diseases or win difficult games. See, e.g., (“Several DeepMind researchers have already moved from working on AlphaGo to applying similar techniques to practical applications, said Hassabis. One promising area, he suggested, is understanding how proteins fold, an essential tool for drug discovery.”)