Why isn’t explainable ML catching on?

Why not invest more to understand why these systems give the answers they do?

Vijay Lakshminarayanan
Galileo Onwards
2 min read · Dec 2, 2022


Question: Machine Learning systems are notoriously opaque. Often nobody knows why they give the answers they do (in one well-known case, an ML system distinguished wolves from dogs based on the snow in the background, not the animals themselves). The corporations developing these systems could, with extra effort, build systems that explain themselves. And yet the market isn't bringing us such solutions. Why not?
Answer: Because ignorance is a legitimate defense in liability law. Read on for details.

Welcome to Costs Matter, a series that asks different questions all of which have the same answer: to better manage costs. The costs are frequently, though not always, economic. The series focuses narrowly on the impact of costs; it does not claim these costs are the sole cause. To read more in the series, visit https://medium.com/galileo-onwards/costs/home.

When I worked at eBay, the patent lawyers told me that under no circumstances was I to search for patents, research patents, or have anything to do with patents at all, whether owned by eBay or anyone else. The reason, they said, was to reduce liability.

Patent law is complicated. Software patents quadruply so. There are many software patents approved by the USPTO (United States Patent and Trademark Office) that are exact replicas of each other. (Google is your friend here. Though I quit eBay almost a decade ago, some habits die hard.)

If a duplicate patent unwittingly infringes an original patent, it is merely dismissed. If the infringement was knowing, the legal penalties are much higher.

A similar incentive applies, it seems, to Machine Learning. According to former Google engineer Blake Lemoine, companies building Machine Learning systems are better off not knowing why those systems work the way they do, because that ignorance gives them plausible deniability if something goes wrong. He made these comments in a discussion with AI researcher Gary Marcus; you can read the full exchange on Gary Marcus's Substack blog.

Lemoine’s comments follow.

Understanding how these systems work has only marginal value to the companies creating them. Negative value in some cases actually.
For liability law it’s much better to not know how the systems work so long as you have plausible deniability about any potential harms they may create.

Image generated by the author. License: public domain.
