Uncertainty is more important to A.I. than explainability
In the near future, a primary care physician will sit down in front of a computer to pull up all of the medical information about the patient who just walked into her office. The information won't just be dumped to the screen as a big table of numbers. It will be thoughtfully curated and presented by artificial intelligence to help guide the physician toward the best diagnosis and treatment for her patient.
This vision of A.I.-enabled personalized medicine is alluring. Patients receive better care at lower cost. Doctors spend their time caring for patients instead of combing through charts. Everybody wins, unless they lose.

Machine learning systems make mistakes. They carry any biases in the data they were trained on into the problems to which they are applied. It is inevitable that A.I. systems will make mistakes in health care, and those mistakes will have real, potentially life-threatening, costs. As a result, many people are concerned about the role that "black box" machine learning models will play in the future of medicine.

One way to make A.I. more palatable is to use systems that can explain why they made a particular prediction, so that the physician can take the reasoning into account. She will be more likely to follow a recommended treatment if the evidence the algorithm used came from a recently published systematic review than if it came from a horoscope. That makes sense.

But restricting ourselves to technologies that are easily explainable may be putting our algorithms in handcuffs. Human biology is incredibly complex, and it is likely that complex algorithms will be required to make sense of it. If that is true, then the physician may end up disregarding every recommendation of the explainable A.I. because it simply doesn't work.
Uncertainty is an alternative to explainability that places no constraints on the complexity of the underlying algorithm. The idea is that algorithms should provide more than just a prediction: they should also describe how confident they are in that prediction. After all, isn't that all we wanted in the first place? For the physician to know how much she should trust the algorithm?
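One simple way to get confidence alongside a prediction, without restricting model complexity, is to train an ensemble on bootstrap resamples and look at how much its members disagree. The sketch below is purely illustrative (a synthetic dataset and logistic regression stand in for a real clinical model), not a medical system:

```python
# Sketch: prediction plus uncertainty from a bootstrap ensemble.
# The dataset, model, and ensemble size are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Fit each ensemble member on a bootstrap resample of the data.
models = []
for _ in range(20):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

# For a handful of patients, the mean probability across members is
# the prediction; the spread across members is a crude confidence.
probs = np.stack([m.predict_proba(X[:5])[:, 1] for m in models])
mean_p = probs.mean(axis=0)  # the prediction
std_p = probs.std(axis=0)    # the uncertainty

for p, s in zip(mean_p, std_p):
    print(f"P(condition) = {p:.2f} +/- {s:.2f}")
```

A physician reading this output could weigh a confident prediction differently from a coin-flip with wide error bars, which is exactly the information explainability was meant to provide.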
A.I. and machine learning are going to play an important role in the future of medicine. Luckily, we already have the tools to build models that can describe their degree of confidence, even if they can’t explain why.