Combining neural networks and decision trees for accurate and interpretable computer vision models (and how our method works).

This is an extended version of the Towards Data Science article “What Explainable AI fails to explain (and how we fix that)”, with an expanded description of our method.

The interpretability of neural networks is becoming increasingly necessary as deep learning is adopted in settings where accurate and justifiable predictions are required, ranging from finance to medical imaging. However, deep neural networks are notorious for offering little justification for their predictions.

Neural networks are accurate but un-interpretable. Decision trees are interpretable but inaccurate in computer vision. We have a solution.

Don’t take it from me. Take it from IEEE Fellow Cuntai Guan, who recognizes “many machine decisions are still poorly understood”. Most papers even suggest a rigid dichotomy between accuracy and interpretability.

Explainable AI (XAI) attempts to bridge this divide, but as we explain below, XAI justifies decisions without interpreting the model directly. This means practitioners in fields such as finance and medicine are forced into a dilemma: pick an un-interpretable but accurate model, or an inaccurate but interpretable one.
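To make that trade-off concrete, here is a minimal sketch (not from the article, and not the authors’ benchmarks) comparing a plain decision tree against a small neural network on scikit-learn’s digits dataset. The dataset and model choices are stand-ins picked for brevity, but even on this toy task the tree typically trails the network in accuracy while remaining far easier to inspect.

```python
# Minimal sketch of the accuracy-interpretability gap, assuming scikit-learn
# is installed. The digits dataset stands in for a real vision benchmark.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Interpretable: every prediction is a path of threshold decisions,
# but accuracy on image data usually lags.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# More accurate on pixels, but the learned weights offer no readable
# justification for any single prediction.
net = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", tree.score(X_test, y_test))
print("neural network accuracy:", net.score(X_test, y_test))
```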

What is “Interpretable”?
