How do you interpret the prediction from ML model outputs: Part 2 — Blackbox Model

T Z J Y
4 min read · Oct 12, 2021

In my previous post, How do you interpret the prediction from ML model outputs: Part 1 — Classic Models, we briefly went through how to interpret the outputs of classic ML models. In this post, I would like to touch on modern, blackbox models.

Look through Blackbox Models

Unfortunately, many models are not designed to be interpretable. Approaches to explaining a black-box model aim to extract information from the trained model that justifies its prediction outcome, without knowing how the model works in detail. Keeping the interpretation process independent of the model implementation is valuable in real-world applications: even when the base model is constantly upgraded and refined, the interpretation engine built on top does not need to worry about those changes.
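To make this concrete, here is a minimal sketch of one such model-agnostic technique, permutation feature importance. The model appears only as a prediction function, so the same code works no matter what is inside the black box. The helper names (`permutation_importance`, the toy `predict` function) are illustrative, not from any particular library:

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Score drop when each feature column is shuffled.

    `predict` is treated as an opaque function X -> predictions,
    which is exactly what makes the method model-agnostic.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy "black box": the label depends only on feature 0.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X_in: (X_in[:, 0] > 0).astype(int)  # stand-in for any trained model
accuracy = lambda y_true, y_pred: np.mean(y_true == y_pred)

imp = permutation_importance(predict, X, y, accuracy)
# Shuffling feature 0 hurts accuracy; shuffling unused features does not.
```

Because the model is only ever called through `predict`, you could swap in a neural network, a gradient-boosted ensemble, or a remote API and the interpretation code stays unchanged.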

Without the constraint of keeping the model transparent and interpretable, we can endow the model with greater expressive power by adding more parameters and nonlinear computation. That’s how deep neural…
