How do you interpret the prediction from ML model outputs: Part 1 — Classic Models

T Z J Y

In the past five years, we have seen machine learning models applied to more and more areas, from infrastructure to the financial industry, from online gaming to health care. It is critical to figure out how these models make their decisions and to make sure the decision-making process is aligned with ethical requirements and regulations.

The rapid growth of deep learning models pushes this requirement even further. Nowadays, people are eager to apply the full power of AI to key aspects of daily life and want at least some sense of how to interpret the outcomes of complicated models. However, that is very hard to do without enough trust in the models or an efficient procedure for explaining unintended behaviour.

I like to work on interesting problems on Kaggle, and I have written a post about how to work through a Kaggle competition step by step (see the link here). But one of the most challenging things I face is interpreting model outputs. That's one of the many reasons I was motivated to write this…
