Explainable Non-Linear Models — Part I: Overview

M. Baddar
2 min read · Oct 1, 2022


Why it works, or doesn't work!

Novice ML engineers and data scientists often think their job is finished once they have trained a "good" model in terms of accuracy, precision, recall, etc.

Wrong! Your job has just started. Stakeholders (product owners, business customers, or even ordinary end-users) will ask you:

Well, why is the model giving us these predictions / recommendations?

Easy to ask, hard to answer. Why? For two reasons:
i) It is hard to find the root cause of a specific recommendation / prediction (a quick sketch of what this looks like in practice follows below).
ii) It is hard to communicate it in a convincing, plain-English manner to stakeholders, especially non-technical ones.
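To make reason (i) a bit more concrete, here is a minimal sketch of one common first step: checking which features the model actually leans on, using permutation importance from scikit-learn. The data and model are toy placeholders of my own choosing, not the setup of any real project, and this is just one of many explanation techniques; Part II will go deeper.

```python
# Minimal sketch (assumes scikit-learn): permutation importance as a first
# pass at the "root cause" question -- which features drive the predictions?
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Toy data standing in for real features (hypothetical example)
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shuffle each feature and measure how much the model's score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Even a global ranking like this is rarely enough on its own: it tells you what the model depends on, but not why a single prediction came out the way it did, which is exactly where the hard conversations with stakeholders start.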

Let me give you some examples of typical situations and questions that show how important this topic is, especially for industry and business-related ML models.

Learning-to-Rank context
Customer X was expecting result Y to be in the top position of the ranking. Why is it in third place? This model's recommendations don't make any sense!

Stock-price forecast context

Stock S is predicted to have a huge price drop by time t. Our domain expert says this has never happened to this stock. Your model is garbage!

Now you have a flavor of why the problem is important. In Part II, I will break the problem into two main sub-problems and give an overview of how to attack each one.



