Interpreting Machine Learning Models Using Data-Centric Explainable AI
Learn about data-centric explanation and its different types in this article
Explainable AI (XAI) is an emerging concept that aims to bridge the gap between AI and end-users, thereby increasing AI adoption. XAI can make AI/ML models more transparent, trustworthy, and understandable. It is a necessity, especially for critical domains such as healthcare, finance, and law enforcement.
For an introduction to XAI, my 45-minute presentation from the AI Accelerator Festival APAC 2021 is a helpful starting point:
Popular XAI methods such as LIME, SHAP, and Saliency Maps are model-centric explanation methods. They approximate the features a machine learning model relies on to generate its predictions. However, because of the inductive bias of ML models, these estimates of feature importance are not always correct. Consequently, model-centric feature importance methods are not always useful.
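To make the idea of model-centric feature importance concrete, here is a minimal sketch using scikit-learn's permutation importance, a simpler relative of LIME and SHAP rather than those methods themselves. The dataset and model are illustrative choices, not taken from the article:

```python
# Illustrative sketch: model-centric feature importance.
# Permutation importance shuffles each feature and measures the drop
# in held-out accuracy, approximating how much the model relies on it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Importance scores are estimates about the *model*, not ground truth
# about the data -- the core limitation discussed above.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.4f}")
```

Note that two models with similar accuracy can yield different importance rankings for the same data, which is exactly why model-centric explanations should be read as statements about the model's behavior, not about the underlying data.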

