TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.

Interpreting Machine Learning Models Using Data-Centric Explainable AI

Learn about data-centric explanations and their different types in this article

8 min read · Feb 26, 2023


Source: Pixabay

Explainable AI (XAI) is an emerging concept that aims to bridge the gap between AI and end-users, thereby increasing AI adoption. XAI can make AI/ML models more transparent, trustworthy, and understandable. It is a necessity, especially for critical domains such as healthcare, finance, and law enforcement.

For an introduction to XAI, my 45-minute presentation from the AI Accelerator Festival APAC 2021 is a helpful starting point:

Popular XAI methods such as LIME, SHAP, and Saliency Maps are model-centric explanation methods. These methods approximate the features a machine learning model relies on to generate its predictions. However, because of the inductive bias of ML models, this estimation of important features might not always be correct. Consequently, model-centric feature-importance methods may not always be useful.
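To make the idea of model-centric feature importance concrete, here is a minimal sketch of permutation importance, one of the simplest techniques in this family: each feature is shuffled in turn, and the resulting increase in prediction error approximates that feature's importance. The synthetic dataset and the simple least-squares model below are illustrative assumptions, not from the article; libraries like SHAP use more sophisticated (e.g., game-theoretic) attributions, but the model-centric principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: the target depends strongly on feature 0
# and only weakly on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A simple linear model fitted by least squares stands in for any
# black-box predictive model.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def predict(X):
    return np.c_[X, np.ones(len(X))] @ w

def permutation_importance(X, y, predict, n_repeats=10):
    """Mean increase in MSE when each feature column is shuffled."""
    base_mse = np.mean((y - predict(X)) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            importances[j] += np.mean((y - predict(Xp)) ** 2) - base_mse
    return importances / n_repeats

imp = permutation_importance(X, y, predict)
print(imp)  # feature 0 should dominate
```

Note that this attribution describes the model, not the data: if the model's inductive bias leads it to latch onto a spurious feature, the importance scores will faithfully report that spurious reliance, which is exactly the limitation that motivates data-centric explanations.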

