Why Do 90% of ML Projects Fail to Have a Long-Lasting Impact?

Rohit Malhotra
5 min read · Mar 21, 2023


INTRODUCTION

You may have worked very hard on your data science project, yet not received the expected response from stakeholders. This is often because the value the project brings to them was not communicated effectively. Many data science practitioners face this issue, and the success of the generative AI model ChatGPT illustrates the point: ChatGPT received a phenomenal response across domains because it offered an intuitive interface that let people easily engage with it and accomplish their tasks.

So the important question is: how do you share your model results with stakeholders in an interpretable way that engages them and earns their trust for long-term results? The answer lies in Explainable AI, also called Interpretable AI.

WHAT IS EXPLAINABLE AI (XAI)?

Explainable AI (XAI) refers to the development of artificial intelligence (AI) systems that can be easily understood and interpreted by humans. The goal of XAI is to create AI systems that can provide clear explanations of their decision-making processes, so that humans can understand how and why they arrived at a particular decision.

This is especially important in situations where the decision made by the AI system could have significant consequences, such as in medical diagnosis, autonomous driving, or financial trading. If an AI system makes a decision that is difficult to understand or explain, it can be difficult for humans to trust the system, which can limit its usefulness and adoption.

XAI research focuses on developing new algorithms and techniques for interpreting and visualizing the output of AI models, as well as designing user interfaces that communicate these explanations effectively to humans. The aim is to make AI systems more transparent, trustworthy, and accessible to a wider range of users.

HOW TO USE EXPLAINABLE ML/AI IN YOUR PROJECT

Several libraries implement Explainable Machine Learning (XAI) techniques in different programming languages (a short SHAP sketch follows the list below). Some of the popular libraries are:

InterpretML: A Python library that provides a suite of tools for training interpretable models and explaining black-box models.

Lime: A Python library for generating local explanations for machine learning models using perturbation methods.

SHAP (SHapley Additive exPlanations): A Python library for computing Shapley values and other feature importance measures for machine learning models.

AIX360: An open-source toolkit for understanding and interpreting machine learning models. It includes explainability methods such as contrastive explanations, causal analysis, and fairness metrics.

H2O.ai: A suite of tools for building and interpreting machine learning models, including tools for feature importance, partial dependence plots, and model inspection.

IBM AI Fairness 360: A Python library for detecting and mitigating bias in machine learning models using a variety of fairness metrics.

ELI5 (Explain Like I’m Five): A Python library that provides simple explanations of machine learning models using feature weights and examples.

DALEX: A Python and R library for explaining machine learning models using a variety of techniques, including feature importance, partial dependence plots, and individual conditional expectation (ICE) plots.

[Image: Shapash library dashboard]
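
As a taste of how these libraries are used, here is a minimal SHAP sketch. The dataset and model below (scikit-learn's diabetes data and a random forest) are illustrative assumptions, not from this article; any fitted tree ensemble would work the same way.

```python
# Minimal SHAP sketch: explain a tree-ensemble model's predictions.
# The dataset and model are illustrative; swap in your own.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: feature importance across the whole dataset.
shap.summary_plot(shap_values, X)
```

Each Shapley value quantifies how much a feature pushed one prediction above or below the model's average output, so the same computation supports both per-prediction and dataset-wide views.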

TWO ASPECTS OF XAI

Local interpretability and global interpretability are two different aspects of Explainable AI that refer to the scope of the explanation provided by the AI system.

Local interpretability refers to the ability of an AI system to provide an explanation for a specific prediction or decision. In other words, it explains why the AI system made a particular decision for a given input instance. This is useful when trying to understand how the AI system is making decisions on individual data points. Local interpretability techniques include methods such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations).
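
Here is a minimal local-explanation sketch with LIME, assuming an illustrative scikit-learn classifier on the iris dataset (any model exposing predict_proba would do):

```python
# Minimal LIME sketch: explain a single prediction of a classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Why did the model score this one row the way it did?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, contribution weight) pairs
```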

Global interpretability, on the other hand, refers to the ability of an AI system to provide an explanation of its overall behavior and decision-making process across all input instances. This is useful when trying to understand how the AI system works as a whole, including the relative importance of different input features and how they interact with each other. Global interpretability techniques include methods such as decision trees, rule-based models, and linear models.
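
As a sketch of global interpretability using tools scikit-learn ships — permutation importance and partial dependence — with an illustrative dataset and model:

```python
# Minimal global-interpretability sketch: permutation importance and
# a partial dependence plot. Dataset and model are illustrative.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Global importance: how much does shuffling each feature hurt the score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(X.columns, result.importances_mean), key=lambda p: -p[1]
):
    print(f"{name}: {score:.3f}")

# Partial dependence: average effect of one feature ("bmi") on predictions.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"])
plt.show()
```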

Both local and global interpretability matter, for different reasons. Local interpretability helps in understanding individual predictions and can surface biases or errors in the model. Global interpretability helps in understanding the overall behavior of the AI system and can build trust with users, especially in domains such as healthcare, finance, and law.

WHY DOES EXPLAINABLE AI MATTER?

It is crucial for an organization to fully understand its AI decision-making processes, with model monitoring and accountability, rather than trusting models blindly. Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning, and neural networks.

ML models are often thought of as black boxes that are impossible to interpret.² Neural networks used in deep learning are some of the hardest for a human to understand. Bias, often based on race, gender, age or location, has been a long-standing risk in training AI models. Further, AI model performance can drift or degrade because production data differs from training data. This makes it crucial for a business to continuously monitor and manage models to promote AI explainability while measuring the business impact of using such algorithms. Explainable AI also helps promote end user trust, model auditability and productive use of AI. It also mitigates compliance, legal, security and reputational risks of production AI.

Explainable AI is one of the key requirements for implementing responsible AI, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability.³ To help adopt AI responsibly, organizations need to embed ethical principles into AI applications and processes by building AI systems based on trust and transparency.

In my forthcoming article, I will share a practical demonstration of how to use these libraries on data from a Combined Cycle Power Plant.

Thanks for reading.

Keep learning. Keep growing. Keep trying to make things better than they are today.

You can contact me via my LinkedIn: https://www.linkedin.com/in/rohitmalhotra67/

Loved reading the article? Become a Medium member to continue learning without limits. I’ll receive a small portion of your membership fee if you use the following link, at no extra cost to you.
