Understanding XAI and EBM
An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM)

In recent times, machine learning has become central to developments in many fields, such as sports, medicine, science, and technology. Machines (computers) have become so capable that they have even defeated professionals at games like Go. Such developments raise the question of whether machines would also make better drivers (autonomous vehicles) or even better doctors.
In many machine learning applications, users rely on the model to make decisions. But a doctor certainly cannot operate on a patient simply because “the model said so.” Even in low-risk situations, such as choosing a movie to watch on a streaming platform, a certain measure of trust is required before we surrender hours of our time to a model’s recommendation.
Although many machine learning models are black boxes, understanding the rationale behind a model’s predictions would certainly help users decide when to trust them and when not to. This need to understand the rationale leads to the concept called Explainable AI (XAI).
What is Explainable AI (XAI)?
Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts. [Wikipedia]
How is Explainable AI different from Artificial Intelligence?

In general, an AI system arrives at a result using an ML algorithm, but even the architects of the system may not fully understand how the algorithm reached that result.
On the other hand, XAI is a set of processes and methods that allows users to understand and trust the output of a machine learning model. XAI is used to describe an AI model, its expected impact, and its potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in…
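To make this concrete, the snippet below is a minimal sketch of what a “glassbox” model looks like in practice, using the Explainable Boosting Machine from the open-source InterpretML library (`pip install interpret`). The synthetic dataset and parameter values are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of a glassbox model with InterpretML's EBM.
# The synthetic dataset below is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Synthetic binary-classification data (illustrative only)
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train an Explainable Boosting Machine -- a glassbox model
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

# Global explanation: how each feature contributes to predictions overall
show(ebm.explain_global())

# Local explanation: why the model made these specific predictions
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

Here, `show()` renders an interactive visualization of each feature’s learned contribution, which is exactly the kind of “understanding the rationale” discussed above.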