The SHAP Values with H2O Models

Chris Kuo / Dr. Dataman
Published in Dataman in AI
9 min read · Nov 24, 2021


Many machine learning algorithms are complicated and hard to understand, even though they achieve impressive accuracy. As humans, we must be able to understand how decisions are made before we can trust the decisions of AI systems. We need ML models to function as expected, to produce transparent explanations, and to be open about how they work. Explainable AI (XAI) is an important field of research and has been guiding the development of AI.

Since many data scientists use the open-source H2O module, why not cover the advances in H2O for model explainability? The good news is that H2O has released its model explainability capability, described in this H2O document. In this article, I will demonstrate how to use it.

This article is a sister article to the following. If you have not read any of them, I strongly recommend reading at least Parts I through III.

Part I: Explain Your Model with the SHAP Values

Part II: The SHAP with More Elegant Charts

Part III: How Is the Partial Dependent Plot Calculated?

Part IV: An Explanation for eXplainable AI

Part V: Explain Any Models with the SHAP Values — Use the KernelExplainer
