Explainable AI with Google Cloud Vertex AI

How to interpret and understand ML models with explainable AI (XAI)

Sascha Heyer
Google Cloud - Community

--

Explainable AI is, as the name already suggests, all about explaining the predictions our models make. It helps you understand why your machine learning model is making certain decisions.

This is useful to verify that the model is focusing on the right patterns, or to find issues such as bias or a wrongly trained model.

Jump Directly to the Notebook and Code

All the code for this article is ready to use in a Google Colab notebook. If you have questions on how to train the models I used in this article, please reach out to me via LinkedIn or Twitter.

What does it look like with different types of data?

Explainable AI is still an active research area. Luckily, there is already a large toolset and a number of methods you can use to get explanations for image, text, and tabular data.

For images, explainable AI methods can highlight the regions or pixels that influenced the prediction most.
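To make the idea concrete, here is a minimal sketch of one such attribution method, integrated gradients, applied to a toy linear "model" over a small grayscale image. This is purely illustrative (the model, image size, and step count are my own assumptions, not from the article); in practice Vertex AI computes attributions like these for you against your deployed model.

```python
import numpy as np

# Toy "model": a linear scorer over a 4x4 grayscale image. This stands in
# for a real image classifier so the example runs without any ML framework.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

def predict(image: np.ndarray) -> float:
    return float((weights * image).sum())

def grad(image: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # Numerical gradient of predict() at `image` via central differences.
    g = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        bump = np.zeros_like(image)
        bump[idx] = eps
        g[idx] = (predict(image + bump) - predict(image - bump)) / (2 * eps)
    return g

def integrated_gradients(image, baseline, steps=20):
    # Average the gradients along the straight-line path from the baseline
    # to the image, then scale by the input difference. Pixels with large
    # attribution values influenced the prediction most.
    total = np.zeros_like(image)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (image - baseline)
        total += grad(point)
    return (image - baseline) * (total / steps)

image = rng.uniform(size=(4, 4))
baseline = np.zeros_like(image)       # all-black image as the reference
attributions = integrated_gradients(image, baseline)

# Completeness property: the attributions sum (up to numerical error) to
# the difference between the prediction and the baseline prediction.
print(np.isclose(attributions.sum(), predict(image) - predict(baseline)))
```

Overlaying `attributions` as a heatmap on the input image gives exactly the kind of highlighted-region visualization described above.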
