TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.


Explainable AI: Part Two — Investigating SHAP’s Statistical Stability

Author: Helena Foley, Machine Learning Researcher at Max Kelsen, based on the SHAP paper by Melvyn Yap, PhD, Senior Machine Learning Researcher at Max Kelsen

Max Kelsen · Published in TDS Archive · 9 min read · Feb 2, 2021


In our previous blog, we introduced the concept of SHAP values (4) and their advantages over other saliency mapping methods such as LIME (5). We also proposed that applying this method to genetic data can be used to explore biology and enable the clinical utility of deep models. However, given the reliability issues touched on in the previous blog, as well as those reported in (1, 9), it is paramount that we tread carefully before putting our trust in the interpretations of deep learning results. One way to navigate this safely is through the learned biological relevance of the discovered features, as well as benchmarking the results against well-established, trusted traditional bioinformatics methods.
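To make the quantity under discussion concrete, here is a minimal, from-scratch sketch of the exact Shapley value computation that SHAP approximates. Everything here is illustrative (the toy linear model, the all-zeros baseline, and the function names are our own, not from the SHAP library); for a linear model, the exact Shapley value of feature i reduces to w_i · (x_i − b_i), which makes the result easy to check by hand.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x.

    Features outside a coalition S are replaced by their baseline
    values; this is the classic coalitional-game formulation that
    SHAP estimates efficiently for real models.
    """
    n = len(x)

    def v(S):
        # Value of coalition S: present features keep x, absent ones take baseline.
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = set(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(S | {i}) - v(S))
        phi.append(total)
    return phi

# Toy linear "model" (weights are arbitrary, for illustration only).
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))

phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
# For a linear model, phi[i] == w[i] * (x[i] - baseline[i]): [2.0, -2.0, 1.5]
```

Note the efficiency property: the attributions sum to f(x) − f(baseline), which is one of the axiomatic guarantees that distinguishes SHAP from heuristic saliency methods.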

Neural Network Model

To begin, we needed a model and a target hypothesis. For this purpose, we trained a convolutional neural network (CNN; Fig 1) to predict tissue type using RNA-seq data from the Genotype-Tissue Expression (GTEx)…
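The architectural details are cut off above, but the core building block of such a model, a 1D convolution sliding across the expression vector, can be sketched in plain NumPy. All sizes and names below are hypothetical placeholders (a 12-gene input and 4 filters of width 3), not the dimensions used in the paper.

```python
import numpy as np

def conv1d_relu(x, kernels, stride=1):
    """Valid 1D convolution followed by ReLU.

    x: expression vector of shape (L,)
    kernels: filter bank of shape (K, k_len)
    returns: feature maps of shape (K, out_len)
    """
    K, k_len = kernels.shape
    out_len = (len(x) - k_len) // stride + 1
    out = np.empty((K, out_len))
    for i in range(out_len):
        window = x[i * stride : i * stride + k_len]
        out[:, i] = kernels @ window  # each filter's response at position i
    return np.maximum(out, 0.0)      # ReLU activation

# Hypothetical toy sizes: 12-gene expression vector, 4 filters of width 3.
rng = np.random.default_rng(0)
x = rng.random(12)
kernels = rng.standard_normal((4, 3))
feat = conv1d_relu(x, kernels)
# feat.shape == (4, 10): one feature map per filter, one value per window
```

In a full CNN these feature maps would be pooled and fed through dense layers to a softmax over tissue types; this sketch only shows the convolutional step that gives the model its locality structure over the gene axis.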
