Model Interpretability
Explainable AI: Part One — A Short Introduction
Author: Helena Foley, Machine Learning Researcher at Max Kelsen
The road to the practical application of Machine Learning (ML) to medical data has been a long one and finally, the end may be in sight! It has already been applied to stained tumour tissue microarray (TMA) samples (Bychkov et al., 2018), whole-slide images (Ehteshami Bejnordi et al., 2017), and skin cancer images (Haenssle et al., 2018) with great diagnostic accuracy, at times even outperforming clinical experts! However, before it can truly be implemented in the wider field of healthcare and change people’s lives, we must first dot the i’s and cross the t’s on its transparency. This blog is the first of two that will give you a short introduction to the world of explainable AI.
Traditional models, such as linear regression, are intuitive and highly explainable. However, their utility with genomic data is limited by the data’s high dimensionality (we’re talking tens or even hundreds of thousands of features per sample!). If the sample size is sufficiently large, more advanced models, such as convolutional neural networks (CNNs), can produce more accurate results (Akkus et al., 2017). However, these advanced…
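To make the contrast concrete, here is a minimal sketch of why a linear model counts as “highly explainable”: its fitted coefficients can be read directly as per-feature effects. The data here is synthetic and purely illustrative, not genomic.

```python
import numpy as np

# Toy illustration (synthetic data, not real genomic features): a linear
# model's coefficients are directly interpretable as per-feature effects.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_coefs = np.array([2.0, 0.0, -1.5])  # feature 1 has no real effect
y = X @ true_coefs + rng.normal(scale=0.1, size=100)

# Ordinary least squares fit
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
for i, c in enumerate(coefs):
    print(f"feature {i}: estimated effect {c:+.2f}")
```

The recovered coefficients track the true effects closely, so each feature’s contribution to the prediction is immediately legible. With hundreds of thousands of correlated genomic features, this direct reading breaks down, which is part of what motivates both deeper models and the explainability methods discussed in this series.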