Introduction to Explainable Artificial Intelligence in medical imaging


Artificial intelligence (AI)-based computer-aided diagnosis (CAD) is a promising way to make the diagnostic process more efficient and accessible to the general public. Deep learning is the most widely used AI technology for a variety of tasks, including medical imaging. It is the state of the art for many computer vision tasks and has been applied to medical imaging problems such as Alzheimer's disease classification [1], lung cancer detection [2], and retinal disease detection [3,4]. Despite these remarkable results, the lack of tools to inspect the behavior of black-box models limits the use of deep learning in the medical field, where explainability and reliability are key to earning the trust of medical professionals.

In addition, newer regulations, such as the European General Data Protection Regulation (GDPR), are making black-box models harder to deploy in all sectors, including healthcare, because the ability to retrace a decision is now required [5].

In such situations, explainable AI (XAI) is a promising tool to interpret, interlink, and understand the reasons behind a black-box model's decisions. XAI can be defined as follows:

Explainable artificial intelligence (XAI) is a set of processes and strategies that enable human users to understand and trust machine learning findings and output [6].

Explainability is a fundamental enabler for deploying AI in the real world, since it helps ensure that the technology is used in a safe, ethical, fair, and trustworthy manner. Showing end users what a model looked at while making a judgement helps break misconceptions about AI and builds trust in the technology. For users without a deep learning background, such as most medical professionals, it is even more important to show the domain-specific attributes behind a decision.

Explainability and interpretability are sometimes used interchangeably, but they carry subtle differences. Interpretability is the extent to which a cause and effect can be observed within a system, whereas explainability is the extent to which the inner workings of a machine or deep learning system can be communicated in human terms [7]. Consider the following scenario: while performing a physics experiment, you only need to know the steps, not the detailed methodology, to understand how the experiment is carried out; this ability to understand what is happening is interpretability. As you go deeper, you need a conceptual grasp of each step, which is explainability.

Explainable AI techniques are classified into the following major types.

Model-specific vs model-agnostic:

Model-specific interpretation methods use the parameters and internal structure of a particular model to explain its decisions.

Example: Graph neural network explainer (GNNExplainer) [8].

Model-agnostic approaches, on the other hand, are mainly used for post-hoc analysis and are not restricted to a particular model architecture.

Example: Layer-wise relevance propagation [9].

Global methods vs local methods:

Local interpretation methods explain a single model outcome, that is, one prediction at a time.

Example: Local interpretable model-agnostic explanations (LIME) [10].

Global approaches, on the other hand, look at the model as a whole, using all available information about the model, the training process, and the data, and aim to describe the model's behavior in general.

Example: Feature importance methods.
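
As a minimal sketch of a global feature-importance method, the snippet below runs scikit-learn's permutation importance on a synthetic tabular dataset; the random-forest model and the data are placeholders for illustration only, not taken from any cited study.

```python
# Global feature importance via permutation: shuffle one feature at a time and
# measure how much the held-out accuracy drops. Model and data are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

Because every feature is scored against the whole test set, the resulting ranking describes the model's behavior in general rather than any single prediction.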

Pre-model vs in-model vs post-model:

Pre-model methods are self-contained: they analyze the data itself and do not depend on any particular model architecture.

Example: Principal component analysis [11].
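
For example, a minimal pre-model sketch: PCA is applied to the data alone, before and independently of any classifier (the scikit-learn breast-cancer dataset is just a convenient stand-in).

```python
# Pre-model analysis: PCA inspects the structure of the data itself,
# independent of whatever model is trained later. The dataset is a placeholder.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_breast_cancer().data)
pca = PCA(n_components=3).fit(X)
print(pca.explained_variance_ratio_)  # share of variance captured by each component
```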

In-model and post-model approaches are used when the explanation is built into the model itself or produced after the model has been trained, respectively.

Example: SHAP [12].
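
A hedged sketch of a post-model explanation, applying the shap library's model-agnostic KernelExplainer to an already-trained classifier; the gradient-boosting model and dataset are placeholders, and details of the shap API may vary between versions.

```python
# Post-hoc (post-model) explanation with SHAP: explain a trained classifier
# by estimating Shapley values. The model and dataset are placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# KernelExplainer treats the model as a black box and perturbs features
# around a small background sample to estimate each feature's contribution.
background = shap.sample(data.data, 50)
explainer = shap.KernelExplainer(lambda X: model.predict_proba(X)[:, 1], background)
shap_values = explainer.shap_values(data.data[:5])
print(shap_values.shape)  # (5, 30): one attribution per feature for each explained sample
```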

Surrogate methods vs visualization methods:

Surrogate methods use a separate, simpler group of models to examine the behavior of a black-box model.

Example: Decision trees [13].
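
A minimal sketch of the surrogate idea with assumed placeholder models: a shallow decision tree is trained to mimic the predictions of a black-box classifier, so its splits give an interpretable approximation of the black-box decision logic.

```python
# Global surrogate: fit an interpretable decision tree to the *predictions* of a
# black-box model (here a placeholder MLP), not to the ground-truth labels.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black-box predictions.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```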

Visualization approaches are not separate models, but they aid in visually understanding various aspects of a model.

Example: Activation maps.

Figure 1 shows activation maps for chest X-rays, highlighting the regions of interest that drive an accurate COVID-19 diagnosis.

Figure 1: Grad-CAM for three X-ray images diagnosed with COVID-19 pneumonia. The first column shows the original X-rays and the second column shows the activation maps overlaid on the original images [15].
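
As an illustration of how such an activation map can be produced, here is a minimal Grad-CAM sketch in PyTorch; the ResNet-18 backbone, the chosen target layer, and the random input tensor are placeholders for illustration and are not the model or data used in [15].

```python
# Minimal Grad-CAM sketch in PyTorch. The backbone, target layer, and input are
# placeholders; in practice you would load a trained diagnostic model and a
# preprocessed X-ray instead.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()        # stand-in for a trained CAD model
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))  # last conv block

x = torch.randn(1, 3, 224, 224)                     # stand-in for a preprocessed X-ray
logits = model(x)
feats["a"].retain_grad()                            # keep gradients of the feature maps
logits[0, logits.argmax()].backward()               # backprop the predicted-class score

weights = feats["a"].grad.mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).detach()  # scale to [0, 1]
```

The resulting map can then be rendered with a colormap and overlaid on the X-ray, as in Figure 1.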

Attribution-based methods of explainable AI:

The purpose of an attribution method is to determine how much an input feature contributes to the target neuron, which in a classification task is usually the output neuron of the correct class.

Attribution maps are heatmaps formed by arranging the attributions of all input features in the shape of the input sample. Features that contribute positively to the activation of the target neuron are typically highlighted in red, whereas those that negatively affect the activation are highlighted in blue.
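
A minimal matplotlib sketch of this rendering convention, with random placeholder attribution values reshaped to the input dimensions:

```python
# Render an attribution map with a diverging colormap: red = positive
# contribution, blue = negative. The attribution values here are random placeholders.
import numpy as np
import matplotlib.pyplot as plt

attributions = np.random.randn(224, 224)   # stand-in for per-pixel attributions
vmax = np.abs(attributions).max()          # symmetric range so zero maps to white

plt.imshow(attributions, cmap="bwr", vmin=-vmax, vmax=vmax)
plt.colorbar(label="attribution")
plt.axis("off")
plt.show()
```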

Deep Learning Important FeaTures (DeepLIFT) [14] and SHapley Additive exPlanations (SHAP) [12] are examples of attribution-based methods.

Attribution-based methods are further classified into perturbation-based methods and backpropagation-based methods.

Perturbation-based methods:

Perturbation is the simplest way to investigate how changing an AI model's input features affects its output. This is done by masking, removing, or altering specific input features, running a forward pass (output computation), and comparing the new output with the original one. It is computationally expensive, since a forward pass must be run after perturbing each group of input features. Occlusion is a widely used reference in attribution studies because it is a straightforward, model-agnostic way to reveal a model's feature relevance. Shapley value sampling, which computes approximate Shapley values by sampling each input feature several times, is another perturbation-based approach.
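
A minimal occlusion sketch in PyTorch, assuming a generic classifier `model` that returns class logits for a (C, H, W) image tensor; the patch size, stride, and zero fill value are arbitrary illustrative choices.

```python
# Occlusion: slide a blank patch over the image and record how much the
# predicted-class probability drops. `model` is any image classifier (assumed).
import torch

def occlusion_map(model, image, target_class, patch=16, stride=8):
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        _, H, W = image.shape
        heatmap = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0.0   # mask one region
                prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heatmap[i, j] = base - prob                   # large drop = important region
    return heatmap
```

Each entry of the returned heatmap is the drop in the target-class probability caused by masking the corresponding patch; upsampling it to the input resolution gives the occlusion attribution map.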

Backpropagation-based methods:

Backpropagation-based methods compute the attributions for all input features with a single forward and backward pass through the network. In some algorithms these passes must be repeated several times, but the number of passes is independent of the number of input features and far smaller than for perturbation-based methods.

The shorter run time comes at the cost of a weaker link between the attribution and the actual change in the model's output.

DeepLIFT [14] and SmoothGrad [16] are examples of backpropagation-based approaches.
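
A minimal sketch of the simplest backpropagation-based attribution, a gradient saliency map with SmoothGrad-style averaging over noisy copies of the input [16]; `model` is again an assumed generic PyTorch classifier taking a (C, H, W) image.

```python
# Gradient-based attribution: one forward and one backward pass per (noisy) copy.
# SmoothGrad [16] averages the input gradients over several noise-perturbed inputs.
import torch

def smoothgrad_saliency(model, image, target_class, n_samples=25, noise_std=0.1):
    model.eval()
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        score.backward()                       # gradient of the class score w.r.t. the input
        grads += noisy.grad
    return (grads / n_samples).abs().sum(dim=0)  # collapse channels into a 2D saliency map
```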

A majority of the medical imaging literature that studies the interpretability of deep learning methods uses attribution-based methods because of their ease of use: researchers can train a suitable neural network architecture without the added complexity of making it inherently explainable, and then apply a readily available attribution method.

Applications of attribution-based methods can be found in brain imaging, retinal imaging, skin imaging, breast imaging, CT imaging, and X-ray imaging.

In the next post, I will explain the mathematical foundations of attribution-based explainable AI methods.

So stay tuned!

References

1. T. Jo, K. Nho, and A. J. Saykin, “Deep learning in Alzheimer’s disease: Diagnostic classification and prognostic prediction using neuroimaging data,” Front. Aging Neurosci., vol. 11, p. 220, 2019.

2. K.-L. Hua, C.-H. Hsu, S. C. Hidayati, W.-H. Cheng, and Y.-J. Chen, “Computer-aided classification of lung nodules on computed tomography images via deep learning technique,” Onco. Targets. Ther., vol. 8, pp. 2015–2022, 2015.

3. S. Sengupta, A. Singh, H. A. Leopold, T. Gulati, and V. Lakshminarayanan, “Ophthalmic diagnosis using deep learning with fundus images — A critical review,” Artif. Intell. Med., vol. 102, no. 101758, p. 101758, 2020.

4. H. Leopold, A. Singh, S. Sengupta, J. Zelek, and V. Lakshminarayanan, "Recent advances in deep learning applications for retinal diagnosis using OCT," in State of the Art in Neural Networks, A. S. El-Baz, Ed. NY: Elsevier, in press, 2020.

5. A. Holzinger, C. Biemann, C. S. Pattichis, and D. B. Kell, “What do we need to build explainable AI systems for the medical domain?,” arXiv [cs.AI], 2017.

6. “Explainable AI,” Ibm.com. [Online]. Available: https://www.ibm.com/in-en/watson/explainable-ai. [Accessed: 20-Mar-2022].

7. “Machine Learning Explainability vs Interpretability: Two concepts that could help restore trust in AI,” KDnuggets. [Online]. Available: http://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html. [Accessed: 20-Mar-2022].

8. R. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec, “GNNExplainer: Generating explanations for graph Neural Networks,” arXiv [cs.LG], 2019.

9. S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation," PLoS One, vol. 10, no. 7, p. e0130140, 2015.

10. M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why should I trust you?': Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.

11. S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometr. Intell. Lab. Syst., vol. 2, no. 1–3, pp. 37–52, 1987.

12. S. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” arXiv [cs.AI], 2017.

13. S. R. Safavian and D. Landgrebe, “A survey of decision tree classifier methodology,” IEEE Trans. Syst. Man Cybern., vol. 21, no. 3, pp. 660–674, 1991.

14. A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important features through propagating activation differences,” arXiv [cs.CV], 2017.

15. L. R. Baltazar et al., “Artificial intelligence on COVID-19 pneumonia detection using chest xray images,” PLoS One, vol. 16, no. 10, p. e0257884, 2021.

16. D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, “SmoothGrad: removing noise by adding noise,” arXiv [cs.LG], 2017.
