AI and the Question of Explainability

Arianna Dorschel
Luminovo
Jun 10, 2019

Do we need to understand our models in order to trust them?

Photo by Émile Perron on Unsplash

The development of artificial intelligence and neural networks has not only greatly impacted the field of Computer Science, but has also drawn significant attention to the field of Neuroscience. The concept of artificial neural networks is inspired by the biology of neurons: artificial neurons were designed as a rudimentary imitation of how biological neurons take in and transform information.

As a Neuroscience student, I was initially drawn to neural networks as computational models that might help us better understand our own cognition. However, the fundamental qualitative differences between human and artificial intelligence make it very challenging to understand a model’s decision-making process in human terms.

Most AI currently in use operates as a “black box”: we can measure accuracy on a test set and then deploy our model on new data with a certain confidence in its outputs, but if challenged, we would struggle to explain the model’s decision in any individual instance. In response, the research field of explainable AI (XAI) has emerged, which aims to develop techniques that make the process leading to a model’s output understandable to humans. This is a challenging task with many hurdles to overcome, which leads to the…
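
To make the black-box problem concrete, here is a minimal sketch (my own illustration, not from the article), assuming scikit-learn and an off-the-shelf dataset: a test-set accuracy gives us aggregate confidence, yet an individual prediction comes with no reason attached, and a model-agnostic technique such as permutation feature importance is one common first step toward an explanation.

```python
# Minimal sketch (illustrative, not from the original article): the dataset and
# model choices are assumptions, made only to show the black-box workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Aggregate confidence: a single accuracy number over the whole test set.
print("test accuracy:", model.score(X_test, y_test))

# An individual decision: the model returns a label, but no reason for it.
print("prediction for one sample:", model.predict(X_test[:1]))

# One common model-agnostic XAI technique: permutation feature importance,
# which scores each feature by how much shuffling it degrades test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = result.importances_mean.argsort()[::-1][:5]
print("most influential feature indices:", top5)
```

Permutation importance is only a global measure; per-instance methods such as LIME or SHAP go further, but the basic tension remains: the explanation is an approximation layered on top of a model we still cannot read directly.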
