Explainable AI — A Trust Issue

Jochem Klinkhamer
Published in KIN Research
Aug 16, 2020 · 5 min read

you will come
whether I like it or not
here to transform
every certainty that I’ve got

far more intelligent
than your creator could grasp
how will you treat me
when you’re no longer
one of us.

I wrote this poem during my first week of internship, when I started my five-month thesis research on Explainable Artificial Intelligence (XAI). It encapsulates my thought process at that time. Fascinated and worried about the future, I had educated myself on the topic of AI. The more I learned, the more I understood how far we still are from reaching ‘general AI’ — an artificial agent that can apply its knowledge across many contexts. In simple words: we are still far from creating something that can learn in the same way as we humans do. But if we do manage to create general AI, it will be difficult to align its values with ours. How can we ensure that AI will not harm us under any circumstances?

Explainability, a key to developing Responsible AI

According to developers, making AI explainable could be the first step in ensuring that AI will not harm us. If we want to align human values (whatever those may be is a different debate) with those of an AI, we should understand its line of reasoning, its rationale. Understanding the rationale of AI systems (i.e. algorithms) can be challenging due to the possible black-box character of these systems. Therefore, developers suggest that making AI systems more explainable should be one of our most prominent concerns, both for pushing the development of AI forward and for ensuring that AI is not dangerous, that is, for developing ‘responsible AI’. So, to act on my worries and fascinations, the field of XAI was a good place to start.

Explainability: the degree to which a human can articulate the trained model or rationale of a particular decision (Mittelstadt et al., 2016).

Developing more explainable and therefore responsible AI should be one of our most prominent concerns, but is difficult due to the black-box nature of AI algorithms.

Thesis research in Explainable AI

Current XAI literature is dominated by theoretically driven research, focusing on the technical implications of explanation giving. In simple words: XAI research focuses on which techniques can be used to give an explanation.
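
For readers unfamiliar with these techniques, the sketch below shows one widely used example from that literature: permutation feature importance, which reports how much a model’s accuracy drops when each input feature is shuffled. The model, dataset, and library calls are illustrative assumptions on my part, not part of the thesis itself.

```python
# A minimal sketch of one common XAI technique: permutation feature importance.
# Model, dataset, and feature names here are illustrative, not from the thesis.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top_features:
    print(f"{name}: {importance:.3f}")
```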

In my research, I took an empirical approach and talked directly to developers, which is a novel approach within XAI research. My aim was to suggest new lines of inquiry. I interviewed 12 developers, asking them how they consider explainability when developing machine learning models.

Building trust with stakeholders via explanations

My results suggest that, alongside the objective, technical descriptions found in XAI research, contextual, persuasive factors also influence how developers consider explanations. Based on this finding, my thesis supervisor, Ella Hafermalz, and I coined the term “explanations as persuasive rhetoric” to capture these contextual, persuasive factors that developers weigh when giving explanations.

Developers told me that they mainly use explainability to establish trust with their stakeholders. In doing so, they often engage in explanation giving that is aimed at reassuring a stakeholder, rather than communicating factual information.

“I was talking about machine learning, but later I saw they [stakeholders] understood better when I would talk about pattern recognition.”

In this example, the developer emphasizes the importance of giving non-technical explanations. When he used simple, recognizable terminology, the stakeholder was more likely to understand the AI model, which ultimately resulted in a higher level of trust.

In another example, a developer emphasized the importance of treating her stakeholders as individuals with their own motivations.

“[…] everyone has their own story, which has to be taken into account. I am not a change manager, but I did experience that people need to be motivated intrinsically. They need to be comfortable with it before you can propose novel things.”

Again, in such a situation, reassuring stakeholders can be more effective than communicating factual information. An example would be hosting a feedback session during which stakeholders can voice and address their doubts about the AI model.

Two directions for further research

A multitude of these kinds of examples led me to conclude that, next to objective explanations, developers use explanations as persuasive rhetoric, aimed at reassuring their stakeholders and establishing trust. This conclusion resulted in two recommendations for new lines of inquiry within the XAI literature.

Contextual and simplified explanations

First, the importance of contextualizing explanations based on the background of stakeholders should be explored. For instance, when a stakeholder has a non-technical background, an objective, technical explanation may not be understandable and therefore does not suffice. In this situation, developers must use persuasive terminology that is understandable to the stakeholder, for example by using emotional rather than rational arguments.

“I experienced that educational background… so whether you did MBO, HBO or WO [Dutch vocational, applied, and university education levels] matters. When someone is old, like 60, but has a university degree, then they are more easily convinced with rational arguments. The other person was also old but had a vocational degree, so with him I had to make emotional arguments. I really had to reassure him. So that was a clear difference.”

Alternative avenues to provide explanation

Second, understanding explanation giving as an objective act, focusing on explanations as technology-driven tools, has its limitations. Developers told me that they often consider explanation types that move beyond the technical tools dominating current XAI literature. An example is developers using color-coded visualizations in order to make the output of their model more interpretable for end-users.
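
To give a sense of what such a visualization could look like, here is a minimal sketch that shades each prediction by the model’s output score, so that a non-technical end-user can read the result at a glance. The record names, scores, and colour scheme are hypothetical and only serve as an illustration; they are not taken from the interviews.

```python
# A minimal, illustrative sketch of a color-coded explanation for end-users:
# each bar is shaded by the model's output score (green = low, red = high).
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical records and model outputs (e.g. a predicted fraud probability).
records = ["Claim A", "Claim B", "Claim C", "Claim D"]
predicted_risk = np.array([0.92, 0.35, 0.61, 0.08])

fig, ax = plt.subplots(figsize=(6, 2.5))
ax.barh(records, predicted_risk, color=plt.cm.RdYlGn_r(predicted_risk))
ax.set_xlim(0, 1)
ax.set_xlabel("Predicted risk")
ax.set_title("Color-coded model output for stakeholders")
plt.tight_layout()
plt.show()
```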

In conclusion, establishing trust with stakeholders is essential to ensure the acceptance of AI models, and thereby the success of AI projects. Developers frequently need to give explanations aimed at building trust and adapted to the language of managers and other stakeholders. With this research, I’ve tried to provide some guidance on what explainability means, what it constitutes, and why it is important for developing Responsible AI.

About Jochem Klinkhamer

The influence of digital technologies on people and their daily experiences fascinates me. It reflects two sides of my personality: my affection for personal emotions and my interest in innovative, impactful technologies. In my work, I’d like to use both sides, while on a practical level being able to work with people.

Areas in which I’d like to work are Talent & Organization, Learning & Development, and Leadership Coaching. To translate those areas into tangible projects: agile transformations, drafting persona roles, and hosting workshops on working with a novel technique. At the end of the day, I want to talk to people, to understand how certain developments are changing their personal or professional lives.

Connect with me on LinkedIn.
