Published in CodeX

Peeking into the Black Box — From explainable AI to explaining AI — Part 3

AI as a black-box © 2021 Henner Hinze

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

– Eliezer Yudkowsky (Decision and AI theorist)

AiX Design © 2021 Henner Hinze

Part 1 of this series introduced the concept of explainability of AI-systems as a vital component of AiX Design (the design of the experience with AI-systems). Part 2 then took a closer look at trust as one of the primary goals of explainability.

This article presents the essential considerations for not only making AI-systems explainable but actually explaining them: Who demands an explanation? For what purpose do they need it? And what are the characteristics of good explanations?

Only a human-centered approach to explainable AI will enable product managers, designers, user researchers, and engineers to produce AI-systems that are useful to non-AI experts in their daily life and work.

As Miller (2019) states in “Explanation in Artificial Intelligence: Insights from the Social Sciences”:

“XAI [eXplainable AI] is a human-agent interaction problem.”

It follows that explainable AI should draw strongly on psychology and cognitive science as well as design practice. Yet explainable AI is too often treated as a mere collection of engineering techniques (for example, SHAP, LIME, or ICE). Without considering psychological and social factors, these techniques are prone to fail to produce useful explanations in practice. Researchers with a background in the social sciences, together with product designers, must provide expertise to ensure that the inner workings and predictions of AI-systems are explained appropriately, enabling all stakeholders to make informed decisions.

Other articles in this series

Part 1: Peeking into the Black Box — A Design Perspective on Comprehensible AI
Part 2: Peeking into the Black Box — Trust in AI

When explainable AI does not explain

As humans, we look for explanations to improve our understanding — to derive a stable model that we can use to make better predictions about the world around us. Therefore, we tend to ask for explanations whenever we observe something unexpected or abnormal. Surprising events show us that our mental models (i.e., our “theories” about the world that guide our behavior and decisions) are incorrect or at least incomplete.

A complete list of potential causes for an event, however, poses a cognitive burden too high to be practical. Thus, we tend to expect an explanation to present a limited set of causes that are actionable for us.

Miller (2019) describes explaining as consisting of three processes:

  1. Understanding potential causes
  2. Selecting the “best” causes
  3. Transferring the explanation from the explainer to the explainee

By contrast, Figure 1 shows a chart produced with a popular explainable AI technique called SHAP. It presents a list of all the factors used to predict the risk of a heart attack for a patient, and the influence of each factor on the prediction. In Miller’s model, this covers only the first process — understanding potential causes — but does not select the “best” causes, which makes it at best an incomplete explanation.

Figure 1: SHAP chart showing the average influence of features on the prediction of heart-attack risk. Reproduced from Al’Aref et al. (2019).

Assuming a doctor and a patient want to devise an intervention strategy to prevent a heart attack, a major issue becomes apparent: many factors are simply not useful. Age, sex, and ethnicity are the strongest predictors of a heart attack, but none of them can be changed. Further, many of the factors are correlated (they influence each other): being a past smoker makes it more likely one is a current smoker, weight and height determine the body mass index, and so on. The doctor must invest considerable effort in additional interpretation. In many other use cases, no expert is available, and users are left to guess what to make of automated predictions.
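For readers curious about the mechanics, the additive attributions behind charts like Figure 1 can be sketched with a toy model. For a linear model with independent features, the exact Shapley value of feature i reduces to wᵢ·(xᵢ − E[xᵢ]). The weights and patient record below are entirely hypothetical — a minimal sketch, not a clinical model:

```python
# Minimal sketch of SHAP-style additive attributions.
# For a linear model f(x) = sum(w_i * x_i) with independent features,
# the exact Shapley value of feature i is w_i * (x_i - E[x_i]).
# All weights and values are hypothetical, not clinical.

def linear_shap(weights, x, baseline):
    """Per-feature attributions relative to a baseline (population mean)."""
    return {name: w * (x[name] - baseline[name]) for name, w in weights.items()}

weights  = {"age": 0.04, "bmi": 0.02, "smoker": 0.30}  # toy risk model
baseline = {"age": 50.0, "bmi": 25.0, "smoker": 0.2}   # population means
patient  = {"age": 61.0, "bmi": 31.0, "smoker": 1.0}

attributions = linear_shap(weights, patient, baseline)
# "Local accuracy": the attributions sum to f(patient) - f(baseline),
# which is the property that per-feature bar charts like Figure 1 rely on.
```

Note that this lists every feature’s contribution — Miller’s process (1) — but performs no selection of the “best” causes, which is exactly the gap discussed above.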

It becomes clear that good explanations require a robust understanding of their context, including psychological and cognitive factors. The two questions to consider first are Who is the explainee? and Why are they asking for an explanation?

Explaining to Whom?

To create a good explanation, it is crucial to account for what the explainee knows or thinks they know. There is no “one size fits all”. The various roles involved in creating and using AI-systems bring clearly different skillsets, experiences, constraints, and incentives for why they might want explanations.

Tomsett et al. (2018) propose seven roles involved in the creation, usage, and maintenance of AI-systems:

  1. Creator-Owner
    Represents the owner of the intellectual property in the AI-system — managers, executive board members.
  2. Creator-Implementer
    Direct implementer of the AI-system — data scientists, developers, product owners, subject matter experts.
  3. Operator
    Gives input to the AI-system and retrieves its predictions — domain expert, user of the model.
  4. Executor
    Makes decisions based on the AI-system’s predictions — domain expert, user of the model.
  5. Decision subject
    Is affected by a decision based on the prediction of the AI-system.
  6. Data subject
    Persons whose personal data has been used to train the AI-system.
  7. Examiner
    Audits and investigates the AI-system — regulatory entity or corporate auditor.

Each of these roles may have different reasons and objectives for requesting an explanation of an AI-system.
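To make this concrete, a product team might key explanation content to these roles in code. The mapping below is my own illustrative guess at suitable content per role — it is not part of Tomsett et al.’s model:

```python
# Hypothetical mapping from Tomsett et al.'s roles to explanation content.
# The content strings are illustrative assumptions, not from the paper.
ROLE_EXPLANATIONS = {
    "creator-owner":       "business impact and liability summary",
    "creator-implementer": "feature attributions and model diagnostics",
    "operator":            "input validity and confidence indicators",
    "executor":            "actionable causes behind each prediction",
    "decision-subject":    "plain-language reasons and recourse options",
    "data-subject":        "what personal data was used and how",
    "examiner":            "audit trail, fairness and robustness metrics",
}

def explanation_for(role: str) -> str:
    """Return the explanation content suited to a role, with a fallback."""
    return ROLE_EXPLANATIONS.get(role.lower(), "general model overview")
```

The point of such a lookup is less the code than the design discipline: every role gets a deliberately chosen explanation rather than one generic chart.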

Explaining to what purpose?

Although we might demand an explanation just to satisfy our curiosity, in most cases we have a pragmatic reason: we want to close a gap in our mental model to enable ourselves to make better decisions. The purpose of an explanation tells us how it will be used downstream and how it integrates into a larger workflow. This in turn informs how to select an appropriate explanation and how to present it.

Based on Arrieta et al. (2020), this list gives some common purposes for explanations of AI-systems to consider:

  • Trustworthiness (for acceptance)
    Can the AI-system be expected to act as intended when facing a given problem? (This is currently a major goal of XAI. See the previous article in the series for a closer look at Trust in AI.)
  • Causality
    What relationships in the data might be causal and warrant further investigation? This is crucial for devising interventions that prevent an AI-system’s prediction from manifesting. E.g., predicting the likelihood of developing cancer might be less useful than identifying carcinogenic factors to avoid in order to decrease that likelihood.
  • Transferability
    What are the boundaries of the AI-system’s models? How different is a problem from the one the AI-system has been created for, and will the system still be useful to solve it?
  • Informativeness
    How does the problem solved by the AI-system integrate into the decision process? What context makes its output maximally useful?
  • Confidence
    Can a user determine when to be confident in an AI-system’s output and when to override its output by their own judgement?
  • Fairness
    Is the output of the AI-system fair or is it biased in an undesired way?
  • Accessibility
    Can non-AI experts understand the AI-system sufficiently to get involved in its development and improvement to represent their own objectives? This may be a major factor to achieve social acceptance of the system.
  • Interactivity
    How can a user adapt their interaction behavior to make maximal use of an AI-system? To investigate scenarios by tweaking the AI-system, its user must have sufficient understanding of its workings.
  • Privacy awareness
    What information has been captured by the system and is that privacy sensitive? Can sensitive information be extracted by unauthorized actors?
  • Prevention of manipulation
    Owners and users of an AI-system want to understand whether the model can be manipulated by malicious actors to produce unwanted outputs. In this context, explanations can become harmful, as they might provide the very understanding needed for exploitation. A practical example is the easy manipulation of traffic signs to trick self-driving cars into perceiving different signs.
Figure 2: Almost imperceptible changes cause traffic signs to be misclassified. (Sitawarin et al., 2018)
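The underlying mechanism can be sketched in a few lines. For a logistic model, the gradient of the score with respect to the input is just the weight vector — precisely the information that attribution-style explanations expose. The tiny classifier below is hypothetical and illustrates the sign-of-gradient idea behind attacks such as FGSM, not the DARTS method of Figure 2:

```python
# Sketch: why explanations can aid manipulation.
# For a logistic model, the input gradient of the score is the weight
# vector itself -- the same information attribution methods reveal.
# Weights and inputs are hypothetical toy values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

weights = [2.0, -1.5, 0.5]   # a tiny "classifier"
x = [0.4, 0.9, 0.2]          # an input scored below 0.5 (negative class)

score = sigmoid(sum(w * xi for w, xi in zip(weights, x)))

# An attacker who learns the weights nudges each input a small amount
# (epsilon) in the direction of the gradient's sign:
eps = 0.3
x_adv = [xi + eps * math.copysign(1.0, w) for w, xi in zip(weights, x)]
adv_score = sigmoid(sum(w * xi for w, xi in zip(weights, x_adv)))
# adv_score now exceeds 0.5: small, targeted changes flip the verdict.
```

This is why "prevention of manipulation" can pull in the opposite direction from the other purposes: the more faithfully a system is explained, the cheaper such targeted perturbations become.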

What makes a good explanation?

Obviously, it is not enough to produce just any kind of explanation. Developers of AI-systems should strive for good explanations. But what makes an explanation good? Besides considering for whom the explanation is intended and what its purpose is, there are general qualities humans apply when creating good explanations.

Miller (2019) proposes a few criteria to create good explanations from a human-centric perspective:

  • Contrast and compare (“Good explanations are contrastive”)
    In most cases, people demand explanations for some unexpected, surprising event. An explanation is supposed to correct the mental model that led them to a wrong prediction. This requires contrasting and comparing the surprising event with the expected one. The “best” causes are those that account for the relevant differences between both events.
  • Select actionable causes (“Good explanations are selected”)
    Good explanations are useful explanations. People do not expect, and usually cannot process, a complete chain of causes as an explanation for an event; they select one or two causes based on cognitive biases and heuristics. Controllable causes make the most actionable explanations. Common selection criteria are abnormality (unusual, unexpected causes are better), temporality or timeliness (more recent causes are better), intentionality (intentional actions make better explanations: if both are causes, “I was late for work because I took a detour to my favorite donut shop.” is a better explanation than “I was late for work because there was a traffic jam.”), and social norms (socially inappropriate actions make better explanations: the donut-shop detour is likewise a better explanation than “I was late for work because I had to take my kids to school.”).
  • Statistics do not explain (“Probabilities probably do not matter”)
    Since many AI-systems that require explanation are built on probabilistic models, there is a temptation to present probabilities as explanations. That is not how humans explain. Consider a student who receives only 50% on an exam: the teacher’s explanation that all (or most) students got 50% is not satisfying. At the very least, the reason everybody did badly must be explained.
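The selection step — Miller’s process (2) — can itself be sketched as code. The criteria below mirror the ones listed above; the scores and weights are entirely hypothetical and would need empirical grounding:

```python
# Sketch: selecting the "best" causes by human-centric criteria.
# Each candidate cause carries criterion scores in [0, 1].
# The weighting is a hypothetical assumption, not an established model.

def select_causes(causes, top_k=2):
    """Rank candidate causes and keep only the few a human can process."""
    def score(c):
        return (2.0 * c["abnormality"]        # unusual causes explain better
                + 1.5 * c["controllability"]  # actionable causes are useful
                + 1.0 * c["recency"])         # recent causes are preferred
    return sorted(causes, key=score, reverse=True)[:top_k]

causes = [
    {"name": "age",            "abnormality": 0.2, "controllability": 0.0, "recency": 0.1},
    {"name": "recent smoking", "abnormality": 0.7, "controllability": 0.9, "recency": 0.9},
    {"name": "genetics",       "abnormality": 0.3, "controllability": 0.0, "recency": 0.0},
]
best = select_causes(causes)
```

Such a filter would, for instance, demote the strong but unchangeable predictors from Figure 1 (age, sex, ethnicity) in favor of causes a patient can act on.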

When not to explain

Being aware of the relevance of explainable AI-systems, we also need to understand that explanations are not always necessary and can even be counterproductive in some circumstances. Again, product developers should not employ explanatory techniques without understanding the context in which an explanation is consumed.
These are some examples of situations in which explanations might not be desirable:

  • Low-stakes decisions and well-understood problems
    The cost of understanding an algorithm can outweigh the benefits. (Bunt et al., 2012)
  • User expectation is already met
    Seeing intermediate outputs and details can lead users to question a system even when it is correct and its output matches their expectations. (Springer & Whittaker, 2020)
  • Information overload
    Too much information negatively impacts human decision accuracy and trust. (Poursabzi-Sangdeh et al., 2021)
  • Risk of “Gaming the system”
    Well-explained systems can be easier to manipulate (e.g., credit score).

Concluding thoughts

While current XAI techniques that identify potential causes for the predictions of AI-systems are necessary, they cover only a portion of the explanatory process. Explainability of AI-systems requires more than engineering. It needs to account for insights from the social and cognitive sciences, best design practices, and the explainee’s knowledge, expectations, and context. Since users’ understanding of an AI-system changes over time, explaining should be seen not as a one-directional, static feature but as an interactive, conversational process.

Only if explainability is part of development from the conception of an AI-system can the system be successfully designed into the social context in which it will be used.

This article closes the three-piece introduction to explainable AI from a design perspective. Please comment, and share your thoughts and expertise on the topic!

Follow me on Medium to not miss the upcoming articles on digital product design and the role of design in Artificial Intelligence.

Henner has a background in design and computer science and loves to think and speculate about AI futures and emergent technologies. He also creates digital products.


References

  1. Al’Aref S, Maliakal G, Singh G, van Rosendael A, Ma X, Xu Z, Al Hussein Alawamlh O, Lee B, Panday M, Achenbach S, Al-Mallah M, Andreini D, Bax J, Berman D, Budoff M, Cademartiri F, Callister T, Chang H-J, Chinnaiyan K, Shaw L (2019). ‘Machine learning of clinical variables and coronary artery calcium scoring for the prediction of obstructive coronary artery disease on coronary computed tomography angiography: analysis from the CONFIRM registry.’ European Heart Journal, iss 41(3), pp 359–367, European Society for Cardiology (ESC).
  2. Arrieta A B, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, Garcia S, Gil-Lopez S, Molina D, Benjamins R, Chatila R, Herrera F (2020). ‘Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI’, Information Fusion, vol 58, pp 82–115, Elsevier.
  3. Bunt A, Lount M, Lauzon C (2012). ‘Are explanations always important? A study of deployed, low-cost intelligent interactive systems’, International Conference on Intelligent User Interfaces — Proceedings IUI, Association for Computing Machinery (ACM).
  4. Hinze H (2021). ‘Peeking in the Black Box — A Design Perspective on Comprehensible AI — Part 1’, Medium [online], Accessible at: https://medium.com/codex/peeking-in-the-black-box-a-design-perspective-on-comprehensible-ai-9dcb58389e3d. (Accessed: 2 March 2022)
  5. Hinze H (2021). ‘Peeking into the Black Box — Trust in AI — Part 2’, Medium [online], Accessible at: https://medium.com/codex/peeking-into-the-black-box-trust-in-ai-part-2-a0d3819674d5. (Accessed: 2 March 2022)
  6. Liao Q V, Gruen D, Miller S (2020). ‘Questioning the AI: Informing Design Practices for Explainable AI User Experiences’, CHI ’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp 1–15, ACM.
  7. Lim B Y, Dey A K, Avrahami D (2009). ‘Why and why not explanations improve the intelligibility of context-aware intelligent systems’, CHI ’09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp 2119–2128, Association for Computing Machinery (ACM).
  8. Miller T (2019). ‘Explanation in Artificial Intelligence: Insights from the Social Sciences’, Artificial Intelligence, vol 267, pp 1–38, Elsevier.
  9. Molnar C (2022). ‘Interpretable Machine Learning — A Guide for Making Black Box Models Explainable’, leanpub.com.
  10. Poursabzi-Sangdeh F, Goldstein D G, Hofman J M, Vaughan J W, Wallach H (2021). ‘Manipulating and Measuring Model Interpretability’, arXiv:1802.07810.
  11. Sitawarin C, Bhagoji A N, Mosenia A, Chiang M, Mittal P (2018). ‘DARTS: Deceiving Autonomous Cars with Toxic Signs’, arXiv:1802.06430.
  12. Springer A, Whittaker S (2020). ‘Progressive Disclosure: When, Why, and How Do Users Want Algorithmic Transparency Information?’, ACM Transactions on Interactive Intelligent Systems, vol 10, iss 4, art-nr. 29, pp 1–32, Association for Computing Machinery (ACM).
  13. Tomsett R, Braines D, Harborne D, Preece A, Chakraborty S (2018). ‘Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems’, arXiv:1806.07552.

Further Reading

  1. Broniatowski D A (2021). ‘Psychological Foundations of Explainability and Interpretability in Artificial Intelligence’, NIST: National Institute of Standards and Technology, U.S. Department of Commerce.
  2. Caruana R, Lou Y, Gehrke J, Koch P, Sturm M, Elhadad N (2015). ‘Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission’, KDD ’15: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 1721–1730, Association for Computing Machinery (ACM).
  3. Churchill E F, van Allen P, Kuniavsky M (2018). ‘Designing AI’, Interactions: The HCI Innovator’s Dilemma — Special Topic: Designing AI, vol XXV.6, iss November–December 2018, pp 35–37, ACM.
  4. Cramer H, Garcia-Gathright J, Springer A, Reddy S (2018). ‘Assessing and Addressing Algorithmic Bias in Practice’, Interactions: The HCI Innovator’s Dilemma — Special Topic: Designing AI, vol XXV.6, iss November–December 2018, pp 59–63, ACM.
  5. Kahneman D, Tversky A (1974). ‘Judgment under Uncertainty: Heuristics and Biases’, Science, vol 185, iss 4157, pp 1124–1131, American Association for the Advancement of Science.
  6. Lakkaraju H, Bastani O (2019). ‘“How do I fool you?”: Manipulating User Trust via Misleading Black Box Explanations’, arXiv:1911.06473v1.
  7. Lee J, Moray N (1992). ‘Trust, control strategies and allocation of function in human-machine systems’, ERGONOMICS, vol 35, no 10, pp 1243–270, Taylor & Francis Ltd.
  8. Lindvall M, Molin J, Löwgren J (2018), ‘From Machine Learning to Machine Teaching: The Importance of UX’, Interactions: The HCI Innovator’s Dilemma — Special Topic: Designing AI, vol XXV.6, iss November–December 2018, pp 53–57, ACM.
  9. Martelaro N, Ju W (2018), ‘Cybernetics and the Design of the User Experience of AI-systems’, Interactions: The HCI Innovator’s Dilemma — Special Topic: Designing AI, vol XXV.6, iss November–December 2018, pp 38–41, ACM.
  10. O’Neil C (2017). ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’, Penguin Random House.
  11. Ribeiro M T, Singh S, Guestrin C (2016). ‘”Why Should I Trust You?” Explaining the Predictions of any Classifier’, KDD ’16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 1135–1144, Association for Computing Machinery.
  12. Slack D, Hilgard S, Jia E (2020). ‘Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods’, arXiv:1911.02508v2.
  13. Wong J S (2018), ‘Design and Fiction: Imagining Civic AI’, Interactions: The HCI Innovator’s Dilemma — Special Topic: Designing AI, vol XXV.6, iss November–December 2018, pp 42–45, ACM
