HCXAI @ CHI 2023 Recap

by Elizabeth Watkins (Intel, US), Carina Manger (Technische Hochschule Ingolstadt, Germany), Justin Weisz (IBM Research, US), Upol Ehsan (Georgia Tech, US)

--

With thanks to co-organizers Mark Riedl (Georgia Tech, US), Andreas Riener (Technische Hochschule Ingolstadt, Germany), Hal Daumé III (University of Maryland, College Park, US), Philipp Wintersberger (University of Applied Sciences Upper Austria, Austria), Gonzalo Ramos (Microsoft Research, US)

Human Centered Explainable AI Logo

If AI systems are going to inform whether you get an organ transplant or how much of a home loan you can get, they need to be explainable to everyone, not just the software engineers who created them. While the field of Explainable AI (XAI) has made commendable progress in opening the proverbial “black box” of AI, the prevailing algorithm-centered view overlooks a vital insight: who opens the black box matters just as much as opening it. As a result of this blind spot, many popular XAI interventions have been ineffective and even harmful in real-world settings.

The field of Human-Centered Explainable AI (HCXAI) has recently emerged to address this blind spot. The central insight of HCXAI is that XAI is about more than just “opening” the black box: who opens it matters just as much as, if not more than, how it is opened. It is crucial to be explicit about the importance of the user, because existing work in XAI considers the question of “explainable to whom?” only implicitly. If we fail to understand who our users are, we cannot design useful XAI systems.

The HCXAI workshop series has been held at CHI since 2021. Our workshops have nurtured a vibrant research community with 300+ participants from 18+ countries spanning academia, industry, government agencies, and nonprofits. Within our research agenda for XAI, we seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.

Our first workshop, in 2021, grew out of the pressing need for user-centered techniques in XAI and served as a platform to begin forming a community of diverse researchers, practitioners, and policymakers. In 2022, the conversation matured to address more insidious, fine-grained, and nuanced aspects of XAI, such as how explanations in AI are not a “silver bullet” and can instead be weaponized into dark patterns.

In 2023, our workshop focused on the coming-of-age story of HCXAI and deepened the discourse around operationalizing human-centered perspectives in XAI. For example, having identified dark patterns in XAI, we asked how we could design dark-pattern-resistant XAI. Our goal was to examine how human-centered XAI can be operationalized at the conceptual, methodological, and technical levels, and we encouraged historical, sociological, and technical approaches to allow for a holistic perspective.

In this article, we summarize the 2023 workshop’s main event and its three paper sessions: (New) Insights to the User (Session 1), (New) Perspectives on HCXAI (Session 2), and (New) Domains (Session 3). We also invite the community to participate in our next HCXAI workshop, which will be held at CHI 2024 on Sunday, May 12 as a hybrid workshop.

Main Event: Bubble Tea & XAI

Image depicting three panelists and the question, “What happens when a philosopher, a psychologist, and a computer scientist walk into a bubble tea bar to talk about Explainable AI?”
Watch the Main Event on YouTube

We kicked off the workshop with a vibrant discussion in which leading experts from diverse fields convened, in the spirit of a casual meeting at a bubble tea bar, to explore the intricate world of Explainable AI (XAI). Our panel featured Andrés Páez, a philosopher from Universidad de los Andes; Tania Lombrozo, a psychologist from Princeton University; and Tim Miller, a computer scientist from the University of Melbourne. The session was moderated by Upol Ehsan, the lead organizer and a founding member of the workshop, whose work coined the term HCXAI and charted its vision.

Andrés Páez brought a philosophical lens to the discussion, highlighting his work on the philosophy of AI since the late 90s. He emphasized the common confusion between justification and explanation in AI, stressing the importance of understanding and the contextual factors that influence it. Páez warned against abandoning the pursuit of explainability in AI due to its perceived difficulty.

Tania Lombrozo spoke about the psychology of explanations, focusing on how both adults and children evaluate and learn from explanations. She raised concerns about the increasing specialization and siloing within research communities, which may hamper interdisciplinary collaboration essential for advancing XAI.

Tim Miller echoed the sentiment on the fragmentation of research communities. He advocated for a closer integration of interpretable machine learning with explainability efforts, urging for a focus on practical application, deployment, and use-case evaluation.

The panelists agreed on the legal and ethical imperatives for providing explanations, especially in contexts where AI decisions significantly impact human lives. A key theme of the conversation was the balance between human-like explanations and the need to avoid misleading or dangerous assumptions by users. Lombrozo emphasized the importance of understanding the recipient’s perspective for effective explanations, while Páez warned against reinforcing misconceptions about AI’s “intentional states.” The panelists concurred that explainability could foster trust and accountability, with Miller highlighting the critical role of accountability in decision-making processes.

The discussion concluded with an exploration of the ethical dimensions of AI, with Páez noting the complexity of navigating the ethics of AI and the importance of philosophical tools in addressing these challenges. Overall, the live discussion underscored the complexity of XAI, the interdisciplinary challenges it faces, and the ethical considerations it invokes.

After this visit to the bubble tea bar, we turned to the paper sessions.

Paper Sessions

Session 1: (New) Insights to the User

Watch Session 1 on YouTube

Explainability is often thought of as a necessary requirement for AI systems, but why? What is the purpose of an explanation? The papers in this session considered explanations as a means to an end that served a human need — for actionability and contestability, for insights that are understandable, and for improving power dynamics within marginalized communities. Three papers were presented in this session:

Mansi & Riedl presented a framework identifying how the different categories of information included in XAI systems provide users with different kinds of insights. They argued that explanations enable actionability in several ways: they alter users’ mental models by helping them understand the system and how they can act in response to the system’s recommendations; they enable users to interactively request more information or contest a recommendation; and they enable users to take actions outside of the XAI system, such as reaching out to another person for help or ceasing to use the system.

Figure reproduced from Mansi & Riedl (2023) showing different XAI information categories and how they enable mental state actions, XAI interactions, and external actions.

Explanations can only be useful to people when they are understood. Small et al. proposed an evaluative study aimed at assessing how people comprehend different fundamental representations of explanations. They considered three primary formats in which explanations are typically presented: a mathematical formulation (e.g., a formula), a graphical representation (e.g., a visualization), and a textual summary.

Figures reproduced from Small et al. (2023) showing different representations of XAI explanations.
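
To make those three formats concrete, here is a minimal, purely illustrative sketch (not the authors’ actual study materials) that renders one toy feature-attribution explanation for a hypothetical loan-approval model as a formula, a chart, and a sentence; the feature names and weights are invented for the example.

import matplotlib.pyplot as plt

# Toy feature attributions for a hypothetical loan-approval model (invented values).
features = {"income": 0.45, "credit_history": 0.30, "existing_debt": -0.25}

# 1. Mathematical formulation: the explanation written as a linear formula.
formula = "score = " + " + ".join(f"({w:+.2f} * {name})" for name, w in features.items())
print(formula)

# 2. Graphical representation: the same attributions drawn as a bar chart.
plt.barh(list(features.keys()), list(features.values()))
plt.xlabel("contribution to decision")
plt.title("Feature attributions (illustrative)")
plt.savefig("explanation.png")

# 3. Textual summary: the same information expressed in plain language.
top = max(features, key=lambda name: abs(features[name]))
print(f"The decision was influenced most by '{top}'; positive contributions pushed "
      f"toward approval and negative contributions pushed against it.")

Small et al.’s proposed study asks which of these renderings different stakeholders actually comprehend, and how confident they are in that understanding.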

Their work proposed a study design that evaluates these explanation formats against three factors, across a diverse set of stakeholders such as engineers with high AI expertise, consumers with no AI expertise, and auditors with limited AI expertise:

  • Comprehension — did the person understand the information conveyed in the explanation?
  • Confidence — how confident was the person in their understanding?
  • Contentment — how reliable and informative did the person find the explanation?

Finally, Bhallamudi presented a case study of gig workers in India who were fired without warning due to algorithmic changes that were not explained to them. Amongst all the stakeholders in the company — managers, office workers, and gig workers — the gig workers were most affected by the changes as their livelihoods were dependent upon the algorithm making fair judgements. Yet, there was “no formal process in place through which gig workers could access explainability.” Bhallamudi argued that explainability can help shift the power dynamic in these kinds of situations by enabling minority or marginalized stakeholders to become more involved in decision-making processes.

Session 2: (New) Perspectives

Watch Session 2 on YouTube

Building AI explanations around humans presents both a challenge and an opportunity to integrate insights drawn from other disciplines. The papers in this session brought in new perspectives from human-focused fields such as the learning sciences, along with new framings of structures of accountability. Three papers were presented in this session:

Yurrita, Balayn, and Gadiraju explored how end users (specifically, decision subjects) can be equipped to contest algorithmic outputs by focusing on the AI development pipeline. Explanations, they argued, can be more human-centered if they are designed to give decision subjects sufficient information about the process of building AI systems, a process rife with human decisions: not data-centric or model-centric information about “what” happened, but rather about how and why decisions along the development pipeline were made the way they were.

Figure reproduced from Yurrita, Balayn, and Gadiraju (2023) presentation, diagramming what it might look like to provide decision-subjects with information about the AI development pipeline.

Cortiñas Lorenzo and Doherty, in their presentation, got to the root of a different “why” question: why do we create XAI explanations at all? The authors argued that the central goal of explanations is to teach, and that to teach well, we must understand how people learn. They applied learning theories to XAI and argued that different kinds of explanations can support and leverage different theories of learning, which fall into broad categories including behavioral, cognitive, constructivist, motivational, and social, among others. For example, a behavioral theory of learning can focus on how “explanations can be used as feedback to reinforce or discourage behavior, promoting learner’s self-regulation,” while “when adopting social theories of learning, they can be socially situated to facilitate reasoning” (2023).

Figure reproduced from Cortiñas Lorenzo and Doherty (2023) presentation, diagramming learner-centered XAI.

Anik and Bunt, in their position paper, described the negative consequences of explanations, including potential information overload and the negative impact of end users’ tendency to ignore certain types of information. They then described the challenges of conducting this research, in particular limited access to participants with specific skill sets, and concluded with a discussion of mitigation strategies, including design approaches.

Figure reproduced from Anik and Bunt (2023) presentation, describing mitigation strategies for negative consequences of XAI.

Session 3: (New) Domains

Watch Session 3 on YouTube

Scaling explanations across domains is one of the greatest challenges — and greatest opportunities — facing human-centered explainable AI. No matter how robustly an AI solution was built and tested, deploying it into a new domain changes the problem space, changes the surrounding context and organizational structure, and changes the human relationships that will both shape, and be shaped by, the technology at hand. The work presented in the (New) Domains session critically examined the unique explainability needs stemming from the application of AI to novel domains such as the arts, radiology, and manufacturing. Three papers were presented in this session:

Diagnostic medicine, generative music production, and manufacturing all have distinct demands for, and pressures on, explanations. Bryan-Kinns explored how the role of the AI system, the interaction between user and system, and the function of shared common ground can fluctuate in the dynamic and creative space of music production.

Figure reproduced from Bryan-Kinns (2023) presentation, showing three different interface designs for music production.

Kaufman and Kirsh explored how diagnostic medicine’s professional reliance on evidence-based classification can be operationalized into the design of explanations to help encourage adoption.

Zhao and Turkay analyzed how domain knowledge and expertise influence end users’ perception of explanations and their decision-making around implementation. Together, these papers underscored the importance of building context into explanations from a variety of angles and through multiple aspects of system design.

Join us for HCXAI 2024!

Our 2023 workshop focused on operationalizing human-centered perspectives in XAI at the conceptual, methodological, and technical levels, offering new insights to the user, new perspectives on HCXAI, and new XAI domains. As we look to the future, our fourth workshop in 2024 will build on the maturation of the previous three installments to continue the coming-of-age story of HCXAI in the era of Large Language Models (LLMs) and beyond.

LLMs have revolutionized the landscape of natural language processing, demonstrating unprecedented capabilities in comprehending, generating, and predicting human language. These models have hundreds of billions of parameters, all acting in concert to produce a probability distribution over possible next words, from which a response is built word by word. But even if we had access to all of an LLM’s parameters, could we interpret and explain that model?
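
To ground that question, here is a minimal, purely illustrative sketch of the word-by-word generation loop described above; the tiny vocabulary and the stand-in next_token_logits function are invented for the example and hide exactly what an explanation would need to open up, namely the billions of parameters behind a real model’s forward pass.

import numpy as np

# Illustrative stand-in for an LLM: the entire model is hidden behind a single
# function that maps a token prefix to logits over a tiny, made-up vocabulary.
VOCAB = ["the", "model", "explains", "its", "answer", "<eos>"]
rng = np.random.default_rng(0)

def next_token_logits(prefix):
    # A real LLM would compute these scores from hundreds of billions of parameters.
    return rng.normal(size=len(VOCAB))

def generate(prompt, max_tokens=10, temperature=1.0):
    tokens = list(prompt)
    for _ in range(max_tokens):
        logits = next_token_logits(tokens) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                # softmax: a distribution over possible next words
        token = rng.choice(VOCAB, p=probs)  # sample one word from that distribution
        if token == "<eos>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate(["the"]))

Even in this toy loop, the only human-legible artifact is the sampled text itself; the question for HCXAI is what an explanation of the step from parameters to probabilities should look like, and for whom.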

We question how XAI assumptions fare in the era of LLMs and aim toward actionable interventions that recognize both the affordances and pitfalls of XAI. We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.

The 2024 HCXAI workshop will take place on Sunday, May 12. It will be a hybrid workshop, supporting participation from both local and remote attendees to allow our global HCXAI community to join from all over the world. We hope to see you there! https://hcxai.jimdosite.com/

--

Elizabeth Anne Watkins
Human-Centered AI

Elizabeth is a Research Scientist in the social science of A.I. at Intel Labs.