Demystifying AI: Creating Value Through Explainability

How UX designers can make Explainable AI comprehensible and useful to anyone in an AI-driven world

Niklas Sternagel
Experience Matters
7 min read · Oct 31, 2023


Abstract image of pink layers seen from below, similar to northern lights

Large Language Models (LLMs) and generative AI are currently riding a wave of immense popularity, from image generators like Stable Diffusion and Midjourney to chatbots like ChatGPT. These technologies have truly transformed our interactions with artificial intelligence, captivating the attention of UX designers worldwide.

Amid this fervor, it may seem challenging to shift focus towards an essential but less trendy aspect of AI: Explainability of AI, or XAI. Simply put, it’s about unraveling the “how” behind AI outputs. How does an AI-based translation system choose a particular translation? What parameters guide a generative model’s text creation? What’s inside the enigmatic AI black box?

The importance of these questions extends beyond curiosity; it directly affects how we interact with AI in real-world scenarios. For example, in healthcare, understanding how an AI system diagnoses a disease or recommends a treatment is crucial for both doctors and patients. Likewise, in finance, AI-driven decisions on credit approvals, investments, or insurance premiums demand transparency to uncover biases and ensure fairness and legality.

Surprisingly, despite the significance of XAI, it has predominantly received technical scrutiny, with UX considerations often overlooked. This is why I urge UX designers to embrace XAI. Making AI comprehensible to everyone is the key to fully unlocking its potential while catering to user needs. As AI becomes a permanent part of our lives, the need for this understanding has never been more evident.

Unveil insights with SHAP values

XAI is about letting users figure out what made the AI create a specific result. This approach is not new and has been practiced for a long time, primarily within the data science sphere.

The challenge at hand is evident: it is about communicating the influencing factors that lead to a specific output of an AI system in the simplest and most accessible way possible, even to users without any technical background. This shift begins with UX designers taking on the role of data designers themselves.

Additionally, XAI has historically leaned towards a technical perspective, with data scientists defining explainability and building tools to depict and communicate the factors that influence an AI application's output. A prime example of such a tool is SHAP values.

SHAP values (SHapley Additive exPlanations) are a method for quantifying the impact of individual features on the predictions of a machine-learning model. SHAP diagrams visualize these influencing factors in terms of relevance and weighting. Explaining SHAP values and their diagrams isn't straightforward. This is where the designer's role as a mediator between the technical world and the everyday user becomes crucial. To fulfill this role effectively, designers must not only understand the needs of their users but also develop a grasp of the technical language, including SHAP values, in order to translate it into the right data story for the user.
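To make this more concrete, the sketch below shows how SHAP values are typically computed with the open-source shap library in Python. The dataset and model here are placeholders chosen purely for illustration, not a recommendation for any particular application.

```python
# Minimal sketch: computing SHAP values for a tabular model.
# Assumes the `shap` and `scikit-learn` packages are installed;
# the dataset and model are illustrative only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public tabular dataset
X, y = load_diabetes(as_frame=True, return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute SHAP values: one contribution per feature, per prediction
explainer = shap.Explainer(model)
shap_values = explainer(X)

# Global view: which features matter most on average
shap.plots.bar(shap_values)

# Local view: why the model produced this particular prediction
shap.plots.waterfall(shap_values[0])
```

The two plots correspond to the two questions users typically ask: which factors matter in general, and why did the AI decide this way in this particular case. It is exactly this kind of raw output that the designer has to translate into something an everyday user can read.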

Avoid reinventing the “keyboard”

Building upon this understanding of SHAP values, designers should resist the temptation to create seemingly new, innovative, never-before-seen types of diagrams, even if they are theoretically more user-friendly, accessible, and easier to read. Theoretical benefits often clash with practical reality: experience from A/B tests clearly shows that users react to these novel conceptual designs with confusion.

This scenario resonates with a story that’s played out repeatedly in the evolution of one of our most familiar everyday tools, whether physical or digital: the keyboard. Mastering the art of typing on this less-than-ergonomic input device comes with a steep learning curve. Over the years, there have been numerous attempts to reimagine the keyboard concept, dating back to its origins in the 19th century. However, even in today’s digital era, keyboards remain prevalent and ubiquitous, serving us from the web to virtual reality.

To avoid reinventing the wheel and potentially confusing users, it is essential to meet them where they are and utilize what they are already familiar with. This approach becomes even more crucial when it comes to conveying complex information and concepts. Since XAI is predominantly found in B2B applications, users in this environment are typically well acquainted with tables and graphs. While the visual appeal of these components may not be particularly exciting or intriguing for designers, it is vital to remember the principle of “you are not the user.” Engage users with what is readily accessible and understandable to them.

Offer unexpected value to users

As the bridge between users and the technical perspective, it’s vital to go beyond the visual presentation and offer users additional conceptual insights. While many XAI methods reveal factors impacting AI application outputs, these examinations can sometimes be superficial.

Consider a financial example to illustrate this point. Imagine an AI application that decides whether to approve or reject credit applications. It's evident that the applicant's credit score is a major factor in this decision. However, how strongly the score influences the outcome depends on its actual value, which can tip the decision towards approval or rejection. Moreover, the type of credit being applied for matters, whether it's a car loan, a mortgage, or a student loan. Only by conveying this level of fine-grained information can users gain a thorough understanding of the AI application.
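As a rough illustration of what such fine-grained information could look like, the snippet below pairs a hypothetical applicant's feature values with invented contribution scores. Both the feature names and the numbers are made up for this example.

```python
# Hypothetical sketch: a per-applicant, fine-grained explanation.
# Feature names, values, and contributions are invented for illustration.
applicant = {"credit_score": 640, "loan_type": "mortgage", "income": 52_000}
contributions = {"credit_score": -0.18, "loan_type": +0.05, "income": +0.09}

print("Why this application was scored the way it was:")
for feature, value in applicant.items():
    effect = contributions[feature]
    direction = "raised" if effect > 0 else "lowered"
    print(f"- {feature} = {value} {direction} the approval score by {abs(effect):.2f}")
```

Pairing each factor with its concrete value, rather than just its overall importance, is what turns a generic feature ranking into an explanation of this particular decision.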

Furthermore, creating a supportive narrative is key. With a sound conceptual approach, users can grasp the factors influencing AI outputs and how they interact. Yet understanding these complexities through tables and charts alone can be challenging. To aid users, natural-language descriptions can elucidate the data in tables and charts, boosting users' confidence in their interpretation or offering guidance. Integrating Large Language Models (LLMs) takes this a step further, enabling precise, context-specific textual explanations.
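One possible way to generate such a textual explanation is sketched below, using the OpenAI Python client as an example. The model name, prompt wording, and contribution values are assumptions; any other LLM provider could serve the same purpose.

```python
# Sketch: turning feature contributions into a plain-language explanation.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name, prompt, and numbers are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

contributions = {"credit_score": -0.18, "loan_type": +0.05, "income": +0.09}

prompt = (
    "Explain to a non-technical loan officer, in two or three sentences, "
    "why this credit application was scored the way it was. Positive values "
    f"push towards approval, negative values against it: {contributions}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```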

Consider the risk of data bias and distorted XAI

When it comes to the topic of trustworthy AI, especially within XAI, user trust in the AI and the presented data takes center stage in most conversations. Interestingly, most users generally see little reason to doubt AI in the first place. Transparent communication is pivotal here, as it clarifies the conditions under which the AI operates: How accurate is the AI? How precise is the displayed information? How does the AI assess the data?

Other factors also contribute to building trust. A positive brand image of the AI application developer, coupled with a clear and structured design from the user interface to interaction, plays a significant role. One possible explanation for this phenomenon is that users, even in a B2B context, usually encounter data presented objectively in the form of numerical values. Consequently, the fact that an underlying AI generates the data does not typically give rise to significant distrust among users.

Nevertheless, discussions on trustworthy AI should adopt a different yet highly critical perspective. It is the collective responsibility of those involved in shaping and making XAI accessible to users, from data scientists to UX designers, to prevent users from developing an unwarranted sense of absolute trust in the AI.

Ensuring that the data used by the AI application is free from bias is of paramount importance. Data bias, in the context of explainability, refers to the potential presence of biases in the training data, which can lead to unjustified decisions and distort the explanations XAI provides. For instance, if a company unconsciously favored male applicants in its hiring process for years, this bias in the underlying data will skew the analysis, disadvantaging female applicants. In such a scenario, users relying on the XAI information may falsely conclude that female applicants are less qualified than their male counterparts. This conclusion isn't based on factual differences but on the inherent bias in the input data.
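A very rough sanity check for this kind of skew might look like the sketch below, which simply compares historical selection rates per group. The column names and data are hypothetical, and a real bias audit requires far more than this.

```python
# Hypothetical sketch: a quick check of historical selection rates per group.
# Column names and data are invented; a real bias audit needs far more rigor.
import pandas as pd

history = pd.DataFrame({
    "gender": ["m", "m", "f", "m", "f", "f", "m", "f"],
    "hired":  [1,   1,   0,   1,   0,   1,   1,   0],
})

# If these rates diverge sharply, a model trained on this data will likely
# reproduce the skew, and so will any explanation built on top of it.
print(history.groupby("gender")["hired"].mean())
```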

Balance trust-building measures and skepticism

It’s also vital to communicate to users that the information generated by the AI application is always an estimate and can never reach 100% accuracy, owing to the inherent challenges of underfitting and overfitting in AI models.

For instance, using a green label to signal high data reliability might boost trust in the AI. However, this approach carries the risk that users might exclusively rely on this information without cross-referencing it with other sources like domain knowledge or expert opinions. This could lead to real-world consequences, particularly in a healthcare AI system diagnosing diseases and suggesting treatments.

Our user research so far has shown that users don’t seem very concerned about data coming from AI. They use it to solve their daily problems, especially in analytical tasks, where it helps them validate or discover ideas without having to search through many data sources. Although there’s a worry that users might rely on this data alone, they actually compare it with other sources such as datasets or expert knowledge, just as they do with any data.

In a nutshell, ensuring the AI remains unbiased and conveying that its outputs are approximations, not infallible, is not about fostering trust, but rather about preventing unwarranted confidence in its reliability.

Build a responsible digital future by empowering users

Artificial Intelligence has become an integral part of our lives, and similar to the emergence of Web 2.0, we are only at the beginning of a fascinating journey that will bring us numerous advancements. In light of this fact, I consider it crucial — and view it as the responsibility of a designer — to embrace this paradigm shift and understand the underlying mechanisms. Only then can we shape the digital landscapes of tomorrow and have a positive impact on the future.

However, we should not underestimate the capabilities of users. As designers, we act as mediators on this journey and have the responsibility to empower users to act responsibly. We must provide them with the tools and knowledge to do so. By doing this, we can ensure that AI is used in a way that aligns with both individual needs and societal values.

The future digital world is in everyone’s hands — and it is up to us to shape it responsibly.

Experience matters. Follow our journey as we transform the way we build products for enterprise software on www.sap.com/design.

