Understanding Human-Centered Explainable AI (XAI)

What is Human-Centered Explainable AI (XAI)?

Explainable AI (XAI), or Interpretable AI, is artificial intelligence (AI) whose decisions and predictions can be understood by humans. It contrasts with the “black box” concept in machine learning, where even an AI’s designers cannot explain why it arrived at a specific decision. AI can be largely divided into two categories: Objective AI and Subjective AI. XAI, my topic of discussion, belongs to the latter. It is a much more advanced form of AI than Objective AI, and a key technology in my media artwork ‘eXplainable Human.’

(left) Objective AI (right) Subjective AI; XAI

Subjective AI demands Explainability.

As our society has become more complicated, we have begun to deal with personal information and assets based on customer trust in fields such as finance, insurance, and medical care. Beyond simply classifying images or predicting numbers, AI is now being built to “make clear judgments.” As a result, there is a need to develop AI and algorithms that ensure fairness, reliability, and accuracy. In other words, it is necessary to verify both the basis on which an AI system derives its results and the validity of the process that derives them.

DARPA, Defense Advanced Research Projects Agency

eXplainable Human with XAI

Artificial intelligence is being introduced in many specialized fields such as finance, medical care, and education, but it has not yet reached our inner minds or our understanding of our self-image. Can AI explain the human ego? With this question in mind, my team and I planned an installation of interactive media artwork.

Can the ego be explained? The exhibition started with this simple question. It seems impossible to describe ‘the self’ in one sentence. This unexplainable uncertainty has also affected the field of AI, and the concept of eXplainable AI emerged to address it. It reflects the human will to pursue greater reliability by explaining the reasons and processes by which AI produces its results.

Here’s another question: is everything that can be explained reliable? We chose one of the most difficult topics, the ‘self,’ as our object of explanation and tried to see whether it could be better understood through dialogue with AI. We also wanted to experience the coincidences and conflicts that arise between the ‘self’ described by AI and the ‘self’ described by audience members.

Rule-based Data Model

The AI implemented in this exhibition asks questions for the audience to introduce themselves, collects the audience’s answers, and extracts and interprets keywords. After that, AI infers the audience’s collective ego by reconstructing sentences based on the interpreted content.

To build this AI, as a data architect and IxD designer, I was the first runner in this marathon of a project. I constructed a three-stage system of questions that our AI could use to understand people.

The first stage is demographic questions such as “gender” and “age.” The second is sociological questions about friendship and hatred in this world; it is more probing than the first stage. The third stage involves more confidential questions: our AI asks audience members what secrets they cherish and whether they trust AI. The answers to these questions are fed into our own rule-based matrix, and a single sentence expressing each person’s self-image is derived.
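The answer-to-sentence step could be sketched as follows. This is a minimal illustration of a rule-based matrix, not the exhibition's actual implementation; the keywords, rules, and sentence fragments here are hypothetical stand-ins.

```python
# Hypothetical rule-based matrix: map keywords found in audience answers to
# sentence fragments, then reconstruct a one-line self-image sentence.
KEYWORD_RULES = {
    "hardship": "has experienced a lot of hardships",
    "curious": "is interested in everything in the world",
    "lonely": "is lonely",
}

def extract_keywords(answers):
    """Collect the rule keywords that appear in the audience's free-text answers."""
    text = " ".join(answers).lower()
    return [keyword for keyword in KEYWORD_RULES if keyword in text]

def compose_self_image(answers):
    """Reconstruct a one-line self-image sentence from the matched fragments."""
    fragments = [KEYWORD_RULES[k] for k in extract_keywords(answers)]
    if not fragments:
        return "You are a person who is still a mystery."
    return "You are a person who " + ", and ".join(fragments) + "."

print(compose_self_image(["I have faced hardship in life", "I often feel lonely"]))
# → You are a person who has experienced a lot of hardships, and is lonely.
```

A real system of this kind would use many more rules and weight them by question stage, but the core idea is the same: deterministic keyword matching, then template-based sentence reconstruction.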

At the heart of this project, we use the “GPT-3” language model to extend this single line of extracted text. Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. Given an initial text as a prompt, it will produce text that continues the prompt. We aimed to give our audience members “reasonable” answers by drawing on its 175 billion parameters. Examples we used are as follows.

Our original result (Rule-based matrix)

You are a person who has experienced a lot of hardships, and is interested in everything in the world, but is lonely.

Extended result (GPT-3)

You have experienced many hardships in your life and as a result you are interested in everything that is happening in the world, however despite your keen interest in the world around you, you feel a deep inner loneliness. This inner loneliness is something that you have always felt and it is something that you feel even more strongly now. You long for close relationships with others but you find it difficult to connect with people on a deeper level as a result you often feel isolated and alone.
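A transformation like the one above starts with framing the rule-based sentence as a prompt. The sketch below shows one way to do that; the instruction wording and generation parameters are assumptions, not the exhibition's actual settings.

```python
def build_expansion_prompt(seed_sentence):
    """Frame the short rule-based sentence so the model continues it
    as a longer, second-person paragraph."""
    return (
        "Expand the following one-sentence description of a person into a "
        "longer, empathetic second-person paragraph:\n\n"
        + seed_sentence + "\n\n"
    )

seed = ("You are a person who has experienced a lot of hardships, and is "
        "interested in everything in the world, but is lonely.")
prompt = build_expansion_prompt(seed)

# With the (legacy) OpenAI completions endpoint, the call would look roughly like:
# import openai
# response = openai.Completion.create(
#     engine="davinci", prompt=prompt, max_tokens=200, temperature=0.7)
# extended = response.choices[0].text
```

Because GPT-3 only continues whatever text it is given, how the seed sentence is framed largely determines whether the output stays in second person and keeps the empathetic register shown above.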


Audience members receive an answer that sounds like a perfect explanation of themselves at a fortune-telling shop, or that carries a slightly unpleasant feeling of uncanniness. Can AI really explain us? Answering that open question was the only purpose of our exhibition.



Hyeji Han

Product Designer who values the multi-faceted factors of design — technology, entrepreneurship, and social good.