What exactly is Explainable AI (XAI)?

Photo by Lyman Hansel Gerona on Unsplash

AI stands for Artificial Intelligence. It refers to creating computer systems capable of performing tasks that normally require human intelligence, such as speech recognition, language translation, visual perception, and decision-making.

AI systems are built to learn from experience and improve over time, typically by using algorithms and statistical models. In the 21st century, AI has become an integral part of our lives, and we can find its applications everywhere: industries such as healthcare, banking, retail, transportation, and entertainment all rely on it. Despite its many benefits, AI also has serious limitations. Because AI produces results through automated decision-making, it may introduce biases or inaccuracies, and because these systems are often opaque, it can be difficult to grasp how a decision was made or to spot those biases and inaccuracies. In some cases this can cause real harm, which is especially worrying in high-stakes fields like healthcare, finance, or the military, where AI judgments can greatly influence people and society. Below are some examples of inaccuracies produced by AI:

A biased result produced by ChatGPT

Let’s look at another example:

False facial recognition results in the arrest of a Black man

Nijeer Parks, a 31-year-old Black man from Paterson, New Jersey, was charged in February 2019 with theft and attempting to run over a police officer in Woodbridge. Although he was 30 miles away at the time of the incident, police identified him using facial recognition technology.

Parks was later detained for 11 days on charges including aggravated assault, unlawful weapon possession, shoplifting, and marijuana possession. According to a police report, a facial recognition scan of a fake ID left at the crime scene yielded a “high profile comparison,” which led to Parks’ arrest.

Due to a lack of evidence, the case was dismissed in November 2019. Parks is currently suing individuals involved in his arrest for civil rights violations, wrongful detention, and false arrest.

There are numerous documented problems with facial recognition technology, which uses machine learning algorithms to identify a person by their facial features. In fact, a 2019 study found that facial recognition algorithms are far less reliable at identifying Asian and Black faces.

Parks is the third person known to have been arrested as a result of an erroneous facial recognition match. In every case, the person incorrectly identified was a Black man.

In the end, even though AI has advanced greatly in recent years, it is still far from perfect. For it to truly become the world’s technological engine in the future, solving its many vulnerabilities will be essential.

A false facial recognition match sent this innocent Black man to jail

Another example of the inaccuracy of traditional AI:

Autopilot causes crashes in Tesla vehicles

Elon Musk’s Tesla found itself in trouble after a Model S crashed north of Houston in April 2021, killing two people. The vehicle failed to negotiate a slight curve in the road and slammed into a tree.

Initial investigations and witness accounts indicate that the driver’s seat was unoccupied at the time of the collision. It is assumed that Tesla’s Autopilot or Full Self-Driving (FSD) technology was engaged when the crash occurred.

Two men dead after fiery crash in Tesla Model S.

Lack of Transparency, Interpretability, and Explainability in Traditional AI

Beyond inaccuracies, traditional AI has other dark sides: it often lacks transparency, interpretability, and explainability.

Let’s discuss an example:

Suppose a company uses a machine-learning model to decide loan approvals, and the model denies an application. This creates uncertainty, because neither the applicant nor the company can tell why the decision was made; the company is unable to explain the model. Traditional black-box AI falls short here.

A diagram explaining the use case: we are going to explain a black box ML model used to accept and reject loan applications at a generic bank.

To overcome these complexities and limitations of traditional AI models, Explainable AI (XAI) comes into action:

Explainable AI (XAI) comes to the rescue

Artificial intelligence that’s programmed to describe its purpose, rationale, and decision-making process in a way that can be understood by the average person. (Text by Ivy Wigmore on TechTarget)

XAI means the addition of explainability in AI. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision-making.

Let’s discuss the bank example in the context of XAI: The XAI model may identify that an applicant’s low income and poor credit history were the two main factors that led to a prediction of high default risk. By providing this explanation, the loan officer or underwriter can better understand the model’s decision and make a more informed decision about whether to approve or deny the loan application. (Context by Lada Rudnitckaia & Paolo Tamagnini & Sasha Rezvina on knime)
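
To make this concrete, here is a minimal, hypothetical sketch of the idea: a simple loan-default model whose per-feature contributions can be read off directly. The data, feature names, and model below are invented for illustration and are not from any real bank or from the KNIME example above.

```python
# Hypothetical sketch: reading per-feature contributions out of a simple
# loan-default model. All data and feature names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "credit_history_score", "loan_amount", "employment_years"]

# Tiny synthetic training set: 1 = default, 0 = repaid
X = np.array([
    [25_000, 2, 20_000, 1],
    [90_000, 9, 15_000, 12],
    [30_000, 3, 25_000, 2],
    [120_000, 8, 10_000, 20],
    [28_000, 1, 30_000, 1],
    [75_000, 7, 12_000, 8],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Explain one applicant: per-feature contribution to the default-risk score
applicant = np.array([[27_000, 2, 22_000, 1]], dtype=float)
contributions = model.coef_[0] * scaler.transform(applicant)[0]
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.2f}")
```

In this toy setup, low income and a poor credit history would show up as the largest positive contributions to the predicted default risk, which is exactly the kind of explanation a loan officer could act on.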

The XAI focuses on:

Transparency: The method or process used to make decisions can easily be explained and understood.

Interpretability: It means the ability to understand how a model makes its predictions or decisions.

Explainability: It refers to understanding how the machine learning model made its decision for a given input. It determines which elements or characteristics of the input influenced the model’s choice.

Reliability and Trust: XAI focuses on interpreting machine learning models so that human experts can understand the underlying data evidence and causal reasoning, which enhances trust.

Improved Decision-Making: By using XAI, organizations can better understand the decisions made by AI systems and ensure that they are ethical and responsible.

Eliminating Biases: Explainable AI techniques make it easier to identify potential sources of bias.

Techniques for developing XAI

Five Questions to answer when developing XAI:

Five critical questions to explain Explainable AI

Developing XAI involves three main methods:

Prediction accuracy: Accuracy can be assessed by running simulations and comparing XAI output with the results in the training data set. One important technique here is Local Interpretable Model-Agnostic Explanations (LIME), which explains the predictions of machine learning classifiers.
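
As a rough illustration of how LIME is typically used (assuming the open-source `lime` package and scikit-learn; the data and feature names below are synthetic, not from any real system):

```python
# Minimal sketch of LIME on a tabular classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
feature_names = ["income", "credit_score", "loan_amount", "age"]

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["repaid", "default"],
    mode="classification",
)

# Explain a single prediction: LIME fits a simple local surrogate model
# around this instance and reports each feature's weight.
explanation = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(explanation.as_list())
```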

Traceability: This is another technique for developing Explainable AI. A common method here is DeepLIFT (Deep Learning Important Features), which establishes traceability by comparing each neuron’s activation with a reference activation and showing the resulting dependencies.
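
For a flavour of how DeepLIFT-style attributions look in practice, here is a minimal sketch using the Captum library’s DeepLift implementation; the toy network, input, and reference below are made up purely for illustration.

```python
# Minimal sketch of DeepLIFT attributions via Captum (assumes PyTorch + captum).
import torch
import torch.nn as nn
from captum.attr import DeepLift

model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)
model.eval()

inputs = torch.randn(1, 4)      # one example with 4 features
baseline = torch.zeros(1, 4)    # the reference input DeepLIFT compares against

dl = DeepLift(model)
attributions = dl.attribute(inputs, baselines=baseline, target=1)
print(attributions)  # per-feature contribution relative to the reference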

Decision understanding: The first two techniques are technology-focused, while this one addresses human needs. Many people do not trust AI, so for XAI to work effectively and efficiently, the individuals using it must be trained on how and why the AI makes its decisions.

Challenges and Limitations of Explainable AI:

Lack of standardization: Currently, there is no universally accepted definition of what qualifies as an “explanation” within the scope of XAI. As a result, various XAI techniques may produce diverse types of explanations that are difficult to compare or standardize.

Unsatisfactory results: The goal of XAI is to provide purposeful and easily accessible explanations, but current explanations are often insufficient and cannot adequately support developers’ efforts to modify the model until the results are satisfactory.

Complexity: The complexity of AI models is on the rise, which can lead to explanations produced by XAI techniques also becoming intricate and challenging to comprehend. Communicating a complex model in a manner that can be easily understood by individuals who lack expertise in the field can pose a significant challenge.

Trade-off between accuracy and explainability: Balancing accuracy and interpretability can present a trade-off. Models that are more interpretable might not achieve the same level of performance as more complex models, and there may be a threshold where the accuracy cannot be compromised further in favor of interpretability.

Limited Scope: The explanations generated by XAI methods are limited to the particular inputs and outputs that the model was trained on. Therefore, these explanations may not be applicable to new or unforeseen scenarios or inputs that the model has not encountered during its training phase.

Implications of human-machine collaboration and decision-making:

With the advancement of artificial intelligence and explainable AI, a pressing need has arisen to address the ethical concerns surrounding their use. Tedious, repetitive, and other significant processes are increasingly carried out by AI-driven systems.

The most important ethical consideration for explainable AI is transparency: humans should be able to understand the decisions made by AI-based systems.

Such crucial tasks require clear terms of accountability so that issues can be addressed properly. In the context of XAI, when systems are trained on biased data, there is a risk that they will produce discriminatory outcomes. Moreover, working with machines often means collecting large amounts of personal data, so proper ethical measures are needed to protect that private information. Important questions also arise about the autonomy of machines and the extent to which they should be allowed to make decisions on their own. Another growing concern is that reliance on machines could reduce human beings to mere inputs for an algorithmic process, which highlights the need to find ways of protecting human dignity rather than treating humans as objects.

How to achieve Trustworthiness in XAI:

Achieving trustworthy explainable artificial intelligence demands a combination of technical, social, and ethical considerations.

Transparency: The AI system must be transparent about the decisions it makes, especially in crucial areas of life like healthcare, finance, manufacturing, and transportation.

Developing interpretable models: AI systems should focus on models that are interpretable, meaning that users can understand how the system arrived at a particular decision.

Fairness and bias reduction: Bias can be reduced and fairness improved by ensuring that training data is diverse and representative. Moreover, the training data should be evaluated for bias and fairness on a regular basis; a minimal check is sketched below.
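
One very simple, hypothetical way to start such an evaluation is to compare positive-outcome rates across groups in the data. The column names and values below are made up for illustration; real audits would use far richer fairness metrics.

```python
# Hypothetical sketch of a basic bias check on training data:
# compare approval rates across groups.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   0,   0,   1,   0],
})

rates = df.groupby("group")["approved"].mean()
print(rates)

# "Disparate impact" ratio: values well below 1.0 suggest one group is
# approved far less often, so the data and model deserve closer scrutiny.
print("disparate impact ratio:", rates.min() / rates.max())
```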

Human Intervention: AI systems should be subject to human oversight, especially where the stakes are high. Human experts should be able to intervene and decide whether the decisions made by the system are correct or incorrect. Possible ways of doing this include incorporating human feedback into the system, training and educating people on the system, and conducting regular evaluations, among others.

Accountability: The system should be developed in a way that allows its developers or operators to be held responsible for its actions.

Practical tips for implementing Explainable AI

Explainable AI (XAI) is an approach to designing systems whose behaviour can be understood by humans. Implementing XAI in software development includes the following steps:

  1. The most basic step is to choose a technique. Several are available, so pick the one that matches your goals and best fits your scenario.
  2. To train the AI model accurately, collect and label your data.
  3. Complex AI models like neural networks are difficult to explain, so prefer simpler, more interpretable techniques where possible.
  4. To ensure the reliability and accuracy of your model, evaluate it against defined measures.
  5. Use techniques to explain the behaviour of the trained model (a minimal sketch follows this list).
  6. Present explanations in a clear, understandable, and concise form, ideally with charts, graphs, or other visualization techniques.
  7. Test the explanations with users who are unfamiliar with the model, and use their feedback to refine and improve them.
  8. Consider ethical concerns regarding the data used to train the model.
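
A minimal end-to-end sketch of these steps, using permutation importance as the explanation technique; the data, feature names, and model choice are illustrative assumptions, not a prescription.

```python
# Steps 2-6 in miniature: train a model, explain it, and visualize the result.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "credit_score", "loan_amount", "age"]
X = rng.normal(size=(300, 4))
y = (X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))  # step 4: evaluate

# Step 5: explain the trained model's behaviour
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Step 6: present the explanation visually
plt.bar(feature_names, result.importances_mean)
plt.ylabel("drop in accuracy when shuffled")
plt.title("Which features drive the model's decisions?")
plt.show()
```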

Real Life Examples of XAI

Following are some real-life examples of XAI:

  1. Healthcare: XAI models trained on patients’ medical histories can predict patient outcomes. Those decisions, however, need to be interpretable for patients and doctors to rely on them. For example, AI algorithms can analyse medical images like MRIs to detect signs of disease or abnormalities.
  2. Customer Service: AI-powered chatbots can provide customer service and support at any time, primarily answering common questions and resolving simpler issues.
  3. Image recognition: XAI algorithms help address privacy and security concerns in image recognition. For example, in a system that classifies images of animals, XAI can explain why a particular image was classified as a particular animal.
  4. Natural Language Processing: XAI techniques are used to explain the predictions or decisions of NLP models. For example, in a sentiment analysis system, XAI algorithms can show which words drove the system’s prediction (see the sketch after this list).
  5. Marketing: For example, if a model recommends a certain product, the XAI system could explain that the product is recommended because it is frequently bought by similar customers.
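
For the sentiment analysis case, here is a hypothetical sketch using LIME’s text explainer (assuming the `lime` package and scikit-learn; the tiny training corpus is invented for illustration):

```python
# Explaining a toy sentiment classifier with LIME's text explainer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

texts = [
    "I love this product, it works great",
    "Absolutely terrible, it broke after one day",
    "Fantastic quality and fast shipping",
    "Worst purchase I have ever made",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "The shipping was fast but the product is terrible",
    clf.predict_proba,
    num_features=5,
)
print(exp.as_list())  # words with their contribution toward the prediction
```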

The Role of XAI in shaping the Future of AI

An emerging field of study called Explainable AI (XAI) aims to create AI systems that can explain their reasoning and decision-making processes to humans in a straightforward manner. XAI’s role in shaping the future of AI is significant: it will help build trust and confidence in AI systems, make them more transparent and accountable, and ensure that they are used ethically and responsibly. XAI can shape the future of AI in several ways.

First, it can help increase confidence and trust in AI systems: by clearly describing how these systems work and the reasons behind their choices, XAI can ease concerns about their dependability and safety. Second, XAI can contribute to the ethical and responsible use of AI systems. Moreover, XAI can facilitate collaboration between humans and AI systems. As AI develops and becomes more and more interwoven into our daily lives, XAI will be crucial in making sure these systems are created and used in ways that benefit society as a whole.

Some future directions for XAI research include:

  1. Human-centric XAI: systems designed around the needs and limitations of human beings.
  2. Multi-modal XAI: combining multiple sources of data and information.
  3. Interactive XAI: designing AI systems that can interact with users to provide explanations.
  4. Ethical XAI: systems designed to be fair, unbiased, and ethical.
  5. Trustworthy XAI: developing XAI systems that users can trust (Paka, July 2, 2021).

Developing more transparent, understandable, and reliable AI systems that can be used to solve a variety of real-world issues while also safeguarding the rights and interests of individuals and society as a whole will likely continue to be a major emphasis of XAI research in the future.
