What is Explainable AI?

Shubhi Upadhyay
Published in Kigumi Group
Apr 3, 2023 · 5 min read

In a previous article, Algorithmic Bias and its Impact on Society, I touched on the concept of explainable algorithms. In this piece, I will delve deeper into the topic by discussing some of their possible applications, the techniques behind them, and their limitations.

A traditional artificial intelligence (AI) model typically provides only an answer, with no background information on how it arrived at that answer.

The emerging field of Explainable AI (XAI), by contrast, is focused on explaining how AI algorithms arrive at their solutions, which is valuable because it can increase transparency in (and thus trust in) these algorithms.

In this manner, XAI can prove useful in various fields. For example, in the criminal justice system, using an XAI model to determine the risk of recidivism would enable stakeholders to see which specific factors informed the model's decision and to judge whether relying on those factors is ethical. Another example comes from healthcare, where an XAI model could help doctors understand how it decides whether to diagnose a patient with a disease. With this knowledge, developers can alter machine learning models to address the places where bias seeps in, rather than scrapping the use of AI altogether.

Ways to Implement XAI

There are quite a few ways in which XAI can be implemented, but I will focus on three. The first way is the simplest: visualize the data. When the dataset is small, one can create visualizations as simple as bar charts and heatmaps to examine which factors influence particular outcomes. These observed relationships can then be compared with the model's predicted associations to help explain how the model arrived at its decisions.
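To make this concrete, here is a minimal sketch in Python using pandas and matplotlib. The dataset and column names (a tiny, made-up loan-approval table) are purely hypothetical and only illustrate the kind of bar chart and heatmap described above.

```python
# A minimal sketch: inspecting a small, hypothetical dataset with simple plots.
# The loan-approval columns below are made up for illustration only.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "income":         [35, 62, 48, 90, 27, 55, 73, 41],   # in $1,000s
    "debt_ratio":     [0.45, 0.20, 0.35, 0.10, 0.60, 0.30, 0.15, 0.50],
    "years_employed": [1, 8, 4, 12, 0, 6, 10, 2],
    "approved":       [0, 1, 1, 1, 0, 1, 1, 0],
})

# Bar chart: approval rate for applicants above vs. below the median income.
df["high_income"] = df["income"] > df["income"].median()
df.groupby("high_income")["approved"].mean().plot(
    kind="bar", title="Approval rate by income group")
plt.ylabel("approval rate")
plt.tight_layout()
plt.show()

# Heatmap: pairwise correlations between the features and the outcome.
corr = df.drop(columns=["high_income"]).corr()
plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
plt.xticks(range(len(corr)), corr.columns, rotation=45, ha="right")
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar(label="correlation")
plt.title("Feature/outcome correlations")
plt.tight_layout()
plt.show()
```

If the model's predictions later disagree with the patterns visible in plots like these, that mismatch is a useful starting point for asking why.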

The second way is to use an easily traceable, simpler model. There are many types of machine learning models, and the choice among them depends on factors such as the nature of the data and the desired outcomes. Some examples of models that are easier to understand are linear regression, decision trees, and k-nearest neighbors.

Linear Regression Model

A linear regression model depicts the relationship between a dependent variable and one or more independent variables. When there is only one dependent variable and one independent variable, the relationship is portrayed as a line, hence the term linear. The fitted line can then be extrapolated to predict outcomes for new independent variable values. Linear regression is easier to understand because its coefficients show how much each independent variable contributes to the model's prediction of the dependent variable.
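A minimal sketch of this idea with scikit-learn follows. The features (hours studied, hours slept) and exam-score target are hypothetical; the point is simply that the fitted coefficients themselves serve as the explanation.

```python
# A minimal sketch: fit a linear regression and read its coefficients as an explanation.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features: hours studied and hours slept; target: exam score.
X = np.array([[2, 7], [5, 6], [8, 8], [1, 5], [6, 7], [9, 6]], dtype=float)
y = np.array([55, 70, 90, 45, 78, 88], dtype=float)

model = LinearRegression().fit(X, y)

# Each coefficient is the predicted change in the score for a one-unit change
# in that feature, holding the other constant.
for name, coef in zip(["hours_studied", "hours_slept"], model.coef_):
    print(f"{name}: {coef:+.2f} points per unit")
print(f"intercept: {model.intercept_:.2f}")

# Extrapolate to an unseen student.
print("predicted score:", model.predict([[7, 7]])[0])
```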

Decision Tree

A decision tree model consists of a root node, branches, and leaf nodes. The model starts at the root node, with each branch representing a test on an attribute and the leaf nodes representing the final decision options. Decision trees are likewise easy to trace because every decision can be reached by following a path of branches, or attribute tests, from the root. Developers can therefore examine which attributes the model takes into account, and which it weighs most heavily, when making a decision.
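As a rough sketch, scikit-learn can print a trained tree as nested if/else rules, which is exactly the root-to-leaf path described above. The example uses the library's bundled iris dataset purely for convenience.

```python
# A minimal sketch: train a small decision tree and print its rules so that any
# prediction can be traced from the root node down to a leaf.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# export_text renders the tree as nested attribute tests over the input features.
print(export_text(tree, feature_names=list(iris.feature_names)))
```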

K-Nearest Neighbors

Finally, the k-nearest neighbors model can also be explained, which makes it a suitable choice when implementing explainable AI. It predicts the label of an unlabeled data point based on how similar that point is to its closest labeled examples, where the number of neighbors considered is at the discretion of the developer. Its decisions are simple to explain because the underlying idea is intuitive: the closer an object is to examples of something, the more likely it is to be that thing. For example, pants that are blue are more likely to be blue jeans than pants that are not blue. Since it is intuitive to understand how a k-nearest neighbors model makes a decision, it is easier for humans to explain the decision-making process as well. While these models are all simpler and more interpretable, it is important to note that they are often less accurate than models that are more complex and less interpretable.
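Here is a minimal sketch of that intuition with scikit-learn. The "pants" features and labels are invented for illustration; the explanation of a prediction is simply the list of nearest labeled neighbors.

```python
# A minimal sketch: a k-nearest-neighbors classifier whose prediction is explained
# by listing the labeled examples it was closest to. The data are made up.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical pants: [blueness (0-1), leg width (cm)]; label 1 = "blue jeans".
X = np.array([[0.9, 18], [0.8, 20], [0.2, 25], [0.1, 30], [0.85, 19], [0.15, 28]])
y = np.array([1, 1, 0, 0, 1, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

query = np.array([[0.8, 21]])
print("prediction:", knn.predict(query)[0])

# The explanation is the neighbors themselves: which labeled examples were
# closest to the query, and what their labels were.
distances, indices = knn.kneighbors(query)
for dist, idx in zip(distances[0], indices[0]):
    print(f"neighbor {idx}: features={X[idx]}, label={y[idx]}, distance={dist:.2f}")
```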

Feature Importance

The last way is to use feature importance, a technique for determining which features of a machine learning model's input have the greatest impact on its decisions. Essentially, a numerical score is calculated for each feature, representing its impact on the model's predictions; a higher score indicates a greater influence on the outcome. Feature importance scores can help provide a clearer understanding of why a model makes particular decisions by identifying which input features it considers most significant. The use of feature importance can also aid in mitigating bias because it can alert developers when a model is placing significance on sensitive attributes like race or gender, enabling them to take the necessary steps to resolve the issue.
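One common way to compute such scores is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below uses scikit-learn on a synthetic, hypothetical loan-approval dataset (including a made-up "gender" column) solely to show how a high score on a sensitive attribute would surface.

```python
# A minimal sketch: permutation-based feature importance for a tree ensemble,
# used to check whether a sensitive attribute is driving predictions.
# All data and column names below are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50, 15, n)
debt = rng.normal(0.3, 0.1, n)
gender = rng.integers(0, 2, n)                      # sensitive attribute
approved = (income - 100 * debt + rng.normal(0, 5, n) > 15).astype(int)

X = np.column_stack([income, debt, gender])
features = ["income", "debt_ratio", "gender"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, approved)

# Shuffle each feature in turn and measure the drop in accuracy; a large drop
# means the model leans heavily on that feature.
result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
# If "gender" scored high here, the model may be relying on a sensitive attribute.
```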

Conclusion

These three approaches show why and how XAI can be beneficial in giving developers and researchers the tools to mitigate algorithmic bias when it appears, but it does have its limitations. For instance, the aforementioned techniques cannot be used everywhere. When datasets are extremely large, data visualization becomes time-consuming and resource-intensive, meaning it is practical mainly for smaller datasets and models. For example, attempting to visualize the Google search algorithm's process would not be feasible due to the sheer amount of data it processes every day.

In other cases, developers may choose not to use simple, easily traceable models like linear regression and k-nearest neighbors. Instead, they may need models whose knowledge is implicit, that is, encoded in ways that are not easy to articulate. These models are less transparent because it is difficult to state the exact reasoning they used in their decision-making, which can make it harder to address biases. However, they are often more accurate precisely because they are more complex, with more parameters and layers. Developers are thus confronted with the dilemma of striking a balance between accuracy and explainability while designing their models. Overall, while XAI shows promise as a way to mitigate algorithmic bias, it is clear that there is still a long way to go.
