The Importance of Explainable AI

ODSC - Open Data Science · Published in Predict · Sep 17, 2018

AI algorithms can be trained to perform many disparate tasks, but these systems are often opaque, operating as black boxes in which users don’t always know how decisions are being made. AI-powered systems, frequently built on deep learning methods, can take on extraordinarily complex tasks and make strong predictions across a wide range of subjects, but how valuable are these predictions if we don’t understand the reasoning behind them? It’s for this reason that AI algorithms need explanatory capabilities, so users can understand why certain decisions were made. These capabilities have come to be known as explainable AI, or XAI.

The Case for Explainable AI

XAI is the ability of algorithms to explain their reasoning and characterize the strengths and weaknesses of their decision-making process. XAI might even convey a sense of how the algorithms will behave in the future. The importance of this new capability is showing up in discussions of corporate governance and regulation, and the subject has many legal, social, and ethical implications.
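To make that idea concrete, here is a minimal sketch of one common post-hoc explanation technique, permutation feature importance: train a black-box classifier, then shuffle each input feature in turn and measure how much held-out accuracy drops. The library, model, and dataset below are illustrative assumptions, not methods discussed in this article.

```python
# Minimal sketch of post-hoc explanation via permutation importance.
# Assumptions: scikit-learn is available; the model and dataset are
# stand-ins chosen only for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque "black box" model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much held-out accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features the model depends on most.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1], reverse=True
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Feature-level importances are only one narrow notion of “explanation,” but they illustrate the basic XAI move: probing a black-box model to surface which inputs actually drive its decisions.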

AI is now used to find predictive patterns in ever-expanding enterprise data repositories. Companies are using these findings to augment existing decisions and incrementally improve business outcomes. By deploying predictive models so that decision outcomes can be monitored and managed for accuracy, decision-makers can learn from the model and gain confidence that it is delivering a measurable improvement.

Such progress is viewed as a natural evolution of AI, but explainability is key. There must be assurance that business outcomes left in the hands of AI are understandable and auditable. In many industries, explainability is a regulatory requirement for companies employing such models. In addition, with the General Data Protection Regulation (GDPR) now in effect, companies are required to provide consumers with an explanation for AI-based decisions.

Enterprise leaders want to explore alternative possibilities and investigate the business outcomes of their technology investments, but today there is a degree of uncertainty in leveraging AI models for that goal. XAI is the next key step in the evolution of more sophisticated applications: injecting AI models into current business processes to enhance outcomes. The day is coming when AI models can replace the traditional advanced analytics process and adapt as businesses continue to evolve, provided these models can be governed and directed by human interaction. XAI will facilitate this transition.

Pioneers of XAI

One pioneer in the field of XAI is financial services giant FICO, which has been working on the problem for more than 25 years. This effort led to an innovation at FICO: an AI model that can continuously improve from expanding data sources while still offering transparency into why and how it reached its conclusions.

Another company dedicated to XAI is Optimizing Mind, whose Founder and Chief Science Officer is Dr. Tsvi Achler. He has a unique background, holding both a Ph.D. in neuroscience and an MD. He focuses on the neural mechanisms of recognition from a multi-disciplinary perspective and has done extensive work in theory and simulations, human cognitive experiments, animal neurophysiology experiments, and clinical training.

Here is a video demonstrating the company’s methods and tools for understanding what the AI is doing:

https://youtu.be/vKf0WnYEjrU

Hot Research

Today, XAI is a hot field of research. The term was first used in 2004 in a research paper describing an AI-based military simulation system that includes an XAI feature. The goal of continued XAI research is to ensure that an algorithm can explain the rationale behind its decisions and characterize the strengths and weaknesses of those decisions.

One application area of AI that could benefit greatly from XAI research is autonomous vehicles. The goal would be to get the driverless car to explain its actions after the fact, laying out the exact steps and decisions that led it to behave the way it did. This would help developers debug any problems that arise.
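One way researchers approximate this kind of after-the-fact explanation is with a “global surrogate”: fit a small, human-readable model to the black box’s own decisions and read the rules off the surrogate. The sketch below is a toy illustration under assumed names and data, not anything a real driving stack would do.

```python
# Toy sketch of a global-surrogate explanation: train a shallow,
# human-readable decision tree to mimic a black-box model's decisions.
# The black-box model, features, and labels here are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["obstacle_distance_m", "speed_mps", "lane_offset_m"]
X = rng.uniform(0, 1, size=(5000, 3)) * [50.0, 30.0, 2.0]
# Hypothetical "brake / don't brake" labels for the toy scenario.
y = ((X[:, 0] / np.maximum(X[:, 1], 1e-3)) < 1.5).astype(int)

black_box = GradientBoostingClassifier().fit(X, y)

# Fit a shallow tree to the black box's *predictions*, not the raw labels,
# so the tree's rules describe how the black box itself behaves.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=feature_names))
```

The printed tree is an approximate, auditable trace of the black box’s decision boundaries, which is the spirit of the post-hoc explanations XAI research is after.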

For further reading, there is also a research effort to determine just why deep learning networks work so well in the first place. A group from Harvard and MIT wrote a compelling research paper that looks at this question from a physics perspective. The authors include Max Tegmark, who wrote the New York Times bestseller “Life 3.0: Being Human in the Age of Artificial Intelligence.”

Conclusion

With so many different approaches to AI — deep learning, convolutional neural networks, recurrent neural networks, and transfer learning — it’s getting increasingly difficult for humans to determine how machines are making their decisions. However, with the advent of XAI, we might just be one step closer to making machines accountable for their actions, in the same manner that humans are.

Read more data science articles at OpenDataScience.com.
