An Explanation for eXplainable AI

Chris Kuo/Dr. Dataman
Published in Analytics Vidhya
11 min read · Aug 16, 2020


Artificial intelligence (AI) has been integrated into every part of our lives. A chatbot, enabled by advanced natural language processing (NLP), pops up to assist you while you surf a webpage. A voice recognition system can authenticate you to unlock your account. A drone or driverless car can carry out operations or reach areas that humans cannot safely access. Machine learning (ML) predictions are used in all kinds of decision-making. A broad range of industries, including manufacturing, healthcare, finance, law enforcement, and education, rely more and more on AI-enabled systems.

However, how AI systems make decisions remains opaque to most people. Many of these algorithms achieve a high level of accuracy, yet offer little insight into how a particular recommendation is reached. This is especially true of deep learning models. To trust the decisions of AI systems, we as humans must be able to understand how those decisions are made. We need ML models that function as expected, produce transparent explanations, and are visible in how they work. Explainable AI (XAI) is an important field of research that has been guiding the development of AI. It enables humans to understand models well enough to manage the benefits that AI systems provide, while maintaining a high level of prediction accuracy.
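To make this concrete, here is a minimal sketch of what a model explanation can look like in practice. It uses the open-source shap library to attribute a tree model's predictions to its input features; the dataset, model, and library choice are illustrative assumptions for this sketch, not a method prescribed by this article.

```python
# A minimal, illustrative sketch: explaining a "black-box" tree model
# with SHAP values. The dataset, model, and library are assumptions
# chosen for demonstration only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an ordinary black-box model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Compute per-feature contributions (SHAP values) for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Visualize which features drive the model's output, and in which direction.
shap.summary_plot(shap_values, X)
```

Each point in the resulting summary plot is one feature's contribution to one prediction, so even though the model itself is complex, we can see which inputs push its output up or down.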
