Understanding Explainable AI

Ron Schmelzer
Published in Cognilytica
1 min read · Dec 23, 2019

Read Ronald Schmelzer’s article in Forbes discussing what Explainable AI (XAI) is and why AI implementers increasingly demand explainable and transparent systems:

However, most of us have little visibility into how AI systems make the decisions they do, and as a result, into how those decisions are applied across the many fields where AI and machine learning are being used. Many machine learning algorithms cannot be examined after the fact to understand specifically how and why a decision was made. This is especially true of the most popular algorithms currently in use — specifically, deep learning neural network approaches. As humans, we must be able to understand how decisions are being made in order to trust them, and this lack of explainability hampers our ability to fully trust AI systems. We want computer systems to work as expected and to produce transparent explanations and reasons for the decisions they make. This is known as Explainable AI (XAI).
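For a concrete sense of what “explainable” can mean in practice, here is a minimal sketch (not from the article) of one common XAI technique, a global surrogate model: a simple, interpretable model is trained to mimic a black-box model’s predictions so that its decision rules can be read by a human. The scikit-learn models and the Iris dataset used here are illustrative assumptions.

```python
# A minimal sketch of a "global surrogate" explanation: train an
# interpretable model to mimic a black-box model's predictions, then
# read off its rules. The model and dataset choices are illustrative
# assumptions, not taken from the article.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# The "black box": accurate, but hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained on the black box's *predictions*,
# so its rules approximate how the black box actually decides.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.1%}")

# Human-readable decision rules approximating the black box.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A surrogate does not reveal the black box’s true internals; the fidelity score indicates how closely the human-readable explanation tracks the model it is meant to explain.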

Read more in Forbes here.


Ron Schmelzer is Managing Partner & Principal Analyst at the AI-focused analyst firm Cognilytica (http://cognilytica.com) and co-host of the AI Today podcast.