Explainable AI: We need to know why algorithms make their decisions

Artificial intelligence (AI) shifts software away from traditional programming with preset rules and toward machines that learn their own reasoning. With AI connecting the dots, we can find new ways to solve problems, run more processes at scale, and reduce the chance of human error.

With implementations on the rise, AI will unlock the “last mile” of work — the tasks that traditional automation could not address. But can we really trust these machines to guide major business decisions? After all, AI can be prone to unintended biases without the proper data and training. How do we know the machines are making the right choices?

Understanding AI’s reasoning is a top concern for many decision-makers in the enterprise. Without knowing why algorithms make their decisions, we risk poor adoption and mistrust from staff and customers. In addition, in markets like healthcare, insurance, or banking, adoption is hampered because regulators require that automated decisions be explainable.

Traceability drives explainability

We know that deep neural networks, for instance, are quite powerful, but they are more akin to black boxes because we cannot easily explain their individual decisions.

What matters more, though, is how an application is built using AI. The ability to break a decision into its components and provide a visible path to the facts behind each component makes the process behind the AI application more transparent. Providing breadcrumbs, a logical map of sorts through the cognitive process, can deliver the assurance needed to make the answer explainable.
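As a minimal sketch of what such a breadcrumb trail could look like in code, here is a hypothetical decision-trace structure; the class and field names are illustrative assumptions, not taken from any particular product or framework:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: each node in a decision trace records the
# conclusion a step contributed and the fact that backs it up, so a final
# answer can be walked back to its sources.
@dataclass
class DecisionStep:
    conclusion: str                       # what this step decided
    evidence: str                         # the fact or document behind it
    source: Optional[str] = None          # pointer back to the raw source material
    children: List["DecisionStep"] = field(default_factory=list)

    def add(self, child: "DecisionStep") -> "DecisionStep":
        """Attach a supporting sub-step and return it for further chaining."""
        self.children.append(child)
        return child
```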

For instance, we may not yet fully understand the chemical and electrical signals that fire specific synapses as they travel through our nervous systems. But we can track a pain signal to the left hand, pull up our shirt sleeve, see the inflammation, and work out that it was an insect sting. The ability to pinpoint the origin of a conclusion drives explainability.

Explainability drives better customer experience, compliance, and adoption

Understanding AI’s logic and reasoning is more than just double-checking the machine’s work. We have to be able to show what went into making a specific decision. In loan approvals, for instance, if an AI-based system recommends denying an application for a small-business loan, the application needs to be built in a way that lets a user follow the decision back to the specific step that triggered the denial: for example, a footnote on the balance sheet that indicated higher-than-acceptable risk.

Rather than ending up with just a yes-or-no answer, the bank can click and drill through the hierarchy of the decision path to the actual footnote that led to the answer. Clear visibility into the facts leads to more understanding between all parties, better governance, and a much better customer experience.
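Continuing the hypothetical DecisionStep sketch above, the loan scenario could be traced and drilled like this; every name, value, and footnote reference here is invented purely for illustration:

```python
# Hypothetical trace for the denied small-business loan (invented values).
decision = DecisionStep("Deny loan application",
                        "Overall risk rated above the acceptable threshold")
risk = decision.add(DecisionStep("Risk rated too high", "Balance sheet review"))
risk.add(DecisionStep(
    "Contingent liability flagged",
    "Footnote: pending litigation not reflected in stated liabilities",
    source="balance_sheet.pdf#footnote",
))

def drill(step: DecisionStep, depth: int = 0) -> None:
    """Print the decision path so a reviewer can follow each step to its fact."""
    print("  " * depth + f"{step.conclusion}  <- {step.evidence}")
    for child in step.children:
        drill(child, depth + 1)

drill(decision)
```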

Importantly, this approach also supports compliance, since the exact information that led to an automated decision can be made available to regulators, rather than an answer driven by a black-box algorithm.
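Under the same assumptions, the trace could also be serialized into an audit record to keep alongside the decision and share on request; again, this is only a sketch and the identifier is made up:

```python
import json
from dataclasses import asdict

# Serialize the hypothetical trace into an audit record that could be stored
# with the decision and shared with a reviewer or regulator.
audit_record = {
    "decision_id": "loan-2024-0001",     # illustrative identifier
    "outcome": decision.conclusion,
    "trace": asdict(decision),           # the full breadcrumb path, facts included
}
print(json.dumps(audit_record, indent=2))
```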

Explaining AI is necessary, not only to respond to regulatory scrutiny, but also to help with adoption. Just like employees and citizens follow leaders that they trust, employees and customers will be more open to adopting AI if they trust the technology.

The key to explainable AI is in the application design

Much has been written about the explainability issue in AI. The trick to solving it lies not in the AI itself but in how the AI is put to work, that is, in how the application using it is built. What matters is breaking any decision down into its component steps and building click-and-drill capability in the application down to the atomic cognitive steps that drive the final answer. Designing applications with explainability in mind is the key to making them successful.