Making AI Human Again: The importance of Explainable AI (XAI)

QuantumBlack
Sep 12, 2018

By Konstantinos Georgatzis, Principal Data Scientist, and Simon Williams, Co-Founder, of QuantumBlack.

As the explosion of algorithms and Artificial Intelligence (AI) continues across business and society, we are already facing ethical, regulatory and business-critical issues around how we use the output from machine learning.

The issue of who evaluates the decisions made by AI — if anybody — is becoming more urgent. As Emeritus Prof David Fisk of Imperial College put it in a recent letter to the Financial Times:

“Of course, the history of technology is the story of augmenting human limitations with machinery or tools that enable us to do more than our bodies or minds let us. But are we on the verge of losing control of this vital process?”

What could it take to make AI human again?

We should never set out to replace humans with machines. Instead, we should set out to make those humans the best they can be — to enable them to achieve super-human performance. This is what we call ‘Augmented Intelligence,’ and it is already providing the most effective ways to ‘make AI human again.’

For example, at QuantumBlack we have already applied this principle to predictive analytics in business projects. Our work on Crossrail risk management put ‘explainability’ at its core: not just creating algorithms that could take vastly complex planning decisions, but also making sure we could explain why the software made those decisions.

Within this framework, the exponential growth of data and the use of artificial intelligence are already paying off for business. But AI implemented without adequate ‘explainability’ carries emergent risks. These urgent risks affect ethics, business, regulation and our ability to learn iteratively from AI implementation.

Where we need virtuous circles of AI development, businesses may instead be finding bursts of strong performance followed by dead ends. This limited return loop of insight could be the next major bottleneck for AI-enabled businesses: if you don’t fully understand a system’s outputs, you can’t feed them back in as inputs for improvement.

When you invent the car, you also invent the car crash.

AI, particularly in its ‘augmented’ form, is becoming increasingly popular amongst our clients because of the impact it can generate. But increasingly complex models often lack explainability. They become classic ‘black boxes’, into which data is poured and from which results emerge, without the business that pays for them fully understanding how those results were generated. The pitfalls of unexplained AI and the ‘black box’ are already causing serious ethical and business problems, as we saw when facial-recognition software wrongly matched members of the US Congress to criminal mugshots.

With every technological advance comes fear, and the danger of sacrificing truly deep understanding for immediate short-term performance. We see major risks when explainability is lacking, as happens when businesses rush to use AI under ‘black-box’ conditions. For example, one company developed facial recognition technology which failed to recognise people of Asian descent, and another built a sentiment analysis tool that labelled terms like ‘black’ and ‘Jew’ negatively while terms like ‘white power’ received positive ratings. One pharma company built a model to optimise the yield of its factories, but its senior engineers didn’t trust the model’s output and refused to apply the proposed calibrations, leaving $10–20M of potential gains unrealised.

What is Explainable AI (XAI)?


The problem we face in the early days of AI adoption is that businesses are focused on metaphorically building Saarinen’s chair, the object itself, while the regulators and customers who assess that ‘chair’ represent its wider environment, the room it must sit in, and there is not sufficient linkage between the two. The link can only come from a process in which the performance of each element is explainable across the full context of delivery.

The key to ethical, successful and profitable use of AI

‘Black-box’ AI risks must be addressed with the help of explainable AI. Successful systems will be the ones that cast the underpinning black-box algorithms into an XAI framework. This provides an explanation for the prediction made by the black-box model, and thereby opens a feedback loop.
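To make this concrete, below is a minimal sketch in Python of attaching a post-hoc explanation layer to an otherwise opaque model. It is not QuantumBlack’s tooling; it uses the open-source shap library, and the dataset and model are purely illustrative assumptions:

```python
# Minimal sketch of post-hoc explanation for a 'black-box' model.
# Illustrative only: the dataset, model and the open-source `shap`
# library are assumptions, not QuantumBlack's actual stack.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)  # the 'black box'

# Attribute a single prediction to its input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.4f}")
```

Each contribution says how much a given feature pushed this particular prediction up or down, which is exactly the kind of answer a domain expert can accept or challenge, and that exchange is what opens the feedback loop.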

Integrating performance and interpretability in a coherent way is the only solution. This means creating a virtuous circle as systems mature.


Put simply, QuantumBlack defines Explainable AI (XAI) as a framework which increases the transparency of black-box algorithms by providing explanations for the predictions they make. This conceptually simple approach, although very difficult to execute, is an essential response to the spread of algorithm-driven business innovation. In light of recent developments such as GDPR, and of growing business risk and ethical concern, XAI is becoming more important still.

Explaining model outputs will drive successful adoption among business users, avoid harmful ethical outcomes and enable insight generation. Increased transparency is also the foundation for addressing regulatory requirements such as GDPR. And understanding the algorithm’s mechanism enables business users to provide feedback for improvement, creating a virtuous circle.
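As one illustrative sketch of that feedback loop (again, an assumption rather than our production method), a model-agnostic global explanation such as scikit-learn’s permutation importance gives business users a ranked view of what the model actually relies on, which they can then confirm or challenge:

```python
# Sketch of a global, model-agnostic explanation feeding the
# 'virtuous circle'. Dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out score:
# the features whose shuffling hurts most are the ones the model uses.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

If the top-ranked drivers contradict domain knowledge, as with the sceptical pharma engineers above, that disagreement becomes structured feedback for the next modelling iteration rather than a reason to shelve the system.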

This is how we believe ‘Explainable AI’ (XAI) addresses the major risks of ‘black-box’ machine learning. But it’s not going to be easy.

But those who build a proper process around explaining and augmenting AI, in the widest context, stand to reap the rewards: more ethical AI that can rise to the intense challenges facing business and services today.

Simon Williams is presenting at Intelligent Health in Basel, Switzerland today on this topic.

QuantumBlack

We use data, analytics and design to help our clients be the best they can be.

QuantumBlack, a McKinsey company, helps companies use data to drive decisions. We combine business experience, expertise in large-scale data analysis and visualisation, and advanced software engineering know-how to deliver results. www.quantumblack.com @quantumblack