Beyond Black Box AI: Explainable AI

How humans can trust artificial intelligence

Dr. Louise Rix
Nanotrends
3 min read · Sep 9, 2020

--

“We’re being scored by formulas that we don’t understand, that often don’t have systems of appeal… What if the algorithms are wrong?” Cathy O’Neil, author of Weapons of Math Destruction, speaking in her TED talk.

By 2030, AI is expected to add an estimated $15.7 trillion to global GDP. In line with this, the number of organisations adopting AI is growing rapidly. However, concerns remain about the trustworthiness of AI recommendations and their potential for bias.

These concerns have been intensified by the increased use of more complex AI systems, such as deep learning, which often sacrifice transparency and explainability for predictive power and accuracy.

At the same time, AI technologies are being deployed in highly sensitive areas such as facial recognition in policing and predicting reoffending rates in the criminal justice system. South Wales Police was recently found to have used facial recognition technology unlawfully, in part because it had not taken reasonable steps to find out whether the software had a racial or gender bias.

While the potential benefit of AI systems is vast, used incorrectly they can harm both individuals and society. Being able to explain why an algorithm has reached a specific outcome is an essential part of deploying these systems responsibly. The House of Lords AI Committee stated that “the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society”.

The European GDPR and the UK’s Data Protection Act have increased the scrutiny of automated systems with no human in the loop. Companies are now required to share ‘meaningful information about the logic involved’ in a decision with individuals whose data is processed in this manner, where the decision produces a legal or similarly significant effect.

The Opportunity: Explainable AI (XAI)

Financial services is an obvious area where AI systems can have a profound impact on people’s lives and where explainability is therefore necessary. Credit scoring is a clear use case: the consequences of, for instance, being refused a loan are significant for the applicant. Experian and Equifax have come under legal scrutiny due to reported non-compliance with the right to meaningful information.

Other areas within finance include fraud prevention and detection, insurance, or market forecasting.

There are different approaches to solving this problem. The first is to choose AI systems that are inherently more explainable, which is unappealing if it sacrifices performance. Another is to interrogate existing AI systems after the fact, as shown in the diagram below:

[Diagram: interrogating an existing black-box AI system. Image credit: Open Data Science, opendatascience.com]
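To make the second approach concrete, here is a minimal sketch in Python (assuming scikit-learn is available) of interrogating a trained black-box credit model with permutation importance, a model-agnostic technique. The feature names and data below are hypothetical and purely illustrative, not a real scoring model.

```python
# Minimal sketch: post-hoc interrogation of a black-box credit model
# using permutation importance (scikit-learn). Feature names and data
# are made up for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "missed_payments", "account_age"]

# Synthetic applicants: approval is loosely driven by income and missed payments
X = rng.normal(size=(1000, 4))
y = ((X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Interrogate the trained model: how much does shuffling each feature
# degrade its predictions on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```

The same idea underpins richer tools such as SHAP and LIME, which attribute individual decisions rather than overall model behaviour.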

Companies in this space have taken different approaches. Zest AI, DreamQuark, Flowcast.ai and Temenos offer explainability as part of their core product offering to financial institutions.

Other companies, such as Fiddler.ai, provide ‘explainability-as-a-service’ through API access, and Darwin AI provides an ‘explainability toolset’ for organisations using deep learning.

We’re keen to speak to any company addressing the need for explainable AI systems in finance and other critical areas such as health.

--

Dr. Louise Rix
Nanotrends

Female Health, Product, ex-Chief Medical Officer at Béa Fertility, Founder, VC. 🧠 Writing about health tech and female health at louiserix.com