AI Has Some Explaining To Do

Adam Kendall · Published in Shapes AI · Nov 7, 2019 · 6 min read

AI has seemingly made its way into every aspect of our daily lives, from recommending the next binge-worthy show on Netflix, to checking passports at the airport. In the past decade, AI has come a long way from beating world-champion chess players to taking on more complex and critical tasks, including diagnosing terminal illnesses and piloting autonomous cars.

Advances in machine learning, and deep learning in particular, have been critical in getting us to this point. However, these systems have become so complex that even their creators struggle to understand how they make decisions. This has popularised the term ‘black-box AI’.

To say that this is problematic, especially when we’re expecting AI to make life-or-death decisions, would be a gross understatement.

How can we possibly trust AI with our lives when we have little insight into the rationale behind its decisions? The simple answer is, we cannot.

A prime example is IBM’s flagship AI program, Watson, which was tasked with recommending cancer treatments. Many doctors reported that its suggestions were way off the mark and could, if followed, have had severe or even fatal consequences for patients. These doctors, working alongside the machines, had no visibility into why or how Watson was arriving at its recommendations. This lack of transparency not only eroded user trust in the system, but also made it difficult for IBM’s developers to identify and fix erroneous parts of the model or incorporate feedback. It has since been reported that more than half of the division’s staff have been laid off due to the lack of progress.

IBM Watson Health has been significantly downsized in recent years.

Cases like this, along with widely reported racial bias in facial recognition algorithms from the likes of Microsoft and Amazon, are further undermining society’s trust in AI. In a 2017 PwC CEO survey, 67% of business leaders believed that AI and automation would negatively impact stakeholder trust in their industry over the following five years. Furthermore, a study by Pega found that only 25% of consumers would trust a bank-loan decision made by an AI system over one made by a human.

This lack of trust is a major roadblock for the development, mass adoption and growth of AI in its current form.

The rise of explainable and interpretable AI

But things could be about to change with the rise of Explainable AI (XAI). DARPA defines this as “new machine learning systems that have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.”

Explainable AI has the potential to transform the landscape through enabling:

  • Developers to build AI models that are less susceptible to bias, errors and adversarial attacks (attempts to confuse models by subtly changing the input data). It also enables them to better assess and improve performance, for example by adding targeted new training data.
  • Users to understand the why behind key AI decisions and to challenge them meaningfully where appropriate (a short sketch of what such an explanation can look like follows this list). Users are more likely to trust an AI system they are using or working alongside if they have some understanding of its decision-making process.
  • Regulatory bodies and legal teams to disentangle algorithmic decision-making to help assign accountability and to audit systems.
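
To make the first two points concrete, here is a minimal sketch of what a post-hoc explanation can look like, using LIME, a widely used open-source, model-agnostic technique (it is not Shapes AI’s method, nor any of the vendor tools mentioned later in this piece). The loan-approval feature names, data and model are purely illustrative assumptions.

```python
# A minimal sketch of post-hoc explanation using LIME, a model-agnostic
# open-source technique. The loan features, labels and model below are
# illustrative assumptions, not anyone's production system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["income", "debt_ratio", "credit_history_years", "missed_payments"]

# Synthetic stand-in for historical loan data.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

# The "black box": an ensemble whose internals are hard to read directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Explain a single decision in terms a developer or applicant can inspect.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["rejected", "approved"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for condition, weight in explanation.as_list():
    print(f"{condition:35s} contribution: {weight:+.3f}")
```

Each printed line pairs a human-readable condition on one input with its estimated contribution towards or against approval for that single decision. This is the kind of per-decision rationale that developers can debug against and that users can meaningfully challenge.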

The increased interest and scrutiny from governments and regulators have made this a significant risk area for the corporate world. GDPR includes provisions that effectively create a ‘right to explanation’, whereby a user can demand details of how an algorithmic decision about them was reached. “Computer says no”, in the vein of the memorable sketch from the satirical comedy show Little Britain, is no longer a legally acceptable reason to automatically reject a couple’s mortgage application or remove a user’s content from a social media platform.

Little Britain’s “Computer says no” sketch.
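
As a purely hypothetical illustration of the kind of explanation that could accompany an automated rejection, the helper below turns attribution pairs like those produced in the earlier sketch into plain-language reasons. The function name, feature conditions and wording are all assumptions, not a prescribed route to compliance.

```python
# Hypothetical helper: turn per-decision attributions (e.g. LIME output)
# into plain-language reasons a rejected applicant could be shown.
def decision_reasons(attributions, top_n=2):
    """attributions: list of (condition, weight) pairs, where negative
    weights pushed the decision towards rejection."""
    negatives = sorted((a for a in attributions if a[1] < 0), key=lambda a: a[1])
    return [f"Your application was held back mainly because {condition}."
            for condition, _ in negatives[:top_n]]

# Illustrative attribution values only.
example = [("debt_ratio > 0.45", -0.31),
           ("income <= 32000", -0.22),
           ("credit_history_years > 7", +0.18)]
print("\n".join(decision_reasons(example)))
```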

Tech giants such as Facebook, Google, Nvidia and IBM are investing heavily in XAI in an attempt to unlock the next wave of AI applications. Interestingly, IBM, who perhaps understand XAI’s importance as much as anyone given the earlier example, have recently released two developer tools, Watson OpenScale and AI Explainability 360, that give greater visibility into the outcomes of machine learning models. Yet the private sector is not alone in this pursuit. DARPA has announced a $2 billion campaign called ‘AI Next’ to help machines acquire human-like logic and reasoning capabilities. Moreover, Yoshua Bengio, a joint winner of the Turing Award for his contributions to the development of deep learning, believes AI won’t realise its full potential unless it can reason about causal relationships and begin to understand why things happen.

Yoshua Bengio, Turing Award winner.

Introducing Deep Visual Reasoning

Here at London-based startup Shapes AI, we have developed a breakthrough computer vision technology called ‘Deep Visual Reasoning’ (DVR). Using our proprietary approach, we can reason over video streams to infer and identify specific behaviours, activities and events of interest. What’s more, the rationale behind the system’s predictions and classifications is visualised and communicated in a way that humans can interpret.

Shapes AI exhibiting at CVPR 2019, Long Beach.

This technology is infrastructural. It can be harnessed for everything from understanding customer behaviour in retail stores to inferring sports strategies on the field. Shapes AI is initially targeting DVR to address critical problems where its human-interpretability element will prove invaluable.

One such area is social media video content moderation. Interpretable AI systems can help explain takedown decisions to a platform’s users and moderators. This transparency allays concerns over built-in bias or lack of precision, helping platforms strengthen user trust, protect their brand reputation and simplify regulatory compliance.

Black-box AI has its place

Explainable AI is neither necessary nor advantageous in every case. Its utility depends very much on the specific problem, context and use case. The choice is not simply between using XAI or conventional black-box AI; rather, it is about the level of interpretability required given the needs of developers, users and governing bodies.

For instance, it may be less critical to know the precise rationale behind a Netflix recommendation: if it turns out to be off the mark, the consequences are trivial.

However, AI needs full explainability when applied to critical tasks like autonomous driving, because riders need to trust that the vehicle is taking all the necessary variables into account. The vehicle also needs to indicate when it doesn’t know what to do, and in the event of an accident, investigations into why algorithmic decisions were made must be possible with full transparency and accountability. The future of autonomous mobility depends, at least in part, on society trusting the AI’s judgment.

Final thoughts

Explainable AI offers developers the visibility they need to build more robust machine learning systems. It also gives AI decisions a level of integrity that can build trust in society, particularly for the high-stakes, sometimes life-or-death tasks these systems are now entrusted with. How much we invest in making our machine learning systems human-interpretable should therefore be proportional to the consequences of bad AI decision-making in a given context. With the criticality of the tasks AI performs only set to increase in the coming years, many leading organisations and researchers are betting big on Explainable AI. It could hold the key to society’s ability to fully adopt and benefit from the rapid increase in productivity, safety and innovation that AI promises.
