Kazimir Malevich, Black Suprematist Square, 1915, Tretyakov Gallery, Moscow

AI, How Do You Do It?

The “right to explanation” of machine learning systems: eXplainable AI (XAI)

Federico Bo
Dec 9, 2019

A few days ago I read a New Scientist article about research conducted in the United States on the electrocardiograms of 400,000 people, aimed at predicting the probability of death within one year. The researchers fed raw ECG data into an AI system; its forecasts reached an accuracy of around 85%, higher than that of the other methods used by cardiologists. The AI also caught risky cases that had escaped the doctors’ analysis. The problem is that even in retrospect the doctors themselves were unable to identify which ECG anomalies had made it possible to flag those cases.

So if, on the one hand, AI can help us “re-read” data, discovering patterns hidden from human eyes, on the other hand the problem arises of understanding the “reasoning” behind the identification of those patterns. Simple forms of machine learning use transparent algorithms, such as decision trees or Bayesian classifiers. Others, like neural networks, sacrifice transparency and comprehensibility for power, speed, and accuracy.
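To make the contrast concrete, here is a minimal sketch (in Python with scikit-learn, chosen purely for illustration; the article itself names no tools) of why a decision tree counts as a transparent model: its fitted rules can be printed and read directly.

```python
# Minimal sketch, assuming scikit-learn is available: a decision tree is a
# "transparent" model whose decision logic can be printed as if/then rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# The fitted model is its own explanation: a set of human-readable rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network trained on the same data would offer no comparably readable account of its decisions, which is precisely the gap XAI tries to close.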

Today there is a growing consensus that the “black box” of a neural network, and of AI systems in general, should be made more transparent. Along with the output, there should be an explanation of how that output was generated. The AI should be able to explain its reasoning so that human users can trust it, recognizing the capabilities and the limitations of the system.

Explainable AI (XAI) is a relatively new field of research that aims to develop techniques and methods to make the results obtained by Artificial Intelligence technologies understandable by humans.

Think of autonomous driving systems, or of AI applications used in medicine or in the financial, legal, or military sectors. In these cases it is easy to see that, to trust the decisions and the data obtained, we need to understand how the artificial partner has “reasoned”.

We need to know why one choice was made instead of another, and whether the system is able to recognize an error of its own (and perhaps remedy it). To know, ultimately, when we can fully trust AI decisions.

The theme matters for several reasons. Getting AI technologies into companies is easier if the steps an algorithm takes to reach a result can be translated into human language.

In many activities, transparency can be a legal, fiscal, or ethical obligation. The less “magical” a system appears, the better its chances of being adopted.

Another problem linked to the need for “explainable” digital intelligence is the delicate question of the biases (of gender, race, or other kinds) that can infect it: bad training data is one of the main causes of these side effects in machine learning. As the latest example in an increasingly long list, consider the black patients discriminated against by decision-making systems in American hospitals. With XAI techniques, such anomalies could be identified, or prevented, more easily.
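As a rough illustration of the kind of check this enables, the sketch below builds a deliberately biased synthetic dataset (the “sensitive” feature and every parameter are invented for the example) and measures how much a model’s accuracy drops when that feature is shuffled, a crude signal that the model is relying on it, directly or through proxies.

```python
# Illustrative sketch only: a hand-rolled permutation test on synthetic data
# to see whether a trained model depends on a sensitive attribute.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, n)          # e.g. a protected-group flag (invented)
other = rng.normal(size=(n, 5))
# Biased labels: the outcome partly depends on the sensitive attribute.
y = (other[:, 0] + 1.5 * sensitive + rng.normal(scale=0.5, size=n) > 1).astype(int)
X = np.column_stack([sensitive, other])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

baseline = model.score(X_te, y_te)
X_shuffled = X_te.copy()
X_shuffled[:, 0] = rng.permutation(X_shuffled[:, 0])   # break the sensitive feature
print(f"accuracy {baseline:.3f} -> {model.score(X_shuffled, y_te):.3f} "
      "after shuffling the sensitive attribute")
```

A large drop after shuffling does not prove discrimination by itself, but it is the kind of red flag that explanation techniques are meant to surface before a system is deployed.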

Then there is the social right to transparency and explanation of algorithms, explicitly recognized by the European Union in the General Data Protection Regulation (GDPR), which requires that data subjects be informed of “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

There are two considerations to make at this point.

First, it is important to know for whom the explanations are intended. Experts may ask for technical explanations, supervisory authorities for explanations of other kinds, and the general public for simple ones, in each case highlighted and adapted to the context.

Furthermore, we should think in terms of different levels of transparency. The choices of a Netflix-style recommendation system can be offered to the user without much explanation. An IoT-based home automation system may require more. Loan-approval systems need an even higher level of transparency. In medical diagnosis, the level should be the highest.

Given the importance of the topic, companies, organizations, and researchers are studying it and proposing solutions.

DARPA, the research agency of the US Department of Defense, has opened a program dedicated to XAI. Its aim is that “new machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.”

Many large companies (including IBM, Accenture, and Microsoft) and ICT startups are already experimenting with and promoting XAI tools and systems. Some experts are skeptical and argue that, at present, talk of XAI systems is just marketing, and risks feeding a false sense of security in delicate contexts.

Researchers, for their part, are studying the subject in depth, focusing on the various aspects and difficulties involved in making today’s opaque AI and ML systems transparent. They have gathered in a global community, FAT/ML, which organizes annual meetings on the state of the art of the emerging discipline.

In Italy, the Knowledge Discovery and Data Mining Laboratory (KDD Lab), a joint research group of the ISTI institute of the CNR and the Computer Science Department of the University of Pisa, has just launched a five-year project funded by the European Commission.

The project has three research lines. The first aims to create a “local-to-global” framework (from the particular case to the general one) for explaining what happens inside an AI black box, with algorithms able to optimize the quality and comprehensibility of this generalization process. The second intends to build models for causal explanations, investigating both the relationships between variables (internal and external) and decisions, and models that capture the detailed behavior of data generation in deep learning networks. The third aims to create a system for evaluating these and other XAI methods with users, as well as an ethical-legal framework to align the methods with legal standards such as the GDPR.
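To give a flavor of what a “local” explanation looks like, here is a rough sketch in the spirit of LIME-style surrogate models. It is not the KDD Lab’s actual algorithm, and the dataset, model, and parameter choices below are assumptions made purely for illustration: the black box is queried on perturbations around one instance, and a small linear model fitted to its answers acts as the local explanation.

```python
# Rough sketch of a local surrogate explanation (illustrative, not the
# project's method): perturb one instance, query the black box, and fit an
# interpretable linear stand-in whose weights explain that single prediction.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
black_box = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

x = data.data[0]                                   # the instance to explain
rng = np.random.default_rng(0)
scale = data.data.std(axis=0)
neighbors = x + rng.normal(scale=0.3 * scale, size=(500, x.size))  # local perturbations
bb_probs = black_box.predict_proba(neighbors)[:, 1]                # black-box answers

surrogate = Ridge(alpha=1.0).fit(neighbors - x, bb_probs)          # interpretable stand-in
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: weight {surrogate.coef_[i]:+.3f}")
```

The surrogate’s weights indicate which features pushed the black box’s prediction up or down in the neighborhood of that one instance; a “local-to-global” approach then tries to stitch many such local explanations into a coherent picture of the whole model.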

These and other studies around the world make XAI a research field that is not only stimulating but essential. As Dino Pedreschi of the KDD Lab explains, “without a technology capable of explaining the logic of black boxes, however, the right to explanation is destined to remain a dead letter, or to outlaw many applications of opaque machine learning.”

Sources

Costabello L., Giannotti F., et al., On Explainable AI: From Theory to Motivation, Applications, and Limitations (a series of slides that go into the technical details of XAI research)

Ron Schmelzer, Understanding Explainable AI

Mark Stefik, Explainable AI: An Overview of PARC’s COGLE Project with DARPA

Hubert Guillaud, De l’explicabilité des systèmes : les enjeux de l’explication des décisions automatisées

Jeremy Kahn, Artificial Intelligence Has Some Explaining to Do

Cade Metz, We Teach A.I. Systems Everything, Including Our Biases

Written by

Federico Bo

Computer engineer, tech-humanist hybrid. Blogger, crowdfunding expert. Interested in blockchain technologies and AI. Tech writer for the grove — Mangrovia.
