Explainability as a legal requirement for Artificial Intelligence

Gabriela Bar
Published in WomeninAI
Nov 27, 2020 · 13 min read
Photo credit: Pixabay

Introduction

The White Paper on Artificial Intelligence (AI)[1] published by the European Commission (EC) in February 2020 indicated, inter alia, the need to decide, as part of the design of the future AI regulatory framework, on the types of legal obligations that should be imposed on the entities involved in the design, production, placing on the market and use of autonomous systems. As part of the Ecosystem of Trust, the EC emphasised the importance of the seven key requirements for AI systems identified and described by the High-Level Expert Group on AI (HLEG) in the Ethics Guidelines for Trustworthy AI[2], published in April 2019. Alongside human agency and oversight, technical robustness and safety, privacy and data governance, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability, transparency is a key condition for AI to be trusted.

Consequently, on 17 July 2020 the European Commission published the Assessment List for Trustworthy Artificial Intelligence (ALTAI)[3], developed by the HLEG, the purpose of which is to support the creators of AI systems and the organisations implementing AI in identifying risks and in assessing the reliability of the AI systems created and their compliance with European values. One of the key features of an autonomous system that makes it trustworthy is its transparency, which includes three elements:

1) traceability

2) explainability

3) open communication about the limitations of the AI system

The document highlights the role of explainability in autonomous systems that are to be regarded as consistent with the European approach to AI development.

[1] European Commission, White Paper On Artificial Intelligence — A European approach to excellence and trust, Brussels, 19.2.2020, COM(2020) 65 final, https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (7.8.2020).

[2] High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 8.4.2019, https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top (7.8.2020).

[3] High-Level Expert Group on Artificial Intelligence, Assessment List for Trustworthy Artificial Intelligence for self-assessment, https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence (7.8.2020).

Transparency and explainability

Leaving aside terminological issues (in various publications, including scientific articles, explainability is defined as an element of transparency, as a criterion separate from it, or as a term used interchangeably with it[1]), for the purposes of further considerations, and in accordance with the conceptual apparatus adopted by the European Commission as well as by the IEEE in the Ethically Aligned Design project[2], I assume that transparency is the broader concept and that explainability is one of the elements of a transparent AI system.

In this perspective, transparency should consist of at least two main elements:

1) access to reliable information about the operation of the AI model, including information about the training procedure, training data, machine learning algorithms, methods of testing and validating the AI system

2) access to a reliable explanation calibrated for different audiences (from an ordinary citizen to an expert), covering both the technical processes of the AI system and the rationale for decisions or predictions made by the AI system (as a basis for the right to appeal against an automatically made decision)

This understanding of transparency should apply to all stages of AI implementation, starting from the design stage, through development, personalisation (adaptation to the needs of the user or buyer) and operation, up to the periodic validation of the effectiveness of systems using Artificial Intelligence, which should include both regular tests and periodic external audits.

The purpose and scope of legal requirements for AI in the context of transparency may differ depending on the profile of the recipients, understood as both the operators of the AI system and its end users. One should therefore allow for differentiated levels of transparency and of the available explanation, depending on which of the following stakeholder groups a given recipient belongs to:

1) field experts who are AI users (e.g. lawyers, doctors, environmentalists);

2) executive and supervisory bodies, regulators, and certification bodies;

3) end-users affected by decisions made by the AI (e.g. litigation parties and participants, patients, job applicants);

4) data scientists, developers, AI designers;

5) managerial staff of entities using AI systems (e.g. in a bank or insurance company);

6) privacy specialists and ethicists.

Determining different levels of transparency for different groups of recipients should be preceded by an assessment of the risk posed by a given AI system, understood as its impact both on society as a whole and on individuals (not only natural persons, but also various types of organisations and enterprises). The impact assessment should also answer the question “what is the acceptable level of autonomy in an AI system and what is the minimum level of explanation of its decisions or predictions?”.

It is worth mentioning here that the EC itself recommends a risk-based approach to AI systems: the White Paper indicates that such an approach is important to ensure the proportionality of regulatory intervention. Different levels of risk will therefore require different levels of transparency: AI used, for example, in pre-trial detention in criminal cases carries a significantly higher risk than an AI system that supports online small-claims dispute resolution.
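
To make the idea of differentiated transparency more concrete, below is a minimal sketch of how a required level of explanation might be derived from an assumed risk tier and the stakeholder group concerned. The tiers, group labels and mapping values are illustrative assumptions of mine, not categories prescribed by the EC or the HLEG.

```python
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers loosely inspired by the White Paper's risk-based approach
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

class Stakeholder(Enum):
    END_USER = "person affected by the decision"
    FIELD_EXPERT = "field expert using the AI"
    REGULATOR = "supervisory or certification body"
    DEVELOPER = "data scientist / developer"

# Assumed mapping of (risk tier, stakeholder) to the depth of explanation owed:
# "outcome" = plain-language reasons, "model" = technical model details,
# "full_audit" = data, algorithms, tests and validation records.
EXPLANATION_DEPTH = {
    (RiskTier.HIGH, Stakeholder.END_USER): "outcome",
    (RiskTier.HIGH, Stakeholder.REGULATOR): "full_audit",
    (RiskTier.HIGH, Stakeholder.DEVELOPER): "model",
    (RiskTier.LIMITED, Stakeholder.END_USER): "outcome",
    (RiskTier.MINIMAL, Stakeholder.END_USER): "none",
}

def required_explanation(risk: RiskTier, who: Stakeholder) -> str:
    # Fall back to a plain-language explanation when the pair is not listed.
    return EXPLANATION_DEPTH.get((risk, who), "outcome")

print(required_explanation(RiskTier.HIGH, Stakeholder.REGULATOR))  # full_audit
```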

Therefore, it seems appropriate to consider, at least with regard to a certain group of AI systems (e.g. those that will be classified as high-risk AI), introducing specific legal requirements on transparency, in particular on the explanation of decisions made in an automated or semi-automated way.

[1] See Information Commissioner’s Office (ICO), Guidance on AI and data protection, https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection/ (11.8.2020); The Alan Turing Institute, Understanding artificial intelligence ethics and safety — A guide for the responsible design and implementation of AI systems in the public sector, https://www.turing.ac.uk/research/publications/understanding-artificial-intelligence-ethics-and-safety (11.8.2020); W. Guo, Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine, https://www.researchgate.net (7.8.2020); M.E. Kaminski, The right to explanation, explained, https://www.researchgate.net (15.7.2020); F. Doshi-Velez, M. Kortz, Accountability of AI Under the Law: The Role of Explanation, https://arxiv.org/abs/1711.01134 (6.8.2020); G. Sileno, A. Boer, T. van Engers, The Role of Normware in Trustworthy and Explainable AI, https://arxiv.org/abs/1812.02471 (17.7.2020); S. Spreeuwenberg, AIX: Artificial Intelligence needs explanation: Why and how transparency increases the success of AI solutions, Amsterdam 2019.

[2] See https://ethicsinaction.ieee.org (12.8.2020).

Access to information

Without access to information about the system and the data it uses, it is impossible to build trust in AI and, in the longer term, to use AI more widely in business processes, including predictive processes and automated or partially automated decision-making. Therefore, legal obligations regarding documentation should be imposed on the designers, producers and operators of autonomous systems, covering:

1) methods used in the design and creation of AI;

2) data management policies and procedures, taking into account the requirements of data availability, currency, integrity and security;

3) applied standards (technical, legal, ethical) and granted certificates (it should be assumed here that different standards will apply to different systems: a predictive algorithm determining the likelihood of reoffending should undergo a mandatory compliance assessment, while the Netflix algorithms suggesting another movie worth watching will probably never be subject to certification);

4) a management model that assigns responsibilities within the organisation that uses AI;

5) methods used to test and validate the AI system.

Ensuring the transparency and explainability of the system also requires the ability to identify, at every stage of the AI's use, the data sets involved (including the methods of their collection, labelling, grouping, etc.) and the processes used to process them, including machine learning algorithms. In the context of explainability, it is particularly important to record and archive the decisions of the AI system together with all the data and algorithms used, so that it remains possible, even after some time (within the set retention period), to analyse the information that led to a given decision.
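
As a sketch of what such recording could look like in practice, the snippet below appends each decision to an audit log together with the model version, a hash of the input, a reference to the training data set and the explanation given, so that the record can be re-examined within the retention period. The field names, the hashing choice and the retention value are assumptions made for illustration only.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, input_record: dict, decision: str,
                 explanation: str, training_data_ref: str,
                 log_path: str = "decision_log.jsonl") -> None:
    """Append an auditable record of a single automated decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash allows later verification of the input without duplicating it elsewhere.
        "input_hash": hashlib.sha256(
            json.dumps(input_record, sort_keys=True).encode()).hexdigest(),
        "input_record": input_record,            # or a pointer, depending on data-protection constraints
        "training_data_ref": training_data_ref,  # identifies the data set version the model was trained on
        "decision": decision,
        "explanation": explanation,
        "retention_days": 365,                   # assumed retention period
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("credit-model-1.3", {"income": 52000, "tenure_months": 18},
             decision="declined",
             explanation="debt-to-income ratio above threshold",
             training_data_ref="loans-2019Q4-v2")
```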

Communication is an important element of transparency highlighted by the HLEG in ALTAI. The system should inform the user from the very beginning that she is interacting with Artificial Intelligence. Consideration should also be given to whether the user should always have the right to opt out of communicating with the AI and request human contact. Perhaps the law should provide for situations, for example in out-of-court small-claims dispute resolution, where a request for human participation will not be met. Communication should undoubtedly also cover the level of autonomy of the AI system: the user should know from the very beginning whether the AI is only a source of information for the system operator, has a supporting function, or is a fully autonomous system functioning without human intervention.

Notwithstanding the above, where personal data are processed in an AI system there should always be a procedure for the data subject's access to her data and the exercise of her rights (Articles 12–22 GDPR), subject to any limitations that may arise under special regulations.

Photo credit: Pixabay

Explainable Artificial Intelligence

AI explainability is about providing the user with the information needed to understand why an autonomous system behaves in a certain way under certain circumstances (or would behave under hypothetical circumstances). It seems, however, that demanding full explainability from every possible AI system would be excessive; therefore, when defining the legally required level of explainability, it would be necessary to define simultaneously:

1) categories of stakeholders to whom certain information should be disclosed;

2) categories of information that may be disclosed to specific stakeholders;

3) circumstances in which such disclosure would be required, together with specification of the level of detail of the disclosed information.

Not all users can understand how raw data and code translate into benefits or harms that may affect them individually, and not all users need this information. In my opinion, the right to an explanation of an AI decision does not necessarily mean that the “black box” must be opened: the end user does not need to know how the algorithm works (the system can be so complicated that a layperson might not understand it anyway), but she should receive feedback that allows her to understand the decision, change her behaviour (in order to obtain a different decision) or appeal against it, if separate provisions provide for a right to appeal.

For people with an appropriate level of expert knowledge, information on the technical details of the AI's operation, including machine learning methods and the construction of neural networks, may be made available under certain conditions. The key here, however, is to define the categories of information that may be disclosed and the categories of experts who may have access to it, because the protection of the intellectual property rights of AI authors and producers cannot be ignored. Trade-offs commensurate with the results of the risk assessment and with the AI's impact on human life are essential. In any case, the appropriately limited scope of disclosed information must be sufficient to determine whether the AI meets the standards adopted in legal regulations or the requirements of the compliance markings it holds, in particular in terms of reliability and security.

An important point to note when attempting to construct standards for explainability is that end users of an AI system may not perceive the probability of a particular result in the same way the AI predicts it. Given that people are often guided by intuition, previous experiences and prejudices, their (more or less rational) expectations may differ significantly from the outcome of machine-learning-based “reasoning” in deep neural networks. Hence, it is not without reason that both practical guidance[1] and the literature[2] indicate that the explanation expected from an AI should include:

1) explanation of the decision-making process, i.e. an indication of the reasons that led to the particular decision, provided in an accessible and non-technical way;

2) clarification of responsibility, i.e. who was involved in the development, implementation, management and operation of the AI system;

3) explanation of the data, i.e. what data was used in the decision and how, what data was used for training and testing the AI system, and whether the data is reliable (i.e. whether sufficient unbiased data was used);

4) safety explanation, that is, evidence of the accuracy, reliability, security and resilience of the AI system;

5) explanation of the impact that the use of the AI system and its decisions have or may have on an individual or, more broadly, on a specific social group;

6) justification of the result, understood to mean that it is important not only to explain why a certain decision was made, but also to justify that the result of the AI operation is objective and fair[3].

Bearing this in mind, one may attempt to formulate conditions whose fulfilment would allow the conclusion that a given AI is adequately explainable:

1) the presented explanation should be consistent with existing general knowledge and values; the most likely explanation is not always the best one, and probability-based statistical generalisations in particular are viewed as unconvincing[4];

2) the AI should present an alternative result, showing how the result obtained (in the form of a prediction or decision) differs from other potential outcomes;

3) the explanation is provided in a timely manner — preferably in real time, but it is also permissible to provide explanations within a reasonable time after the decision is issued, provided that this enables the person to whom the decision relates to exercise her rights, e.g. the right to appeal against the AI decision;

4) the explanation presents the context in which the AI operates, including, inter alia, information on the data used to train the model;

5) the explanation indicates when a given case is an “outlier”, i.e. very different from the data used to train the model (this helps to identify situations in which the AI system may be wrong and human intervention is required);

6) in the event of insufficient knowledge or data, the AI system should notify the user of this fact (a minimal sketch of conditions 5 and 6 follows this list).
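
As announced in point 6) above, here is a minimal sketch of how conditions 5) and 6) might be checked in code: a case is flagged as an outlier relative to the training data, and a notification is produced when the model's confidence is too low for an automated decision. The z-score heuristic and both thresholds are assumptions chosen purely for illustration.

```python
import numpy as np

def review_notices(x: np.ndarray, train_X: np.ndarray, confidence: float,
                   z_threshold: float = 3.0, min_confidence: float = 0.7) -> list:
    """Return human-readable notices when a case should not be decided automatically."""
    notices = []
    # Condition 5: is this case very different from the data used to train the model?
    mean, std = train_X.mean(axis=0), train_X.std(axis=0) + 1e-9
    if np.any(np.abs((x - mean) / std) > z_threshold):
        notices.append("Input lies far outside the training data; human review is required.")
    # Condition 6: does the system have enough knowledge (confidence) for this case?
    if confidence < min_confidence:
        notices.append("Model confidence is insufficient for an automated decision.")
    return notices

train_X = np.random.default_rng(0).normal(size=(1000, 3))
print(review_notices(np.array([0.1, 8.5, -0.2]), train_X, confidence=0.55))
```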

The presentation of the explanation is no less important than its content. The explanation should be presented in a comprehensible way, in written or visual form, adapted to the level of knowledge of the stakeholder concerned. The simplest form of presentation is visualisation, highlighting the relationship between the input and the output data. A more advanced approach is hypothesis testing, where a well-formed argument is tested against the input data and the output decision. It seems, however, that for a non-expert user the best way to present an explanation would be natural language, in both verbal and written communication, identifying which data features and algorithmic functions led to the decision; such a solution is probably also the most technically complicated form of explanation[5].
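
As an illustration of the natural-language presentation described above, the sketch below turns a set of feature contributions (as might be produced by an attribution method) into a short plain-language statement. The feature names, contribution values and wording are hypothetical.

```python
def explain_in_plain_language(decision: str, contributions: dict, top_n: int = 3) -> str:
    """Render the most influential features behind a decision as a readable sentence."""
    # Rank features by the absolute size of their contribution to the outcome.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    parts = [
        f"{name.replace('_', ' ')} {'increased' if value > 0 else 'decreased'} the likelihood of this outcome"
        for name, value in ranked
    ]
    return f"The application was {decision} mainly because " + "; ".join(parts) + "."

print(explain_in_plain_language(
    "declined",
    {"debt_to_income_ratio": 0.42, "length_of_employment": -0.15, "number_of_late_payments": 0.31},
))
```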

[1] Information Commissioner’s Office (ICO) and The Alan Turing Institute, Explaining decisions made with AI, 20 May 2020, https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-ai/ (11.8.2020).

[2] S. Spreeuwenberg, op. cit., p. 58–59.

[3] Information Commissioner’s Office (ICO) and The Alan Turing Institute, op. cit., p. 4, 27.

[4] S. Spreeuwenberg, op. cit., p. 58.

[5] Cf. U. Bhatt, A. Xiang, S. Sharma, A. Weller, A. Taly, Y. Jia, J. Ghosh, R. Puri, J. M. F. Moura, P. Eckersley, Explainable Machine Learning in Deployment, https://arxiv.org/abs/1909.06342 (6.8.2020); A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, https://arxiv.org/abs/1910.10045 (12.8.2020).

Explainability by design or post-hoc

As a rule, the AI system should be explainable by design, i.e. it should provide explanations about its functioning in parallel to the actions and decisions taken. However, AI systems with a post-hoc explanation should also be considered acceptable. Such systems, however, would have to demonstrate techniques for transforming an uninterpretable model into an explainable one, in addition to explaining the decision itself[1].

It is worth noting that in a model where the AI system (or its decision) is explained after the decision has been made, the system itself does not have to be equipped with explainability as a built-in feature: algorithms dedicated to explaining the decisions of other AI systems are already being developed, covering both an explanation of the model (or, as needed, only the general principles of its operation) and an explanation of the result[2].
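
One common family of such post-hoc techniques is the global surrogate: an interpretable model is trained to mimic the outputs of the opaque model, and its rules are then offered as an approximate explanation. The sketch below, assuming scikit-learn and a synthetic data set, illustrates the general idea only; it does not reproduce any specific toolkit cited here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for an opaque model whose internals we cannot inspect directly.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc step: fit a shallow, interpretable surrogate on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black box (not accuracy on the true labels).
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")

# The surrogate's rules serve as an approximate, human-readable explanation of the model.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```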

Also worth considering is the concept of presenting the explanation set out in the document developed by the ICO and the Alan Turing Institute, “Explaining decisions made with AI”. According to it, the explanation, like the information obligation under Articles 13 and 14 of the GDPR, may be presented in layers. This helps to avoid overloading the user with information that may be irrelevant or incomprehensible to her[3].
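
A layered explanation of this kind could, for example, be represented as nested content served progressively: a first layer for the affected person, deeper layers for experts or auditors on request. The structure and field names below are my own illustrative assumptions, not the ICO/Turing template.

```python
from dataclasses import dataclass, field

@dataclass
class LayeredExplanation:
    summary: str                 # layer 1: always shown to the affected person
    details: str = ""            # layer 2: shown on request ("tell me more")
    technical: dict = field(default_factory=dict)  # layer 3: audit layer for experts and supervisors

    def layers(self, depth: int) -> list:
        """Return only as much of the explanation as the requested depth allows."""
        content = [self.summary, self.details, self.technical]
        return content[:max(1, min(depth, len(content)))]

exp = LayeredExplanation(
    summary="Your application was declined mainly because of your recent repayment history.",
    details="The decision was produced by a credit-scoring model; the three most influential factors were ...",
    technical={"model_version": "credit-model-1.3", "training_data_ref": "loans-2019Q4-v2"},
)
print(exp.layers(1))  # only the first, plain-language layer
```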

In most cases it is probably not necessary to explain fully how the AI works. A full explanation will be crucial for decisions that have a significant impact on human life, i.e. most likely for the AI that the White Paper describes as high-risk (e.g. in healthcare, the judiciary, transport and energy)[4].

[1] Cf. B. Khaleghi, The How of Explainable AI: Post-modelling Explainability, https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability-8b4cbc7adf5f (12.8.2020).

[2] Cf. A. Mojsilovic, Introducing AI Explainability 360, https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/ (12.8.2020).

[3] Information Commissioner’s Office (ICO) and The Alan Turing Institute, op. cit., p. 82, 100.

[4] European Commission, op. cit., p. 17.

Photo credit: Pixabay

Summary

In view of the importance of explainability, it seems reasonable that future legal regulations, at least within the EU, should include specific obligations related to the implementation of the mechanisms described above. They would have a much wider application than Article 22 GDPR, which guarantees data subjects the right not to be subject to decisions based solely on automated processing, including profiling, where these produce legal effects concerning them or similarly significantly affect them. The transparency of AI operations and the explainability of its decisions should apply, to a varying degree, not only to decisions based solely on the automated processing of personal data, but to all AI activities and applications. This would contribute not only to an increase in public confidence in AI, but also to the proper allocation of responsibility for the operation of autonomous systems.

About the Author

Dr. Gabriela Bar is a Managing Partner at Szostek_Bar and Partners Law Firm (www.szostek-bar.pl), attorney at law and Ph.D. in law, a graduate of the Faculty of Law at the University of Wrocław; a member of the IEEE Law Committee, the New Technology Law Association, Women in AI and AI4EU; a research associate at the Centre for Legal Problems of Technical Issues and New Technologies (Opole University, Poland) and a member of the SHOP4FC project (Digital Manufacturing Platforms for Connected Smart Factories, Industry 4.0); lecturer. She is also an experienced expert in the law of new technologies: electronic contracts, e-commerce, data protection and Artificial Intelligence.
