The ‘right to an explanation’ under EU data protection law
EU data protection law restricts solely automated individual decision-making (making a decision automatically, without any human involvement) and profiling (automated processing of personal data to evaluate certain things about an individual). Profiling can be part of an automated decision-making process.
Additional requirements apply to solely automated decision-making that has legal or similarly significant effects.
(1) Controllers can only carry out this type of decision-making where the decision is: (i) necessary for the entry into or performance of a contract; or (ii) authorized by Union or Member State law applicable to the controller; or (iii) based on the individual’s explicit consent.
(2) Controllers must: (i) give individuals information about the processing; (ii) provide simple ways for them to request human intervention or challenge a decision; (iii) carry out regular checks to make sure that automated systems are working as intended.
There are additional restrictions on using special category and children’s personal data.
What rights are related to automated decision making and why are they important?
Automated individual decision-making and profiling can lead to quicker and more consistent decisions when used responsibly. Used irresponsibly, however, they pose significant risks to individuals. The General Data Protection Regulation (GDPR) includes provisions specifically designed to address these risks.
Article 22 of the GDPR provides:
Article 22: “Automated individual decision-making, including profiling”
1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
2. Paragraph 1 shall not apply if the decision:
(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) is authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
(c) is based on the data subject’s explicit consent.
3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.
4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
Automated individual decision-making is a decision made by automated means without any human involvement. Examples include:
- an online decision to award a loan; and
- a recruitment aptitude test using pre-programmed algorithms and criteria.
Automated individual decision-making does not have to involve profiling, although it often will.
Profiling is “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” (see Article 4(4) of GDPR).
- Organizations obtain personal information about data subjects from a variety of sources, such as internet searches, buying habits, lifestyle and behavior data gathered from mobile phones, social networks, video surveillance systems and the Internet of Things.
- This information is analyzed to classify people into different groups or sectors using algorithms and machine learning. The analysis identifies links between different behaviors and characteristics to create profiles for individuals (a minimal illustrative sketch follows the list below).
Based on the traits of others who appear similar, organizations use profiling to:
- discover individuals’ preferences;
- predict their behavior; and/or
- make decisions about them.
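To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of this kind of classification: a model is trained on past individuals' traits and then places a new individual into a segment. All feature names, data, labels and the choice of model are invented assumptions, not a description of any real system.

```python
# Hypothetical profiling sketch: every feature, value and label is invented.
from sklearn.tree import DecisionTreeClassifier

# Each row describes one past individual:
# [monthly_site_visits, avg_basket_value_eur, late_payments]
training_data = [
    [25, 120.0, 0],
    [3,  15.0,  2],
    [40, 300.0, 0],
    [1,  10.0,  3],
]
# Segment labels assigned earlier: 1 = "high value", 0 = "low value"
segments = [1, 0, 1, 0]

model = DecisionTreeClassifier().fit(training_data, segments)

# A new individual is placed in a segment based on the traits of
# similar past individuals.
new_individual = [[30, 200.0, 1]]
print(model.predict(new_individual))  # e.g. [1] -> "high value" segment
```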
This can be very useful for organizations and individuals in many sectors, including healthcare, education, financial services and marketing.
GDPR restricts controllers from making solely automated decisions, including those based on profiling, that have a legal or similarly significant effect on individuals (see Article 22(1) of GDPR).
- “Solely automated”: There must be no human involvement in the decision-making process.
- Seriously negative impact: “Legal or similarly significant effects” is not defined in the GDPR, but the decision must have a serious negative impact on an individual.
- A ‘legal effect’ is something that adversely affects an individual’s legal rights.
- ‘Similarly significant effects’ are more difficult to define but would include, for example, automatic refusal of an online credit application and e-recruiting practices without human intervention.
Solely automated individual decision-making, including profiling, with legal or similarly significant effects is restricted, although this restriction can be lifted in certain circumstances. Controllers can only carry out solely automated decision-making with legal or similarly significant effects if the decision is:
- necessary for entering into or performance of a contract between an organization and the individual;
- authorized by law (for example, for the purposes of preventing fraud or tax evasion); or
- based on the individual's explicit consent.
A controller using special category personal data can only carry out processing described in Article 22(1) if:
- it has the individual's explicit consent; or
- the processing is necessary for reasons of substantial public interest.
Because this type of processing is considered high-risk, controllers must carry out a Data Protection Impact Assessment (DPIA) to demonstrate that they have identified and assessed the risks and how they will address them.
The GDPR also:
- requires controllers to give individuals specific information about the processing;
- obliges controllers to take steps to prevent errors, bias and discrimination; and
- gives individuals rights to challenge and request a review of the decision.
These provisions are designed to increase individuals' understanding of how automated processing systems make decisions that affect them. To comply, controllers must (a minimal sketch of these safeguards follows the list):
- provide meaningful information about the logic involved in the decision-making process, as well as the significance and the envisaged consequences for the individual;
- use appropriate mathematical or statistical procedures;
- ensure that individuals can: (i) obtain human intervention; (ii) express their point of view; and (iii) obtain an explanation of the decision and challenge it;
- establish appropriate technical and organizational measures to correct inaccuracies and minimize the risk of errors;
- secure personal data in a way that is proportionate to the risks to the interests and rights of the individual and that prevents discriminatory effects.
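As a purely illustrative sketch of how a controller might operationalize some of these safeguards, the Python below records a decision together with its main factors (so meaningful information can be given on request), and lets the individual express a point of view and queue the decision for human intervention. The class names, fields and review queue are hypothetical assumptions, not a prescribed design.

```python
# Hypothetical Article 22(3)-style safeguards: record the decision and its
# main factors, and route contested decisions to a human reviewer.
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str            # e.g. "approved" / "refused"
    main_factors: list      # recorded so a meaningful explanation can be given
    contested: bool = False

human_review_queue = []     # contested decisions awaiting human intervention

def explain(decision: AutomatedDecision) -> str:
    """Return meaningful information about the main factors behind the decision."""
    return (f"Decision '{decision.outcome}' was based mainly on: "
            + ", ".join(decision.main_factors))

def contest(decision: AutomatedDecision, point_of_view: str) -> None:
    """Record the individual's point of view and queue the case for human review."""
    decision.contested = True
    human_review_queue.append((decision, point_of_view))

d = AutomatedDecision("subject-123", "refused",
                      ["low income stability", "short credit history"])
print(explain(d))
contest(d, "My income is stable; the data on file is out of date.")
```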
Article 22 applies to solely automated individual decision-making, including profiling, with legal or similarly significant effects. If the processing does not fall within this definition, it is not restricted by Article 22; however, controllers must still:
- comply with EU data protection law principles;
- identify and record the lawful basis for the processing;
- have processes in place so data subjects can exercise their rights.
Individuals have a right to object to profiling in certain circumstances and controllers must bring details of this right specifically to their attention.
The right to object to automated decision-making can be restricted under Member State law in certain circumstances (see the section on "Restrictions on the rights of individuals").
- Studying the behavior of AI: a new paper frames the emerging interdisciplinary field of machine behavior (article by the MIT Media Lab for Medium)
- How Artificial Intelligence Works (Briefing for EU Parliament)
- What is AI: What Exactly is Artificial Intelligence and Why is it Driving me Crazy by William Vorhies
- Can AI Be a Fair Judge in Court? Estonia Thinks So — Eric Niiler for Wired
- AI Ethics: Seven Traps by Annette Zimmermann for Freedom to Tinker.
Counterfactual explanations without opening the black box: automated decisions and the GDPR by Sandra Wachter, Brent Mittelstadt, and Chris Russell (2018). (There has been much discussion of the "right to explanation" in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the 'black box' of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information…) A toy sketch of the counterfactual idea follows this reading list.
The Ethical Machine: an anthology of essays on the ethics of artificial intelligence, bias, and what they mean for the future of technology and society.
From dignity to security protocols: a scientometric analysis of digital ethics by René Mahieu, Nees Jan van Eck, David van Putten, and Jeroen van den Hoven
Accountable Algorithms by Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson & Harlan Yu (165 University of Pennsylvania Law Review 633).
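To illustrate the counterfactual idea from the Wachter, Mittelstadt and Russell paper above, here is a toy Python sketch: it searches for the smallest change to one input that would flip a refusal into an approval. The scoring rule, coefficients and threshold are all invented for illustration; real models and counterfactual search methods are far more sophisticated.

```python
# Toy counterfactual explanation: find the smallest income increase that
# would flip a (hypothetical) automated refusal into an approval.
def decision(income: float, debt: float) -> bool:
    """Invented scoring rule standing in for an opaque model."""
    return 0.05 * income - 0.2 * debt - 100 > 0

def counterfactual_income(income: float, debt: float, step: float = 100.0):
    """Search upward in small steps for the income that flips the decision."""
    if decision(income, debt):
        return None  # already approved; no counterfactual needed
    needed = income
    for _ in range(10_000):  # bound the search
        needed += step
        if decision(needed, debt):
            return needed
    return None  # no counterfactual found within the search bound

income, debt = 15_000.0, 4_000.0
cf = counterfactual_income(income, debt)
print(f"Refused. Approval would have required an income of roughly {cf:.0f}.")
# -> "Refused. Approval would have required an income of roughly 18100."
```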
Relevant provisions in the GDPR: Articles 4(4), 9, 12, 13, 14, 15, 21, 22, and 35(1) and (3).
Guidelines from regulators
European Data Protection Board (EDPB):
- The EDPB has endorsed the WP29 Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679 (WP251)
European Data Protection Supervisor (EDPS):
- EDPS Opinion on coherent enforcement of fundamental rights in the age of big data (2016)
- The Cartoon Introduction to Data Ethics (2018)
- Royal Society (2018): Governing artificial intelligence: ethical, legal, and technical opportunities and challenges
- Professor Dame Wendy Hall and Jérôme Pesenti's report on Growing the Artificial Intelligence Industry in the UK
- Big data, artificial intelligence, machine learning and data protection (UK Information Commissioner's Office (ICO))
- European Union Agency for Fundamental Rights, "Getting the Future Right: #ArtificialIntelligence and Fundamental Rights" (2020), which marks a policy shift from ethics-based AI governance to fundamental-rights-based AI governance.
- Council of Europe (CoE): Report on AI (2018)
- The CAHAI Secretariat published the report "Towards Regulation of AI Systems: Global perspectives on the development of a legal framework on Artificial Intelligence systems based on the Council of Europe's standards on human rights, democracy and the rule of law" (2020)
- Preparing for the Future of Artificial Intelligence (White House report, 2016)
- Report Summary: FTC on Big Data (a summary of Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (FTC report, 2016))
- Vermont Artificial Intelligence Task Force, Final Report (January 2020)
- The Michael Dukakis Institute for Leadership and Innovation Center for AI and Digital Policy published a comprehensive report (including country-specific reports), "Artificial Intelligence and Democratic Values: The AI Social Contract Index 2020" (AISCI-2020), led by Marc Rotenberg.
- The Future of Privacy Forum published The Spectrum of Artificial Intelligence: An Infographic Tool.
A linked video, while quite technical, is useful in breaking down the issues of fairness and bias in algorithmic decision-making.
- IEEE, the international technical standards body, is working on benchmarks/standards for legal AI apps and machine learning tools.
- The Toronto Declaration: A 2018 declaration by AI professionals of their intent to protect the right to equality and non-discrimination in machine learning systems
- Ethical Accountability Framework for Hong Kong, China, by the Information Accountability Foundation and the Privacy Commissioner for Personal Data, Hong Kong.
- 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC, 2018): Declaration on Ethics and Data Protection in Artificial Intelligence
From the New York Times: “The Secretive Company That May End Privacy As We Know It,” about a facial recognition app that can help a user identify strangers and has been embraced by law enforcement.
- Reaction: Cybersecurity and tech writer Joseph Steinberg has some tips on making yourself less recognizable to apps like Clearview.
- Twitter has sent a letter to Clearview telling the company to cease scraping photos and content from tweets, and delete any content it currently holds that was acquired from Twitter.
- Sen. Edward Markey (D-Mass.) has sent a list of questions to Clearview's CEO, including who or what entities are in its customer base, what access employees have, and whether it complies with the Children's Online Privacy Protection Act (COPPA).
Worried About Privacy at Home? There's an AI for That (on edge AI) by Clive Thompson for Wired, Jan. 2020.
How machine learning powers Facebook’s News Feed ranking algorithm (Facebook engineering page)
What is an algorithm? It depends on who you ask. For better accountability, we should shift the focus from the design of these systems to their impact. By Kristian Lum and Rumman Chowdhury for MIT Technology Review. February 26, 2021
NFTs and the Law: An “Explain it Like I’m Five” Overview by the Harris County Law Library. March 9, 2021
Markkula Center for Applied Ethics, An Ethical Toolkit for Engineering/Design Practice.
Microsoft Research-Carnegie Mellon, Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI.