The ‘right to an explanation’ under EU data protection law

Golden Data Law
Published in Golden Data
Feb 22, 2019

Key points:

EU data protection law has specific rules for solely automated individual decision-making (making a decision by automated means without any human involvement) and for profiling (automated processing of personal data to evaluate certain things about an individual). Profiling can be part of an automated decision-making process.

Additional requirements apply to solely automated decision-making that has legal or similarly significant effects.

(1) Controllers can only carry out this type of decision-making where the decision is: (i) necessary for the entry into or performance of a contract; or (ii) authorized by Union or Member State law applicable to the controller; or (iii) based on the individual’s explicit consent.

(2) Controllers must: (i) give individuals information about the processing; (ii) provide simple ways for them to request human intervention or challenge a decision; and (iii) carry out regular checks to make sure their automated systems are working as intended.

There are additional restrictions on using special category and children’s personal data.

What rights are related to automated decision-making and why are they important?

Automated individual decision-making and profiling can lead to quicker and more consistent decisions when used responsibly. Used irresponsibly, however, they pose significant risks to individuals. The General Data Protection Regulation (GDPR) includes provisions specifically designed to address these risks.

Article 22 of GDPR specifies:

Article 22: “Automated individual decision-making, including profiling”

1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

2. Paragraph 1 shall not apply if the decision:

(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;

(b) is authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or

(c) is based on the data subject’s explicit consent.

3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.

Automated individual decision-making is a decision made by automated means without any human involvement. Examples include:

  • an online decision to award a loan (sketched in code below); and
  • a recruitment aptitude test using pre-programmed algorithms and criteria.

Automated individual decision-making does not have to involve profiling, although it often will.
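
To make “without any human involvement” concrete, here is a minimal, hypothetical Python sketch of the loan example above. Every name in it (CreditApplication, score_application, queue_for_review) is invented, and the toy scoring rule stands in for a real credit model. The point is structural: in the first decision path no human ever sees the outcome, while the second routes non-obvious cases to a person with real authority to change the result (token human involvement would not be enough to escape Article 22).

```python
# Hypothetical sketch of "solely automated" vs. human-reviewed
# decision-making. All names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class CreditApplication:
    income: float
    existing_debt: float
    requested_amount: float

def score_application(app: CreditApplication) -> float:
    """Toy scoring rule standing in for a real credit model."""
    debt_ratio = (app.existing_debt + app.requested_amount) / max(app.income, 1.0)
    return 1.0 - min(debt_ratio, 1.0)

def decide_solely_automated(app: CreditApplication) -> str:
    # No human ever sees this decision: if it has legal or similarly
    # significant effects, Article 22(1) restricts it.
    return "approved" if score_application(app) >= 0.5 else "refused"

def queue_for_review(app: CreditApplication, score: float) -> str:
    # Placeholder for a manual-review workflow; a person with authority
    # to change the outcome makes the final call.
    return "pending human review"

def decide_with_human_review(app: CreditApplication) -> str:
    # Routing borderline and negative outcomes to a human reviewer
    # takes the decision outside "solely automated" processing.
    score = score_application(app)
    return "approved" if score >= 0.7 else queue_for_review(app, score)

app = CreditApplication(income=40_000, existing_debt=15_000, requested_amount=10_000)
print(decide_solely_automated(app))   # "refused" with no human involvement
print(decide_with_human_review(app))  # "pending human review"
```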

Profiling is “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” (see Article 4(4) of GDPR).

  • Organizations obtain personal information about data subjects from a variety of different sources. Internet searches, buying habits, lifestyle and behavior data gathered from mobile phones, social networks, video surveillance systems and the Internet of Things are examples of the sources from which organizations might collect this data.
  • Information is analyzed to classify people into different groups or sectors, using algorithms and machine-learning. This analysis identifies links between different behaviors and characteristics to create profiles for individuals.

Based on the traits of others who appear similar, organizations use profiling to:

  • discover individuals’ preferences;
  • predict their behavior; and/or
  • make decisions about them.

This can be very useful for organizations and individuals in many sectors, including healthcare, education, financial services and marketing.
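
As a concrete illustration of inferring something about one person from “the traits of others who appear similar”, here is a deliberately tiny Python sketch with invented data and feature names: it predicts a new individual’s behavior from the single most similar known customer. Real profiling systems use far richer features and models (clustering, machine learning), but the underlying move is the same.

```python
# Minimal profiling sketch: the predicted behavior comes from similar
# people, not from anything the individual explicitly disclosed.
# Data, features and labels are invented for illustration.

from math import dist

# Each known customer: ((avg_monthly_spend, site_visits_per_week), observed_behavior)
known_customers = [
    ((120.0, 2.0), "rarely_buys"),
    ((480.0, 9.0), "frequent_buyer"),
    ((450.0, 8.0), "frequent_buyer"),
    ((100.0, 1.0), "rarely_buys"),
]

def predict_behavior(features: tuple[float, float]) -> str:
    """Classify a new individual by their single nearest known neighbor."""
    _, behavior = min(known_customers, key=lambda kc: dist(kc[0], features))
    return behavior

print(predict_behavior((430.0, 7.0)))  # -> "frequent_buyer"
```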

GDPR restricts controllers from making solely automated decisions, including those based on profiling, that have a legal or similarly significant effect on individuals (see Article 22(1) of GDPR).

  • “Solely automated”: There must be no human involvement in the decision-making process.
  • Seriously negative impact: “Legal or similarly significant effects” is not defined in the GDPR, but the decision must have a serious negative impact on an individual.
  • A ‘legal effect’ is something that adversely affects an individual’s legal rights.
  • ‘Similarly significant effects’ are more difficult to define but would include, for example, automatic refusal of an online credit application or e-recruiting practices without human intervention.

Solely automated individual decision-making — including profiling — with legal or similarly significant effects is restricted, although this restriction can be lifted in certain circumstances. Controllers can only carry out solely automated decision-making with legal or similarly significant effects if the decision is:

  • necessary for entering into or performance of a contract between an organization and the individual;
  • authorized by Union or Member State law applicable to the controller (for example, for fraud or tax-evasion monitoring and prevention purposes); or
  • based on the individual’s explicit consent.

A controller using special category personal data can only carry out processing described in Article 22(1) if:

  • it has the individual’s explicit consent; or
  • the processing is necessary for reasons of substantial public interest.

Because this type of processing is considered high-risk, a Data Protection Impact Assessment (DPIA) must be carried out to show that the controller has identified and assessed the risks and how it will address them.

The GDPR also:

  • requires controllers to give individuals specific information about the processing;
  • obliges controllers to take steps to prevent errors, bias and discrimination; and
  • gives individuals rights to challenge and request a review of the decision.

These provisions are designed to increase individuals’ understanding about how automated processing systems make decisions that affect them.

Controllers must:

  • provide meaningful information about the logic involved in the decision-making process, as well as the significance and the envisaged consequences for the individual (one possible approach, the counterfactual explanation, is sketched after this list);
  • use appropriate mathematical or statistical procedures;
  • ensure that individuals can: (i) obtain human intervention; (ii) express their point of view; and (iii) obtain an explanation of the decision and challenge it;
  • establish appropriate technical and organizational measures to correct inaccuracies and minimize the risk of errors;
  • secure personal data in a way that is proportionate to the risks to the interests and rights of the individual and that prevents discriminatory effects.
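
One much-discussed way to provide “meaningful information about the logic involved” is a counterfactual explanation, the approach developed in the Wachter, Mittelstadt and Russell paper cited under “Papers” below: tell the individual the smallest change to their data that would have produced a different outcome. The Python sketch below is a toy version built on invented assumptions (a simple linear score, made-up feature names, a brute-force search over one feature at a time); it illustrates the idea only and is not a compliance recipe.

```python
# Toy counterfactual-explanation sketch (cf. Wachter, Mittelstadt &
# Russell, cited under "Papers" below). The model, weights and feature
# names are invented for illustration.

FEATURES = ["income", "existing_debt"]
WEIGHTS = {"income": 0.00002, "existing_debt": -0.00004}
BIAS = 0.3
THRESHOLD = 0.5  # score >= THRESHOLD means "approved"

def score(applicant: dict) -> float:
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def counterfactual(applicant: dict, step: float = 100.0) -> dict | None:
    """Search, one feature at a time, for the smallest single-feature
    change (in `step` increments) that turns a refusal into approval."""
    if score(applicant) >= THRESHOLD:
        return None  # already approved; nothing to explain
    for f in FEATURES:
        direction = 1 if WEIGHTS[f] > 0 else -1
        candidate = dict(applicant)
        for _ in range(10_000):
            candidate[f] += direction * step
            if candidate[f] < 0:
                break  # don't suggest impossible negative values
            if score(candidate) >= THRESHOLD:
                return {f: candidate[f]}
    return None

applicant = {"income": 30_000, "existing_debt": 12_000}
print(round(score(applicant), 2))  # 0.42 -> refused
print(counterfactual(applicant))   # {'income': 34000.0} -> "you would have
                                   # been approved with an income of 34,000"
```

An explanation like this tells the individual something actionable without disclosing the full model; whether it satisfies Articles 13-15 in a given case is a legal question, not a purely technical one.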

Article 22 applies to solely automated individual decision-making, including profiling, with legal or similarly significant effects. Processing that does not match this definition is not restricted by Article 22, but controllers must still comply with the GDPR’s general principles and transparency obligations.

Individuals also have a right to object to profiling in certain circumstances, and controllers must bring details of this right specifically to their attention.

The right to object to automated decision-making can be restricted under Member State law in certain circumstances (see the section on “Restrictions on the rights of individuals”).

ICO Checklist

For all automated decision making and profiling

Available at the UK Information Commissioner’s Office site at https://ico.org.uk/

For solely automated individual decision-making, including profiling with legal or similarly significant effects (Article 22)

Available at the UK Information Commissioner’s Office site at https://ico.org.uk/

Additional Resources

Books

Ethically Aligned Design, First Edition (downloadable book) by the Institute of Electrical and Electronics Engineers (IEEE).

Papers

Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR by Sandra Wachter, Brent Mittelstadt, and Chris Russell (2018). (There has been much discussion of the “right to explanation” in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the ‘black box’ of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information…)

The Ethical Machine: an anthology of essays on the ethics of artificial intelligence, bias, and what they mean for the future of technology and society.

From dignity to security protocols: a scientometric analysis of digital ethics by René Mahieu, Nees Jan van Eck, David van Putten, and Jeroen van den Hoven

Accountable Algorithms by Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu, 165 University of Pennsylvania Law Review 633 (2017).

Access to Algorithms by Hannah Bloch-Wehba (Drexel University Thomas R. Kline School of Law; Yale Information Society Project), available on SSRN.

RESOURCES EU

Legal citations

Relevant provisions in the GDPR — Articles 4(4), 9, 12, 13, 14, 15, 21, 22, and 35(1) and (3).


RESOURCES US

Other

This video is quite technical but useful in breaking down the issues of fairness and bias in algorithmic decision-making.

From the New York Times: “The Secretive Company That May End Privacy As We Know It,” about a facial recognition app that can help a user identify strangers and has been embraced by law enforcement.

Worried About Privacy at Home? There’s an AI for That (on Edge AI) by Clive Thompson for Wired, Jan. 2020.

How machine learning powers Facebook’s News Feed ranking algorithm (Facebook engineering page)

What is an algorithm? It depends on who you ask. For better accountability, we should shift the focus from the design of these systems to their impact. By Kristian Lum and Rumman Chowdhury for MIT Technology Review. February 26, 2021

Can Auditing Eliminate Bias from Algorithms? by Alfred Ng for The Markup, February 23, 2021.

NFTs and the Law: An “Explain it Like I’m Five” Overview by the Harris County Law Library. March 9, 2021

OSU Program on Data and Governance Final Report on Business Data Ethics

OSU Program on Data and Governance recordings of Webinar series on Business Data Ethics

Markkula Center for Applied Ethics, An Ethical Toolkit for Engineering/Design Practice.

Microsoft Research-Carnegie Mellon, Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI.

Proposed EU AI Regulation.

Golden Data Law is a mission-driven benefit corporation that provides legal services to the not-for-profit community and to governmental agencies.