Machine Learning and the GDPR

Seda Ilik
Published in DataBulls
Mar 26, 2021

“Computers are able to see, hear and learn. Welcome to the future.”

ABSTRACT

Machine Learning (ML) is becoming increasingly important for our lives. Businesses and governments are using ML systems to support and make decisions that significantly impact everyday life. However, concerns have arisen because decisions based on ML models can be unfair and discriminatory. In legal literature, the ‘right to an explanation’ has emerged as an attractive remedy for the challenges raised by these concerns.

However, trying to extract “meaningful information” from the logic of a computer system risks becoming a new transparency fallacy, an illusory remedy rather than something substantively helpful.

While individual remedies are the central focus of the General Data Protection Regulation (GDPR), other aspects of the GDPR, namely the data protection by design (DPbD) principles and data protection impact assessments (DPIAs), deserve more attention. DPbD and DPIAs are more likely to ensure a better algorithmic society as a whole if they are applied with the vision that underlies the rationale of data protection.

ML is a growing part of our lives. In the last few years it has emerged stronger than ever before, but with that growth have come concerns, mainly around unfairness, discrimination and opacity. The rise of ML caught society unprepared, creating a pressing need for both law and society to adapt to these new technologies. Much concern has been raised about the legal issues surrounding the potential risks of ML systems to individuals’ fundamental rights and, in particular, their right to data protection. ML has made it easier to create profiles and make automated decisions with the potential to significantly impact individuals’ rights and freedoms.

Photo by Jacek Dylag on Unsplash

The GDPR introduces new provisions to address the risks arising from profiling and automated decision-making. Article 22 provides a general prohibition on solely automated decision-making, including profiling, that produces legal or similarly significant effects. It grants data subjects the right not to be subject to such solely automated processing, including profiling. This right does not apply if the decision is necessary for entering into or performing a contract, is authorised by Union or Member State law, or is based on the data subject’s explicit consent.

If the contract or explicit consent exceptions apply, specified safeguards must be put in place. The minimum safeguards stated in Article 22(3) are the right to obtain human intervention on the part of the controller, to express a point of view and to contest a decision reached by an automated process. Recital 71 repeats the safeguards listed in Article 22(3) but also adds a further “right to obtain an explanation of the decision reached”, which some call the “right to an explanation”.

The right to an explanation emerged as an attractive remedy to counter the opaqueness of ML systems. However, legal scholars’ views are divided over the right to an explanation, both as to where the right originates and as to the level of explanation it grants. While some derive the right to an explanation from Articles 13–15 of the GDPR, others argue that it is mentioned only in Recital 71, which is non-binding, and so does not exist in the GDPR at all. However, all legal scholars agree that Articles 13–15 grant the right to obtain ‘meaningful information about the logic involved’ in decisions to which Articles 22(1) and 22(4) apply.

While Wachter et al. call this a ‘right to be informed’, Selbst et al. call it a right to an explanation. The discussion about what ‘meaningful information’ should provide remains open. The European Data Protection Board also notes that the explanation to be provided under Article 15 implies a more general form of oversight, ‘rather than a right to an explanation of a particular decision’.

I will not reproduce the whole discussion on the right to an explanation here, but will briefly suggest that the explanation provided to the data subject should at least include the reasons for a decision, the grounds on which to contest that decision, and what could be changed to achieve a desired decision in the future. Despite all the discussion about the right to an explanation, individuals’ desire to understand, contest and alter decisions does not change based on definitional issues. Therefore, the right should be interpreted in a way that takes account of individuals’ general interests, the GDPR’s emphasis on individual protection and its underlying goals.
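To make that suggestion concrete, here is a minimal sketch of what such an explanation could look like in code. It is purely illustrative: the feature names, weights and threshold are hypothetical stand-ins for a real scoring model (say, in a credit decision), not anything mandated by the GDPR.

```python
# Illustrative only: FEATURES, WEIGHTS and THRESHOLD are hypothetical
# stand-ins for a real automated decision model.

FEATURES = ["income", "debt", "years_employed"]
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # hypothetical linear model
THRESHOLD = 1.0  # score required for approval


def score(applicant: dict) -> float:
    """Weighted score for an applicant (stand-in for the real model)."""
    return sum(WEIGHTS[f] * applicant[f] for f in FEATURES)


def explain_decision(applicant: dict) -> dict:
    """Return the three elements suggested above: reasons for the decision,
    how to contest it, and what could change the outcome (a counterfactual)."""
    s = score(applicant)
    approved = s >= THRESHOLD

    # Reasons: each feature's contribution to the score, most influential first.
    contributions = sorted(
        ((f, WEIGHTS[f] * applicant[f]) for f in FEATURES),
        key=lambda fc: abs(fc[1]),
        reverse=True,
    )

    # Counterfactual: the change to the most influential feature that would
    # flip a refusal into an approval.
    counterfactual = None
    if not approved:
        f = max(FEATURES, key=lambda name: abs(WEIGHTS[name]))
        needed = (THRESHOLD - s) / WEIGHTS[f]
        counterfactual = f"change {f} by {needed:+.2f} to reach the approval threshold"

    return {
        "decision": "approved" if approved else "refused",
        "reasons": [f"{f} contributed {c:+.2f} to the score" for f, c in contributions],
        "how_to_contest": "ask the controller for human review of the decision (Art. 22(3))",
        "counterfactual": counterfactual,
    }


if __name__ == "__main__":
    print(explain_decision({"income": 1.0, "debt": 1.0, "years_employed": 1.0}))
```

Even a toy example like this shows that the three elements can be produced mechanically; the harder questions are legal and organisational, not computational.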

The protection enshrined within the GDPR’s right to an explanation may not provide transparency in the form of a complete individualized explanation. However, this lacuna may be filled by the GDPR requirements that ensure compliance with its provisions: the DPbD principles and DPIAs.

DPIAs serve as a DPbD safeguard and, under the GDPR, they become the required norm for ML systems. In contrast to remedies invoked on an individual basis by data subjects, properly adopted DPIAs and DPbD methodologies can provide the evidence necessary to inform data subjects and shape the design and deployment of systems from the outset in a way that is more transparent, fair and accountable. However, to be applied successfully, these safeguards require that implementers start with an appreciation of the logic underlying the data protection regime and a willingness to implement that logic in a practical manner. It is to be hoped that, as time progresses, technology developers and implementers will adopt that approach and create a better, algorithm- and privacy-rich society.
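As a purely illustrative sketch of how DPbD thinking might be operationalised in practice, a controller could keep a structured record of every automated decision so that the evidence a DPIA calls for is generated by design rather than reconstructed after the fact. The field names and the simple JSON Lines log below are assumptions for illustration, not requirements drawn from the GDPR.

```python
# Hypothetical per-decision record keeping DPIA findings and Article 22
# safeguards traceable by design. Field names are assumptions, not GDPR text.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    model_version: str            # which model produced the decision
    dpia_reference: str           # identifier of the DPIA covering this processing
    lawful_basis: str             # e.g. "explicit consent" or "contract"
    inputs: dict                  # the data actually used for this decision
    outcome: str                  # the decision communicated to the data subject
    explanation: dict             # reasons, contest route, counterfactual
    human_reviewer: Optional[str] = None  # set when an Art. 22(3) review takes place
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to a simple append-only audit log (one JSON object per line)."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
```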
