The A-Levels and IB Algorithms Fiascos Show Why Data Protection Should be Regulated Differently Than Privacy

Gabriela Zanfir-Fortuna
Published in The Startup
4 min read · Oct 13, 2020
Image by Jae Rue from Pixabay

Two scandals involving algorithm-based final scores for high-school seniors put automated decision-making based on personal data in the spotlight this summer, abruptly revealing to the general public the serious dangers of a society sorted by algorithms. The two cases concern the International Baccalaureate (IB) program and the final exams of high-school graduates in the UK (‘A-levels’ and their equivalents).

Ultimately, these developments show why the protection of personal data is an essential right that needs to be safeguarded in our new, data-rich world, and why it should indeed be construed and regulated differently than privacy. In the context of the heightened conversation in the US on privacy regulation, this distinction is more important than ever.

In both cases, exams were not held because of the COVID-19 pandemic. For the first time, final scores were decided by algorithms built on statistical models that took into account not only a student’s past performance, but also (as the sketch after this list illustrates):

  • the overall performance of similar exam candidates in previous years,
  • the average scores of their schools, and
  • predictions of how a student would perform if they took the exam.
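
To make the mechanics more concrete, here is a deliberately simplified, hypothetical sketch in Python of how a standardization model of this kind might blend an individual’s predicted grade with historical averages. This is not the actual model used in either case; the function name, the weights and the blending approach are assumptions chosen purely for illustration.

```python
# Hypothetical illustration only: a toy "standardization" of a predicted grade.
# The weights and structure are invented; they do not reproduce the real models.

def standardize_grade(teacher_prediction: float,
                      school_historical_avg: float,
                      cohort_subject_avg: float,
                      weight_individual: float = 0.5,
                      weight_school: float = 0.3,
                      weight_cohort: float = 0.2) -> float:
    """Blend an individual's teacher-predicted grade with historical averages.

    The toy model pulls the individual prediction toward the school's past
    results and the global subject average, which is exactly why students at
    historically lower-scoring schools can be marked down regardless of
    their own performance.
    """
    blended = (weight_individual * teacher_prediction
               + weight_school * school_historical_avg
               + weight_cohort * cohort_subject_avg)
    return round(blended)

# A strong student (predicted 7 on the IB's 1-7 scale) at a school whose
# past cohorts averaged 4.5 is downgraded to a 6 in this toy model.
print(standardize_grade(teacher_prediction=7,
                        school_historical_avg=4.5,
                        cohort_subject_avg=5.0))
```

Even this caricature shows where the fairness problem comes from: two students with identical individual records can receive different final grades purely because of the schools they attended.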

The result? Emotional despair for many students, who saw their grades downgraded by as much as two points and their conditional offers from colleges and universities withdrawn.

In the UK, where the entire cohort of high-school seniors was affected, the unfairness of the situation was perhaps more obvious and, in any case, immediately visible. Almost 40% of A-level grades were marked down from the teachers’ initial predictions, which were based on individual performance in class.

In addition, the algorithm meant that students from public schools and lower socioeconomic backgrounds were the most likely to have their results downgraded on the basis of historical data, while those from private schools were more likely to keep their predicted high scores. This led to social unrest and protests after the results were published on August 13.

After initially resisting demands to cancel those results, authorities decided four days later to scrap the algorithm-based scores and count teacher-predicted grades as final.

In the IB case, which affects about 175,000 students around the world who participate in the program, including in the US, the final scores were announced in early July and, similarly, prompted an outcry from students. An online petition that had gathered more than 25,000 signatures as of this week calls for justice and states that ‘so far, what we’ve seen is a great deal of injustice; many students around the world received significantly lower final grades than what they were predicted’. This affected their conditional admissions to colleges. Just as in the ‘A-levels’ case, the algorithm appears to have taken into account schools’ historical data along with global data for individual subjects. ‘How is that a fair way to assign IB graduates their final grades?’, asks the petition.

Indeed, the fairness of this automated decision-making affecting individuals is at the core of an investigation that the Data Protection Authority (DPA) of Norway opened following media reports about the case. In the preliminary conclusions of its investigation, the DPA considered that ‘it is unfair to base grades on how other students at the same school had performed previously’ and added that ‘all students are different’. Processing personal data fairly is a core obligation under the EU’s General Data Protection Regulation, and the DPA found it to have been breached in this case. The DPA announced that it intends to order the rectification of the grades, but it first gave the IB organization the chance to respond to its initial findings.

What happened with the A-levels and the IB grades is merely a microcosm of the automated decision-making, much of it algorithmic, that is pervading modern society. Credit scores, insurance rates, individual risk assessments for policing purposes, medical treatments: all of these increasingly rely on similar automated decision-making architectures. Those architectures draw on the personal data of the individuals concerned, sourced from a myriad of more or less accurate sources, as well as on the personal data of other individuals who resemble the person affected, with all of it fed into opaque algorithms or other automated decision-making processes.

And all of this is on the verge of growing exponentially, in both use and risk to a panoply of rights, as a consequence of the development of Machine Learning and AI. It is the collective raw emotion of recent high-school graduates, seeing their individual efforts vanish and their future put in peril, that seems to have finally shown The Algorithm bare naked before the eyes of the general public.

The way data about an individual is collected, accessed, organized and then re-used to make decisions about that individual or others, to place them in society, assign them a score, screen their job applications or decide their final grades, should be governed by comprehensive rules underpinned by the need to ensure fairness, equity and justice.

This does not have to do with privacy understood as intimacy or confidentiality, which remains as valuable as ever. This has to do with safeguards related to how information about individuals, even if that information is not at all sensitive or confidential, feeds databases and automated processing and decision-making.

This differentiation is at the core of the General Data Protection Regulation in the EU and justifies the introduction of data protection as a distinct, self-standing fundamental right in the EU Charter of Fundamental Rights. It is perhaps time for the US, too, to recognize personal data protection as a goal distinct from privacy in its classical sense, in order to ensure the fair collection and use of information related to individuals.

Gabriela is Senior Counsel for the Future of Privacy Forum and former legal officer for the European Data Protection Supervisor. PhD in data protection law.