Don’t Blame It On The Algorithm!
What the A Level algorithm can teach us about fairness
Dr Mhairi Aitken, Senior Research Associate, Newcastle University
It’s the end of August, a month which I would normally spend in a constant state of giddy excitement and nerves as I prepare and promote my Edinburgh Fringe show. Last year my show was titled “Blame It On The Algorithm!”. Sadly, there is no Edinburgh Fringe this year, but with the recent public outcry over the English A Level results, the show feels more relevant than ever.
Blame It On The Algorithm was a provocative discussion of the role that algorithms — and Artificial Intelligence (AI) — are increasingly playing in all our lives and the impacts they are having on society. The show was part of the Cabaret of Dangerous Ideas, but in the current context, the title suddenly doesn’t feel dangerous enough; perhaps at next year’s Fringe I’ll borrow from the A Level students’ placards and instead go with “F*ck The Algorithm!”
But, of course, the problem with either of these titles is that it isn’t really the algorithm we should be blaming. Whether it’s downgrading hard-working students’ exam results, telling police to focus their attention on areas with predominantly black communities, or denying someone’s benefits claim, the algorithm is just doing its job. The algorithm is a fairly diligent and conscientious worker, but unfortunately it is all too often put to work in politicised, prejudiced, or even corrupt systems. The algorithm doesn’t have a conscience, a moral compass or reasoning. When it recommends that a person be denied a visa or stopped by the police, it is simply following orders, and in doing so it reproduces the biases and prejudices of those who programmed it.
Much of the discussion in the days following the A Level results centred on the subject of fairness. The pursuit of fairness was given as the rationale for using the algorithm in the first place, but it was also the main concern about its outcomes. Ofqual had been concerned that basing grades on teachers’ predictions would lead to inflated results, which would be unfair to students in previous and subsequent years. However, the result was that 40% of students had their marks downgraded, in many cases substantially. It was all the worse because the students whose marks were downgraded were most often from more disadvantaged or less privileged schools, while private school students typically emerged unscathed. There really is no way of looking at that outcome as anything other than grossly unfair.
It was this clear injustice that drove the public response, and in particular the impassioned and articulate protests of students across the country.
There is a lot to be learnt from this by people working on developing algorithms, particularly those interested in ethical algorithms or ethical AI. Those whose only awareness of AI and algorithms comes through the steady succession of news stories about unfair outcomes and misuses of personal data might be surprised to know that ethics has been a major focus of attention for people working in this area over the past few years. In particular, there is no shortage of research teams and organisations working on understanding and pursuing fairness in the ways that algorithms are developed and deployed.
One of the big problems in this area — and one which has held things back — is the tendency to focus on abstract principles or technical terms rather than real world experiences. If you ask a computer scientist what fairness means, they may well tell you that it is difficult to say because there are more than 20 mathematical definitions of fairness to choose from. Of course, if you were to ask a philosopher, they would most likely have several hundred more definitions to put forward. The same is true of many other terms that are bandied about in this field: transparency; accountability; autonomy… And therein lies the problem: we can spend years — and millions of pounds — debating and refining key terms without ever improving the way things work and without ever making people’s lives better.
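To give a flavour of what those competing mathematical definitions look like, here is a small sketch in Python. It is entirely my own toy example, with invented numbers and invented names, and bears no resemblance to the actual Ofqual model. It compares two widely discussed definitions, demographic parity and equal opportunity, on a tiny made-up dataset; the point is simply that the two measures can disagree about whether the very same set of decisions is “fair”.

# Toy illustration: two common mathematical definitions of "fairness"
# can disagree about the same set of (entirely made-up) grading decisions.
# group:    0 = less privileged school, 1 = more privileged school
# awarded:  1 = given the higher grade, 0 = not
# deserved: 1 = teacher judged the student merited the higher grade

records = [
    # (group, awarded, deserved)
    (0, 1, 1), (0, 1, 1), (0, 0, 1), (0, 0, 0),
    (1, 1, 1), (1, 1, 1), (1, 0, 0), (1, 0, 0),
]

def selection_rate(group):
    # Share of students in the group who were given the higher grade.
    rows = [r for r in records if r[0] == group]
    return sum(r[1] for r in rows) / len(rows)

def true_positive_rate(group):
    # Share of deserving students in the group given the higher grade.
    rows = [r for r in records if r[0] == group and r[2] == 1]
    return sum(r[1] for r in rows) / len(rows)

# Demographic parity: do the two groups receive the higher grade at equal rates?
print("demographic parity gap:", abs(selection_rate(0) - selection_rate(1)))          # 0.0

# Equal opportunity: among deserving students, are those rates equal?
print("equal opportunity gap:", abs(true_positive_rate(0) - true_positive_rate(1)))   # ~0.33

In this toy example the two groups are awarded the higher grade at exactly the same rate, so by one definition the outcome is perfectly fair, while by the other the less privileged group is still being short-changed. Multiply that by twenty-plus definitions and it is easy to see how we can get lost in the mathematics and lose sight of the students.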
But while “experts” agonise over defining and refining principles for ethical practice, or develop ever-more complex systems to identify and quantify bias and/or (un)fairness, most people rely on their real world experience and instincts to know that some things — such as the algorithmically determined A Level results — are just patently, unquestionably unfair.
That is the kind of real-world experience and insight which is needed to shape future approaches to the ethical use of algorithms and AI. Had Ofqual spoken to students, teachers and parents in the early stages of developing its algorithm, it might have been able to anticipate and address some of these problems much earlier. It might also have found better ways of doing things.
An ethical approach to algorithms should not be construed as being all about avoiding harm or negative impacts. It is also, very importantly, about identifying ways of maximising value and benefits across society. Doing this requires taking account of the views, interests and experiences of all groups across society — not just the privileged and all too often homogeneous groups developing algorithms and the policies around them. It requires professionals working in this area to have the humility to admit that they don’t have all the answers and that they could really learn something from listening to new ideas, preferably ideas which come from as far outside their own professional community as possible.
Of course, the common response to this is that algorithms and data science are too complicated or technical for members of the public to understand. That is a convenient excuse, often touted to keep scientific and policy processes running as business as usual. From six years of speaking about AI and data science at the Edinburgh Fringe, I can tell you that people get it. It’s not that hard. You don’t need to be able to write code or decipher the inner workings of an algorithm to understand what it does or the impacts that algorithms are having on our lives and on our society.
There is one further level of unfairness which has received comparatively little attention but which shows very clearly the priorities and interests of UK media and institutions. The A Level results were released on the 13th of August, followed by four days of protest and controversy before the government performed a U-turn and reverted to teachers’ predicted grades. Meanwhile, students taking vocational qualifications, such as BTECs, who had expected their results on the same day, were left waiting, and are still waiting as I write this more than a week later. There has been no furore, no outpouring of concern for distressed students left in the dark without information or updates on what is happening. Silence. Students who worked hard and took unconventional — in many cases challenging — routes through education are losing out on hard-earned university places, which are being taken up by A Level students who have already had their marks awarded (twice).
It’s a familiar story. The controversies around data and algorithms often focus on particular details (e.g. how biased the data was) and obscure the wider picture, which can tell us a lot more about whose interests are being prioritised and whose are not: why do we prioritise A Level students over BTEC students? Why is it considered acceptable to use predictive policing algorithms to detect street crime but not white collar crime?
The list of possible examples I could draw on is growing all the time. We really didn’t need another case study of the thoughtless deployment of algorithmic decision-making, but nevertheless there is a lot we can learn from this one. We need to learn not to blame it on the algorithm, but to look deeper at how — and why — algorithms are being put to work. And we need to do much more to ensure that wider public voices and experiences are involved and reflected in all future development, deployment and evaluation of algorithms, so that alongside mathematical modelling and statistical analysis we have common sense and lived experience. We need people from the widest possible range of perspectives to be able to say “that’s not fair” before those algorithms are out in the real world doing real damage.
So, when the Edinburgh Fringe returns next year I’ll need a new name for my show. Blaming it on the algorithm only makes the algorithm a convenient and distracting scapegoat for much wider, more systemic problems. What we need is a much broader discussion of the systems in which algorithms are being put to work and the ways in which they are reinforcing — rather than challenging — existing inequalities. That broader discussion needs a wider set of public conversations… this is just the start!
Dr Mhairi Aitken,
Find me on Twitter at: @Mhairi_aitken
Or YouTube: https://www.youtube.com/channel/UCMFHHhrvIaHsASyztIH2zjA