Grasping the ethics and politics of algorithms
Algorithms are used to choose the ads we see online, but also to determine who gets insurance, who can board planes, who gets parole, or even who is killed by a drone. This already broad scope of use cases will widen in the coming years, from algorithmic diagnosis of diseases to self-driving cars. The growing influence that algorithms and data-driven decisions have on our lives has prompted a wide spectrum of critical voices. Most of them conclude with a call for more transparency and accountability. But this is at best a start for a political and ethical grasp on these technologies, and may sometimes even be counterproductive. Here is why:
1. Opening black boxes is not enough
Most algorithms are inaccessible to those who use them and to those affected by the results of algorithmic evaluations. Often the image of a black box is invoked to describe this situation. This image comes with the hope that if we opened the black box, we could see what the algorithm does and check whether it does what it is supposed to do, or whether it does something wrong, e.g. whether it is biased.
But access to the algorithm can at best uncover deliberate tampering with the results, as in the VW emissions scandal. For most data-driven applications, however, it is very difficult to predict how the algorithm will react without knowing the data. Furthermore, sophisticated technologies in artificial intelligence and related fields do not just learn once how to classify data and then apply these fixed categories. They constantly evolve. Thus, checks done in advance, before an algorithm is put to use, cannot rule out future failures. In these cases, a lot hinges on the data. Even if we could ascertain the neutrality of an algorithm (we can't, see nos. 2 and 4), it would still produce biased results on biased data.
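A toy sketch can make this concrete. All names and numbers below are invented for illustration: a perfectly "neutral" learning rule that just predicts the majority outcome seen for each group will faithfully reproduce whatever bias the historical outcomes already contain.

```python
from collections import Counter

# Hypothetical historical loan decisions as (group, approved) pairs.
# The labels themselves are biased: group "b" was approved far less often.
history = [("a", True)] * 80 + [("a", False)] * 20 \
        + [("b", True)] * 30 + [("b", False)] * 70

def train_majority_rule(data):
    """A 'neutral' learning rule: predict, for each group, the
    majority outcome observed in the training data."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train_majority_rule(history)
print(model)  # {'a': True, 'b': False} -- the bias is reproduced exactly
```

Nothing in the learning rule refers to the groups in any prejudiced way; the bias enters entirely through the data it is given.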
Some methods, like the neural networks responsible for many of the impressive recent successes of “deep learning”, are notoriously difficult to scrutinize. We cannot ascertain what they have learned other than by watching how they react to data. And we can never be sure that they will not go completely wrong on the next set of data they encounter.
This means that most checks are only possible in hindsight: we need to watch the system perform, and we need a good base of data to compare the results with. If all these conditions are met, we can uncover inherent problems, as in ProPublica's recent exposure of biased parole algorithms.
Accountability must include defining who will be responsible for the failures and problems which no scrutiny can rule out in advance.
2. Some things cannot be done right
Demanding accountability, transparency, or scrutiny implies that we know a right way of doing things, that we have categories that tell us what is wrong or right. For example, asking for scrutiny of a biased algorithm implies that one could build an unbiased one. But often that is not the case. Many algorithmic systems are meant to predict the future behavior of people: who will buy this product (and thus should see the respective ads), who will get seriously ill (and thus will be an expensive client for an insurer), who will commit a terrorist attack (and thus should be forbidden to board a plane or enter the country). This means: we use a person's features or behavior to predict something the person is not (yet) or has not (yet) done. It assumes that there are reliable signs that a person will commit a crime or buy a product, other than actually committing that crime or buying that product. This follows a simple logic: people who are X do Y. That is the same structure bias has: people who are X steal, for example. The X is just mathematically more sophisticated in the case of algorithms. But still, in this sense:
Prediction is operationalized bias. It will always overgeneralize; that is, there will always be people who fit the criterion X but are not what we are looking for. No process of scrutiny can avoid that. We have to ask ourselves whether we should use such technologies at all if the consequences are as dire as being denied boarding, refused entry to a country, or even becoming the victim of a drone strike.
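How severe this overgeneralization gets can be shown with simple arithmetic. All numbers below are invented for illustration: even an implausibly accurate screening system, applied to a population where the target class is very rare, flags mostly people who fit the criterion but are not what we look for.

```python
# All numbers are hypothetical, chosen only to illustrate the base-rate effect.
population = 1_000_000
attackers = 10                # the target class is extremely rare
true_positive_rate = 0.99     # an implausibly accurate system
false_positive_rate = 0.01    # flags only 1% of innocent people

flagged_guilty = attackers * true_positive_rate
flagged_innocent = (population - attackers) * false_positive_rate

# Share of flagged people who actually belong to the target class:
precision = flagged_guilty / (flagged_guilty + flagged_innocent)
print(flagged_innocent, precision)
# roughly 10,000 innocents flagged; well over 99.9% of all flags are wrong
```

Tightening the threshold reduces the false flags but also misses more of the rare true cases; no amount of scrutiny makes the trade-off disappear.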
Some readers will now object: sure, ok, but that’s not what happens. Algorithms just estimate risk factors. And then someone else uses them to decide with better information at hand. They are but one element of the decision. This leads to the next problem:
Risk does not work on individuals. Imagine a border control officer operating a sophisticated, Big Data-driven system. Upon scanning the passport of a citizen returning from abroad, the officer sees a warning flashing on the display in red letters: 80% likelihood of terror. What does that mean? What is the officer supposed to do? Deny a legal citizen entry into the country? Make an arrest? Order further checks that infringe on fundamental rights like privacy and freedom of movement? Against a person who has done nothing wrong in his or her entire life, but maybe will?
This illustrates a fundamental problem of risk assessment: nobody is an 80% terrorist. 80% terrorists do not exist. There are just a few people who plan a terrorist attack and many, many more who don't. Risk is an invention of trade and insurance. If I sell a million products and estimate that 10% of them will fail, I can use that estimate to calculate how much profit I have to make on the other 90% to pay for the damage caused by the failures. Risk estimates make sense for a population large enough that it does not matter whether one is right about any single instance. It does not matter whether the failure of a particular unit is predicted correctly, as long as the overall estimate holds. Risk estimates make sense if one can weigh the part of the population that is risky against the part that is not. This is what actuarial thinking is about. Neither condition holds for individuals like the citizen at the border.
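The actuarial arithmetic in the product example can be spelled out in a few lines; every number here is made up for illustration. The calculation is meaningful for the million units taken together, yet says nothing about whether any particular unit will fail.

```python
# Illustrative actuarial arithmetic; all figures are hypothetical.
units_sold = 1_000_000
failure_rate = 0.10           # estimate: 10% of units will fail
replacement_cost = 40.0       # cost of covering one failed unit

expected_failures = units_sold * failure_rate          # about 100,000 units
expected_damage = expected_failures * replacement_cost

# Spread the expected damage over the units that do not fail:
surviving_units = units_sold - expected_failures
margin_per_unit = expected_damage / surviving_units
print(round(margin_per_unit, 2))  # 4.44 extra per surviving unit covers the losses
```

The 10% figure only pays off over the whole population; applied to a single unit, or a single person at a border, it yields no actionable statement at all.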
Risk estimates for individuals suggest a numerical objectivity which does not have much practical meaning.
3. Discrimination and bias are no accident
The experience of being judged, assessed, sorted, and controlled is not the same for all. White middle- and upper-class persons living in Western countries experience such moments much more rarely than their fellow citizens of color, persons with a citizenship different from the country they live in, sans papiers, refugees, the poor, and many more who fall on the weaker sides of the many intersecting axes of discrimination that structure our societies. Those who have to undergo such scrutiny rather infrequently, and usually without problematic consequences, can afford the view that a wrong result or a faulty categorization might be an accident. But the fear of being “accidentally” placed on a no-fly list is quite different from the factual exclusion of those who are migrants or refugees and fall under the patterns used to assess terrorists. The fear of being “accidentally” assigned a bad credit score is quite different from the structural discrimination against those who cannot even open a bank account. If it were only a problem of such “accidents,” then transparency and scrutiny of algorithms might help. But such processes will never be able to bring into view the structural discrimination sustained by the exclusionary mode of operation of border controls, air traffic policing, credit scoring, healthcare benefits, and many other institutions.
To the contrary, the impression that these processes could be ameliorated by transparently controlled, accountable algorithms might increase the legitimacy of those unjust practices.
4. Human experts are not necessarily better
If we want transparency, accountability, and scrutiny, we need people to do that work. But the entire idea of using algorithms, especially in the context of security and the legal system, e.g. when evaluating CCTV footage or deciding on parole, was motivated by many cases in which humans have been biased. There is an entire corpus of results from the social sciences showing how institutions like the DHS or Frontex, but also big banks and the insurance industry, have created their own logics, with their own necessities, dependencies, and aims, which are often detrimental to the greater good of society. Unfortunately, that is true of many supervisory bodies as well. Yet it is clear that only a body of experts could do the work: transparency cannot be the burden of individuals, since it needs expertise, time, and structural independence. The security sector in particular has a horrible tradition of evading supervisory bodies (lying to parliaments and courts, secret treaties, illegal data collection) or of installing rubber-stamping authorities like the FISA court. This does not mean that supervision cannot work. But it means we have to take into account that humans will not necessarily be less biased than algorithms. That is why we have developed several important mechanisms that do not aim at neutrality but rather acknowledge that such neutrality is unattainable and try to mitigate bias instead: diversity rules for juries and boards, blind reviews and applications, etc.
Mechanisms to mitigate bias are not perfect, but they are better than striving for an ideal of neutrality that in reality just conceals the bias at play.
5. The algorithm is just one part of the puzzle
Why is Google so successful? Because it has the best page rank algorithm? Or maybe because its clean and simple approach to interfaces rendered the internet usable for millions of new users who did not care to learn cryptic commands to type into black terminals? Or because Samsung managed to build its hugely successful smartphones on top of Google's Android? Or was that success due more to advertising agencies than to engineers?
Why do we tend to believe that trending topics on Twitter or the stories in our Facebook newsfeed are important? Because they are selected by a special algorithm? Or because they appear to be related to what our friends read and post? Or just because they sit right where we start our daily use of social media, and we seldom find the time or motivation to look anywhere else?
Whatever an algorithm “does”, its effects on the world cannot be derived from what the algorithm itself “does”. They depend on the social setting, on user interfaces, on established practices, on highly emotional or affective decisions, or rather reactions (often carefully crafted and rehearsed with us by the best advertising agencies in the world). Opening black boxes and scrutinizing algorithms is part of disentangling these complicated relations. But it must not entice us to think that the algorithm, or the source code, is the decisive element that gives us a lever on the important ethical and political problems. On the contrary, what we can learn from profound scrutiny of technical systems is:
There is rarely a technical solution to social and political problems. Technology might be helpful in many ways, but only if its interrelations with social and material circumstances are part of the picture.
6. Superhuman Artificial Intelligence is not the main issue
With recent advances in artificial intelligence research, prominent figures have issued warnings that we must take precautions now, lest one day a superhuman artificial intelligence turn against its creators. But that is not the problem here. Many reputable artificial intelligence researchers concede that it may never become a problem, since an intelligence that comes even vaguely close to the human kind is not on any realistic horizon. But even if it were, discussions of superintelligent artificial intelligences easily distract from the real problems we already have.
The main issue is that we already have all kinds of systems coming out of artificial intelligence research that can at best solve a highly specialized task and have nothing to do with intelligence. Yet we trust these “dumb” systems to judge human beings, trade stocks, and drive cars, among many other things. That is because in the cases where such systems are applied, something humanlike is actually not desired.
We use such systems not in spite of their limited “intelligence” but for that very reason. We use them because they are seemingly different from humans: they solve the task they are made for, but they do not complain about working conditions, ask for a raise, get tired, have moody days, or bring their prejudices to work. This difference between humans and machines is deeply ingrained in our world-view. Humans are seen as subjective, emotional, embodied beings, and often as prejudiced. Machines, by contrast, appear neutral, rational, objective, functioning 24/7. This dichotomy is wrong on both sides. A lot of serious research has shown how much emotions, affects, and social structures are an inherent part of what we commonly call the rational faculties of human beings. On the other hand, machines are constructed by humans, with a certain social setting in mind (see no. 5), and they lend numbers a significance that often is not warranted (see no. 2).
Rather than casting the machine as the effective, and ultimately fearsome, opposite of the human, we should enquire how this difference comes into being in the first place and what it does in our societies.
Opening black boxes is not enough, because algorithmic scrutiny is difficult or even impossible to achieve; even if we could achieve it, we could not get it right; and human experts are no better an alternative. This means that we have to drop the hope that transparency and accountability will make algorithmic systems the objective or neutral tools we always wanted them to be. We have to make the inherent problems of such systems explicit and either consciously accept and mitigate them, or give up algorithmic decision making in areas where the consequences are just too far-reaching.
It is essential that we work with the overall technical and social picture (and here opening black boxes and transparency have their place). And we should focus on current developments in algorithms and the problems they already present to our societies, rather than being distracted by superhuman artificial intelligences.
Thanks to Michael Nagenborg for pointing out the relevance of this issue.