Six principles for AI ethics

Even as “ethics in AI” gains space on the AI agenda, with thought piece after thought piece on the existential threats posed by superintelligent machines, there are still some glaring gaps in the mainstream conversation about what the end game of ethical AI might actually be. To that end, here are six principles technologists should adopt and incorporate into their practice.

Hanna Naima McCloskey
Fearless Futures
8 min read · Jun 15, 2018


ONE. Any ethics worth encoding into our technology must be about prioritising justice. Not all ethics are anti-oppression or justice-oriented. It’s important we get that cleared up.

The image below is extremely useful for understanding the relationship between justice, equity and equality in terms of the outcomes of actions we may take in the world.

Source. Also, we would argue that the “Justice” image would be more accurate if the metal railings weren’t there, so that the person on the right could have better access. Perhaps a perspex wall instead?

The premise of the image above is that the advantages and benefits of society are not distributed or afforded to all people equally (by definition, because to receive a benefit means someone or some group is disbenefited, and to have an advantage means someone or some group is disadvantaged).

Therefore, if we want to harness AI to elevate the best of humanity, we need to enable teams to deploy an analytical framework rooted in analysis and action that solves for equity (as an intermediary step), belonging, justice and inclusion in their problem-solving as they iterate towards a final product. We at Fearless Futures call this Design for Inclusion. This analytical framework must include:

a) a strong and robust understanding of interconnected oppressions;

b) how they manifest and operate as historical processes and how they are presently lived;

c) understanding of, and accountability for, our own privileged positions as designers in relation to these histories and presents, and how those positions show up as a potential blocker to our end goal unless designed out.

Part of this analytical framework will also be about understanding that who is in the team (and who isn’t) and who is heard (and who isn’t) are themselves inputs into what we are programming.

TWO. History is important. When we are creating technology that generates the future, trained on information from the past, we need to know our histories of oppression. And if we don’t, we need our analytical framework to assess what we should be centring. Predictive policing technologies, highly popular in the USA (and gaining traction here in the UK), have been heavily critiqued for reproducing and amplifying racism. Police brutality is a tragically normalised feature of existing as a Person of Colour in the USA and in the UK. However, this present reality is part of a continuum of practice. For example, in the USA, policing started in the South as a “Slave Patrol”. As this rich article details: “The first formal slave patrol was created in the Carolina colonies in 1704 (Reichel 1992). Slave patrols had three primary functions: (1) to chase down, apprehend, and return to their owners, runaway slaves; (2) to provide a form of organized terror to deter slave revolts; and, (3) to maintain a form of discipline for slave-workers who were subject to summary justice, outside of the law, if they violated any plantation rules. Following the Civil War, these vigilante-style organizations evolved in modern Southern police departments primarily as a means of controlling freed slaves who were now laborers working in an agricultural caste system, and enforcing ‘Jim Crow’ segregation laws, designed to deny freed slaves equal rights and access to the political system”.

In the UK, in a parallel context, we may turn to the “sus laws”, the informal name given to stop and search policy, rooted in the 1824 Vagrancy Act, specifically section 4. It gave the police powers to search anyone suspected (hence “sus”) of having an intent to do something criminal. In such a world, standing still in public space could be (and is?) justification for people of colour being stopped and searched by the police. In a world in which Black people are constructed through the dominant, white lens as dangerous, with the resources of the state used to curb their being, we might rather focus our efforts on ending the material impact and danger these policies have on their lives. Despite the policy formally ending thanks to the efforts of the Scrap Sus campaign led by Mavis Best, Black people are at least 8x more likely than white people to be searched for drugs in England and Wales, though drugs are less likely to be found on them. What does this mean for the training data our “predictive” machines might be trained on?
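To make that training-data question concrete, here is a deliberately simplified, hypothetical sketch in Python. Every number in it is invented except the 8x search disparity cited above, and it ignores the many other ways enforcement patterns shape data. The narrow point it illustrates is that recorded offences can skew heavily towards the more-searched group even when the underlying behaviour is identical, and that skew is precisely what a “predictive” system trained on such records would learn.

```python
# Hypothetical illustration only: recorded offences reflect who gets searched,
# not who offends. Every number except the 8x search disparity is invented.

population = {"group_a": 1_000_000, "group_b": 1_000_000}

# Assume, for the sake of the illustration, identical rates of carrying drugs.
carry_rate = {"group_a": 0.03, "group_b": 0.03}

# Group A is searched 8x more often than Group B (cf. the disparity above).
search_rate = {"group_a": 0.08, "group_b": 0.01}

recorded_offences = {}
for group in population:
    searched = population[group] * search_rate[group]
    # A recorded offence requires both being searched and carrying.
    recorded_offences[group] = round(searched * carry_rate[group])

print(recorded_offences)
# {'group_a': 2400, 'group_b': 300}: the dataset says Group A "offends" 8x more,
# even though behaviour is identical. A model trained on these labels learns
# the policing pattern, not the underlying behaviour, and then recommends
# more policing of Group A, generating yet more skewed data.
```

Any real analysis would of course be far richer than this; the sketch only shows why the question “what is this data actually a record of?” has to come before any talk of accuracy.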

A historical lens provides the framework for analysing what questions we need to ask of the data in order to programme such predictive technology for equity, rather than for a continuation of a brutal past and status quo into the future.

When we come to emerging technology with this historical lens, we might ask: who specifically was the system invested in policing, and who was it not? Indeed, as Cathy O’Neil remarks in her book “Weapons of Math Destruction”, predictive policing is notably not generating conclusions for police departments that have them allocating their resources outside banks in the City of London or on Wall Street. The people engaging in acts that bring down whole economies are not those the machines consider in need of policing. The algorithm has not been set to curb their actions. What we even classify as a crime is central to predictive technologies such as this. Furthermore, we may also wish to ask whether our existing criminal justice system is itself worthy of being “optimised”, or whether we might wish to explore alternative paradigms, outside of prison, altogether.

THREE. Google AI’s principles came out the other day, stating that Google won’t engage in: “Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints” (my italics).

While this might sound positive, it leaves much to be desired. For starters: what is harm? Who decides? The very problem at the heart of injustice is that those who are subjected to power structures are not believed, are not heard, are dismissed and experience violence from those who benefit from the very same power structures. I am calling this the paradox of power (and privilege). Which is to say that the very people who have the power to take action to end injustice (many of whom are those programming our machines) are precisely those who, because of their positionality, are:

  • oblivious to the other side of their experience
  • invested in the status quo because it serves them
  • trained, implicitly and explicitly, to preserve their position

And so have been, in a way, programmed to use their power to maintain it.

One approach to limiting harm (whatever it may be) is to outline the “risks” or probabilities of harm from a particular tool. This is an example of a document doing so for a sentencing tool being trialled in Pennsylvania. However, when “risks” of harm are raised, we know that one tool for perpetuating oppression is to either justify or excuse them, which leaves the harmful impact in place. To counter this pattern of behaviour, we require those with structural power to be invested in generating outcomes of equity and justice.

One way to build in a mechanism to this end would be to reframe the burden of proof so that those designing these technologies have to prove that they do not perpetuate oppression on any communities before they can move forward with their product.
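As one sketch of what such a reframed burden of proof could look like in practice, consider a release gate that blocks deployment unless the designers can show that harmful outcomes do not fall disproportionately on any community. Everything in the sketch is an assumption for illustration: the choice of false positive rate as the harm metric, the group labels, the 1.1 tolerance, and of course the idea that any single numeric check could stand in for the structural analysis described in point ONE.

```python
# Minimal, illustrative release gate: deployment is blocked unless designers
# can show that harmful outcomes do not fall disproportionately on any group.
# The harm metric (false positive rate), the groups and the threshold are
# placeholder assumptions for this sketch, not a definition of "proof".

from typing import Dict, List, Tuple

def false_positive_rate(outcomes: List[Tuple[bool, bool]]) -> float:
    """outcomes: (flagged_as_risk, actually_harmful) pairs from an audit."""
    false_positives = sum(1 for flagged, actual in outcomes if flagged and not actual)
    negatives = sum(1 for _, actual in outcomes if not actual)
    return false_positives / negatives if negatives else 0.0

def may_deploy(outcomes_by_group: Dict[str, List[Tuple[bool, bool]]],
               max_ratio: float = 1.1) -> bool:
    """Burden of proof sits with the designers: release is blocked unless the
    worst-off group's false positive rate is within max_ratio of the best-off."""
    rates = [false_positive_rate(o) for o in outcomes_by_group.values()]
    worst, best = max(rates), min(rates)
    if best == 0.0:
        return worst == 0.0
    return worst / best <= max_ratio

# Hypothetical audit data: (flagged_as_risk, actually_harmful) pairs per group.
audit = {
    "group_a": [(True, False)] * 30 + [(False, False)] * 70,
    "group_b": [(True, False)] * 5 + [(False, False)] * 95,
}
print(may_deploy(audit))  # False: wrongful flags fall disproportionately on group_a
```

What matters most in the sketch is the direction of the default: the answer is “no deploy” until the builders demonstrate otherwise, rather than “deploy” until someone harmed manages to prove the damage.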

After all, without this re-framing, we know that what is a ‘risk’ of inaccuracy for one party is a material outcome that could be the difference between life and death for the other. And disappointingly, such risks are often taken, especially in pursuit of profit.

FOUR. We simply must insist on using the language of oppression rather than bias when it comes to AI and emerging tech. The short version: without a framework for analysing structural power, we are missing the point and going nowhere. The longer and more nuanced version: read my colleague Sara’s article here. For more on these concepts, Dr. Safiya Noble’s “Algorithms of Oppression” is terrific.

FIVE. We need as much focus on how we teach technologists as on what we are teaching them to do when it comes to designing inclusion into AI. What makes for sustainable and transformative education?

What knowledge do we need to transform unequal power relations in our technology?

If we focus on knowledge alone, with no attention to how people do deep learning about the nuanced and complex material at the centre of anti-oppression and social relations, it will likely result in the soil in which technology companies grow being unaltered. This is because how, what, why and who we decide to prioritise in our technologies is also a function of the ways we are with one another in the process of designing. Are we optimising in these contexts for a subversion of power dynamics amongst ourselves, and centring those whose voices are marginalised, to discover different knowledge?

If we return to the Paradox of Power from earlier, we might wish to conceive of ourselves as having been trained on data throughout our lives that preserve the structural power we have. The question for those at the forefront of emerging technology is: are you up for unlearning so that your technologies may work for justice? Are you up for disrupting that status quo? This may feel like the fluffy stuff within our hypermasculinised technology environments, but it is extremely hard. And extremely necessary. In short, the vicious cycle will perpetuate unless we are able to generate new cultures of being and doing within technology.

SIX. There is a loud chorus getting behind transparency as a solution to the absence of ethics in AI. The logic is that if we can see what’s going on, bad stuff won’t happen. We aren’t so sure. Transparency will only get us so far; it is not a panacea. After all, even when alarming truths are exposed, we know that power responds by acting to preserve itself. A simple example might be the gender pay gap reporting recently instituted for companies with over 250 employees in the UK. Yes, the information is out there, but we saw pretty quickly how the status quo was defended by companies (often by people simply offering an explanation of the gender pay gap, bizarrely).

Google employees’ battle with management over the company’s pursuit of Project Maven (a programme that uses machine learning to improve targeting for strikes), a pursuit now successfully terminated due to employee action, is also a good example of this. This article here details the long battle between tech workers and management. Tech workers had to build power to challenge management. Some resigned in protest. Knowledge of the project did not in and of itself deliver change. In fact, management actively contested their employees’ resistance to Project Maven. Justifications and excuses were made to maintain the contract. Unless we also accept that growing the capability to organise and build power, within and outside of tech firms, is essential alongside transparency, transparency alone may just be another tool of distraction deployed by those with power. What seems much more powerful is for principles of justice, as explored in points 1–5, to be encoded into organisations’ fabrics in the first place.

Conclusion: these are six principles that we believe technologists in AI must centre. This starts with answering yes to this question: when you are designing the future, can you do everything to stop oppressive histories and presents from repeating themselves, imagine a just and humane world, and then design your team cultures and products for that instead?

With heartfelt thanks to @natalieisonline and Sara Shahvisi, our Director of Programmes, for their brilliant reflections and feedback that better informed the piece.
