The case for fairer algorithms

DeepMind Ethics & Society
6 min read · Mar 14, 2018


by Iason Gabriel, Research Scientist

Fresh evidence that algorithmic decisions are often deeply affected by bias raises profound questions for technologists and society alike. At DeepMind we’re committed to addressing these matters head-on, building inclusive technology that works for all.


The right to be treated fairly is a bedrock of democratic society. But a major barrier to achieving this goal stems from human fallibility — from our susceptibility to prejudice and bias. One recent study found that judges are somewhere between two and six times more likely to grant parole if they hear a case early in the day rather than at the end. Evidence of discrimination in the job market is also widespread. In the United States, applicants with white-sounding names received 36 percent more call-backs than applicants with African-American-sounding names.

This is an area where researchers believe algorithmic decision-making could have significant potential. Properly calibrated, algorithms could help humans make more informed choices, processing larger amounts of information and identifying bias when it occurs. Yet recent evidence suggests that, far from making things better, software used to make decisions and allocate opportunities has often tended to mirror the biases of its creators, extending discrimination into new domains.

Job search tools have been shown to offer higher-paid jobs to men, a programme used for parole decisions mistakenly labelled black defendants as ‘high risk’ at a higher rate than defendants of other races, and image recognition software has been shown to work less well for minorities and disadvantaged groups. To bridge the gap between ideal and reality, a better understanding is needed of how bias enters algorithmic decisions, how it leads to unjust outcomes, and how it can be addressed — including occasions when algorithmic decision-making is simply not appropriate.

What is Algorithmic Bias?

The idea of ‘bias’ has been understood in several ways. In statistics, researchers consider a dataset or sample to be biased if it differs systematically from the population it aims to represent. In ethics, as in everyday language, a decision is commonly understood to be biased if it fails to treat people fairly. In both cases, bias involves insights that are partial or one-sided, which then lead people to make mistaken decisions.

With algorithms, bias arises for a number of reasons. To start with, the data used to train machine learning models is often incomplete or skewed. By underrepresenting or excluding certain socially marginalised groups or subgroups, this kind of ‘sampling error’ leads to poorly calibrated products that intensify rather than counter marginalisation.
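To make the idea of sampling error concrete, here is a minimal sketch (not DeepMind code; the data, groups and labels are entirely synthetic) of how under-representing one group in training data can leave a model performing close to chance for that group, even while it looks accurate overall.

```python
# Minimal illustrative sketch with synthetic data: when one group is barely
# present in the training set, the model effectively learns only the
# patterns of the over-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, relevant_feature):
    # Two features per example; the feature that actually drives the label
    # differs between the two (synthetic) groups.
    x = rng.normal(size=(n, 2))
    y = (x[:, relevant_feature] > 0).astype(int)
    return x, y

# Training data: group A heavily over-represented, group B barely present.
x_a, y_a = make_group(5000, relevant_feature=0)
x_b, y_b = make_group(100, relevant_feature=1)
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.hstack([y_a, y_b]))

# Evaluation data: both groups equally represented.
x_a_test, y_a_test = make_group(2000, relevant_feature=0)
x_b_test, y_b_test = make_group(2000, relevant_feature=1)
print("accuracy on group A:", round(model.score(x_a_test, y_a_test), 3))
print("accuracy on group B:", round(model.score(x_b_test, y_b_test), 3))
# High accuracy on the over-represented group, close to chance on the other.
```

Real-world sampling error is messier than this toy setup, but the basic failure mode — a model calibrated to the majority of its training data — is the same.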

Furthermore, even when data is not statistically biased, it frequently contains the imprint of historical and structural patterns of discrimination. This is true of language, which often contains prejudicial associations between certain words — for example between gender and job type — that algorithms learn and reproduce. These patterns represent a particular challenge when it comes to creating datasets that are both balanced and representative.
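One way such associations are surfaced in practice is with simple vector-similarity tests over word embeddings, in the spirit of the association tests in the research literature. The sketch below uses tiny hand-made vectors purely for illustration; real analyses use embeddings learned from large text corpora.

```python
# Illustrative sketch of an embedding association test: compare how strongly
# occupation words associate with 'he' versus 'she'. The 3-d vectors below
# are hypothetical stand-ins for learned word embeddings.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(word_vec, vec_he, vec_she):
    # Positive => closer to 'he'; negative => closer to 'she'.
    return cosine(word_vec, vec_he) - cosine(word_vec, vec_she)

vectors = {
    "he":       np.array([ 1.0, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.0]),
    "engineer": np.array([ 0.6, 0.8, 0.1]),
    "nurse":    np.array([-0.6, 0.8, 0.1]),
}

for word in ("engineer", "nurse"):
    score = gender_association(vectors[word], vectors["he"], vectors["she"])
    print(f"{word}: association score = {score:+.2f}")
```

A large gap between occupations on a score like this is one signal that an embedding has absorbed a prejudicial association from its training text.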

This problem also arises with data about social phenomena such as employment or the criminal justice system. Indeed, even if statistically unbiased and properly coded datasets can be created, they may still contain correlations between gender and pay, or race and incarceration, which stem from entrenched patterns of historical discrimination that most countries have sought to overcome.

Against this backdrop, it would be a serious mistake to think that technologists are not responsible for algorithmic bias or to conclude that technology itself is neutral. After all, even when bias does not originate with software developers, it is still repackaged and amplified by the creation of new products, leading to new opportunities for harm.

Extending Fair Treatment

One of the most problematic things about these cases is that algorithms tend to further penalise ‘protected groups’, compounding the disadvantage they already experience by further limiting access to jobs, education, credit, healthcare and equal treatment before the law. But, unfortunately, the solution is not simply to remove information about protected categories from the data.

From a technical point of view, we’ve found that even when explicit information about race, gender, age and socioeconomic status is withheld from models, part of the remaining data often continues to correlate with these categories, serving as a proxy for them. A person’s postal code, for instance, tends to reveal much about their protected characteristics. Directly removing information about protected attributes therefore does little to shield people from discrimination — and may even make things worse. Commenting on this problem, Silvia Chiappa, a research scientist here at DeepMind, observes that ‘information about group membership is often needed to disentangle complex patterns of causation and to protect people from indirect discrimination.’
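A rough sketch of why removal falls short: on synthetic data where a ‘postcode-like’ feature correlates with group membership, the group can be recovered with high accuracy even though it was never given to the model. Everything here — the data, the feature names — is hypothetical and for illustration only.

```python
# Illustrative sketch: dropping the protected attribute does not put it out
# of a model's reach when another feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical protected attribute; never passed to the model as a feature.
group = rng.integers(0, 2, size=n)

# A 'postcode-like' feature strongly correlated with group membership,
# plus an unrelated feature.
postcode = group + rng.normal(scale=0.3, size=n)
other = rng.normal(size=n)
features = np.column_stack([postcode, other])  # protected attribute excluded

x_train, x_test, g_train, g_test = train_test_split(features, group, random_state=0)
clf = LogisticRegression().fit(x_train, g_train)
print("group recovered from remaining features:", round(clf.score(x_test, g_test), 3))
# Accuracy far above chance: a downstream model trained on these features
# can still treat the two groups differently, just indirectly.
```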

There is also a moral argument that the present focus on protected categories doesn’t go far enough. A number of political theorists have suggested that it’s unfair to penalise anyone on the basis of characteristics they possess through no fault of their own. It would be wrong, according to this view, to make hiring decisions on the basis of a person’s gene sequence or other factors that are only now coming to light.

Finally, from a sociological point of view, pioneering theorists like Kimberlé Crenshaw have shown that patterns of discrimination intersect with each other, placing particular burdens on groups such as immigrants or single-parent families who conventionally fall outside the ‘protected category’ framework. To address the obstacles they face, it may again be necessary to think about algorithmic fairness in new, more inclusive, ways.

What can be done to address these challenges?

We believe that the following measures are needed:

  1. Be transparent about the limitations of datasets: technologists should be required to make upfront disclosures about the composition of datasets used to train software, alongside an evaluation of the biases they may contain. In some cases, the extent of bias in available datasets may mean that algorithmic decision-making is simply not appropriate.
  2. Conduct research and develop techniques to mitigate bias: We need standardised tools to identify algorithmic bias whenever it occurs (a simple illustrative metric is sketched after this list), as well as new techniques to counteract its effects. Rooting out bias requires us to be vigilant about the various ways it can creep into our work from the very start, and to develop new ways to combat it.
  3. Deploy responsibly: algorithms should only be deployed when they have been closely scrutinised for bias. Ensuring fairness and public benefit must be a top priority from the start. And those responsible for their development need to demonstrate that problems won’t arise, or that they possess the ability to address them if they do.
  4. Increase awareness: Universities and civil society are now rallying to the cause, bringing the problem of algorithmic bias to public attention and increasing accountability in the field. Since people don’t have direct knowledge of other people’s experience using products and services, they often can’t tell whether they’ve been subject to algorithmic bias. These organisations therefore perform a vital watchdog function, making it harder for technologists to trivialise the problem, and driving forward the search for innovative solutions.
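As one example of the kind of standardised measurement mentioned in point 2, the sketch below computes a demographic parity difference — one common and deliberately simple fairness metric: the gap in positive-decision rates between two groups. The decisions and group labels are hypothetical.

```python
# Illustrative sketch of a simple fairness metric on hypothetical data.
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between groups 0 and 1."""
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    rate_0 = decisions[groups == 0].mean()
    rate_1 = decisions[groups == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical loan decisions (1 = approved) for applicants from two groups.
approved = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
group    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(approved, group))  # 0.6 - 0.2 = 0.4
```

Demographic parity is only one of several competing definitions of fairness, and they cannot all be satisfied at once; which measure is appropriate depends on the context in which a system is deployed.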

At DeepMind we believe it is important to take responsibility for the technologies we create, which is why algorithmic fairness is a key ethical challenge that our new research unit aims to address. We recently published our first research paper on this topic. And we’ve funded external efforts like those undertaken by AI Now, sponsoring several new postdoctoral research positions through unrestricted grants under the supervision of Kate Crawford and Meredith Whittaker, leading researchers in this field.

This research will contribute to our understanding of the problem and potential solutions, but certain principles are already clear. We need new standards of public accountability that allow all corners of society to hold those developing and deploying algorithms responsible for their effects. And we need technologists to take responsibility for the impact of their work before it is deployed, not after. By working closely with civil society and the wider research community, we aim to develop technology that is socially embedded, accountable and, above all, fair, ensuring that it brings real benefit to people’s lives.

DeepMind Ethics & Society was created to research and help address six major ethical challenges facing the real-world application of AI. This is the first in a series of blog posts exploring these challenges in greater depth.
