Who’s the fairest of them all? Not AI

Jivan Virdee
Published in Design Voices
Nov 8, 2017 · 4 min read


AI is the future, but it is intrinsically tied to our past — how do we expose human bias in machine learning?

Organisations are increasingly adopting black box models to automate certain processes and assist decision making, which will impact how we work and live. Today, there are models to help decide whether you get hired for a job or get a mortgage, whether you’re eligible for parole and how much your health insurance costs.

Although these models are implemented with good intentions, they are inevitably tied to human biases. Machine-learning algorithms tend to rely on existing data, which embodies the social, economic and political environment in which it was created. As studies have shown, even our language — both the meanings and context of the words we use — contains our implicit biases. For instance, Google Translate has been found to render gender-neutral English words such as professor and doctor in the masculine form when they are translated into other languages, while words such as nurse are translated into the feminine form.* A paper published this year by researchers at Princeton University and the University of Bath showed that machines learn and mirror human associations between words — from the innocuous association between pleasantness and flowers, and unpleasantness and insects, to the more unpalatable association between male and female names and careers.** As a result, AI systems may be perpetuating historical biases that would now be deemed unacceptable.
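To give a sense of how such word associations are measured, here is a minimal sketch of a WEAT-style association test. The word vectors below are toy values invented purely for illustration; a real test, like the one in the Princeton/Bath paper, would use pre-trained embeddings such as GloVe or word2vec.

```python
import numpy as np

# Toy word vectors for illustration only; a real association test would
# load pre-trained embeddings (e.g. GloVe or word2vec) instead.
vectors = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.2, 0.8, 0.3]),
    "he":     np.array([1.0, 0.0, 0.2]),
    "she":    np.array([0.1, 1.0, 0.2]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Positive: the word sits closer to 'he'; negative: closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for word in ("doctor", "nurse"):
    print(word, round(gender_association(word), 3))
```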

You might have heard of highly publicised cases where AI systems have gone rogue, made embarrassing mistakes and revealed our ugly human prejudices, such as Google Photos’ racist tagging catastrophe and the incident with Microsoft’s Tay chatbot. Incidents like these expose the imperfections of AI and the problems that can result.

Algorithms aren’t perfect — and neither are people

The first step of dealing with bias in AI is to take responsibility for it. As creators of AI, the systems we make are likely to embody our biases. We need to work towards identifying and eliminating these in order to ensure that AI systems of the future reflect the more ethical society that we aspire to be.

Algorithms are mathematical; they follow their instructions exactly. This can lead the general public to the misguided conclusion that algorithms are objective and perfect. However, because it is currently people who choose the data and train the models — telling the system what is relevant and what outcomes we want — this is not the case. As a result, there can be unintended consequences if systems aren’t designed to account for bias and misinformation.

Algorithms tend to display over-confidence; even if an algorithm is unsure, it doesn’t always admit it, contributing to the public perception of their mathematical perfection. Again, this is a design consideration, as confidence levels can be calculated and exposed. Algorithms need to become more humble to help the user understand that, just like us, a system can be unsure.
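As a sketch of what exposing confidence can mean in practice, the example below trains a toy classifier and surfaces its predicted probability alongside the label. The data is invented and scikit-learn’s logistic regression is just one of many models that can report a probability; the point is that the interface can say “probably” rather than “definitely”.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny synthetic dataset, purely illustrative.
X = np.array([[0.1], [0.4], [0.5], [0.9]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# Instead of returning only the hard label, surface the model's own
# uncertainty so the user can see how sure (or unsure) the system is.
x_new = np.array([[0.45]])
label = model.predict(x_new)[0]
confidence = model.predict_proba(x_new)[0][label]
print(f"prediction: {label}, confidence: {confidence:.2f}")
```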

The good news: We can fix this

We have a responsibility to consider any potential biases and how we might account for these when we are designing AI systems. During development, we can directly test our systems, looking at how predictions differ based on known biases.
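One simple form such a test can take is comparing the model’s rate of positive predictions across groups defined by a known sensitive attribute. The sketch below uses made-up predictions and group labels purely for illustration; a real audit would use held-out data and more than one fairness metric.

```python
import numpy as np

# Hypothetical model outputs and a sensitive attribute for a test set.
# In practice these would come from your own model and data.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # e.g. 1 = "approve"
group       = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def positive_rate(preds, groups, g):
    return preds[groups == g].mean()

rate_a = positive_rate(predictions, group, "A")
rate_b = positive_rate(predictions, group, "B")

# A demographic-parity style check: how differently does the model treat
# the two groups? A ratio far from 1 is a flag worth investigating.
print(f"group A approval rate: {rate_a:.2f}")
print(f"group B approval rate: {rate_b:.2f}")
print(f"ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```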

Addressing bias will also involve algorithms becoming more explainable in terms of how they go about making decisions and how confident they are about the conclusions they’ve come to. Explaining a decision process will help us to identify how biases might creep into the system — and enable us to start working to correct them. Furthermore, when an algorithm displays humility around a decision, perhaps we will feel more comfortable questioning it.
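For one very simple notion of explainability, a linear model lets us report how much each input feature pushed a particular decision. The feature names, data and model below are hypothetical; the point is only that per-feature contributions can be read straight off as coefficient times value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: the feature names and data are made up.
feature_names = ["income", "debt", "years_employed"]
X = np.array([[50, 10, 2], [30, 20, 1], [80, 5, 10], [20, 25, 0]], dtype=float)
y = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, each feature's contribution to a decision is simply
# coefficient * value, which gives a human-readable explanation.
applicant = np.array([40.0, 15.0, 3.0])
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```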

Realistically, it is inevitable that bias will arise in our AI systems at times. The default assumption should therefore be that our systems are biased, and the onus should be on us either to demonstrate otherwise or to mitigate the bias where necessary.

By accounting for bias and questioning our models and ourselves, we can work towards designing AI systems that evolve and learn to help us create the fairer society we strive for. However, further questions remain. As part of designing better, fairer AI, we need to consider whether we should train our algorithms according to how the world actually is or how we would like the world to be.

If choosing the latter, we need to consider whose vision of an ideal future we follow — and that might not be so easy.

*https://www.fastcompany.com/3010223/google-translates-gender-problem-and-bing-translates-and-systrans

**http://science.sciencemag.org/content/356/6334/183.full

Image: https://pixabay.com/en/robot-technology-2033898/

