Ethical Algorithms: How to Make Moral Machine Learning

One day we might have algorithms that are more ethically responsible than we are. They might have the power to work through enormous data sets and filter out any training data that would produce unethical consequences in other algorithms. Until that day comes, what can be done to minimise the danger of machine learning-trained algorithms mimicking our bigotry?

Dan Corder
QDivision
Jan 16, 2018


Implementing machine learning algorithms often leads to cringeworthy, unethical consequences. Two fundamental elements of machine learning become serious problems when combined and lead to horror stories of companies and governments unknowingly creating racist, sexist and otherwise immoral technologies.

“Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.”

The Guardian

Machine learning programs have also been shown to be biased against black prisoners and to associate women with jobs of lower status and pay. Algorithm-driven advertising has suggested home security improvements to people who frequently type names that are more commonly given to people of colour.

The Two Elements that Create the Problem

Here’s the first problem with machine learning: it impersonates people. Its whole purpose is to process data so that the algorithm developed from that data can perform functions that humans can, like recognising faces, translating from one language to another, or discovering patterns and grouping data by common elements. Yet data that represents humans naturally reflects human biases and bigotries too.

Here’s the second problem with machine learning: it generates algorithms that are too complex for people to fathom. Human brains are ridiculously complex; functions like recognition and decision-making are influenced by a myriad of factors, and most of those complexities and influences are beyond our ability to recognise and understand. Sometimes you see someone from a distance and immediately recognise who they are, even though you cannot say what it was you saw that caused you to recognise them. None of us knows how much we know. Before machine learning, we could not write algorithms that competed with our brains, because we could not mirror their extraordinary capabilities in the algorithms when we cannot understand everything that is going on inside our heads. Machine learning removes that issue, because it generates algorithms by learning from millions of examples, just as humans do. But this means the resulting algorithms are just as inexplicable to us as our brains.

When combined, these two elements culminate in algorithms that act in bigoted ways. The algorithms are developed from data sets that usually contain millions to hundreds of millions of pieces of information about people, and so they come to represent all of those people and, often, even their most subtle or unconscious bigoted beliefs. And the algorithms are so dense that it is generally not possible to preempt or spot the potential for immoral consequences. So harmful consequences have to manifest, then be spotted and reported, before corrective action can be taken. Any response is reactive, after some damage is done.

New Expectations on Companies to Act Responsibly

Widespread media coverage of algorithms gone wrong has led to expectations that companies act more responsibly. Some organisations have tried empowering regular users of their products to be watchdogs tasked with reporting discriminatory behaviour. Others have diversified their development teams in the hope that having more women, people of colour and other underrepresented groups will lead to better selection of data sets, since people who suffer bigotry are more likely to notice that same bigotry in the sets. Women, for example, are more likely than men to spot sexist trends in data that is used to train and generate algorithms.

Geoff Nitschke is a senior lecturer in the Computer Science Department at the University of Cape Town. He agrees that “pruning the data sets is probably the key way” of mitigating the harmful consequences of data-trained algorithms. With millions upon millions of pieces of data, it is not reasonable to expect a team of humans to audit whole sets and remove every bigoted data point. But one day soon we may be able to use a machine learning algorithm to do just that.

The Future Solution May Be Algorithms Trained For Morality

Geoff cites exploratory research in leading development labs around the world that is investigating whether we can develop algorithms that “learn to be ethical”. In Geoff’s words, “You and I have grown up in a certain type of environment. We have learned our ethics through years of interaction, and the idea is that the machine learning algorithm would do that, but do it at a much faster pace, as it is receiving millions of inputs every second. It has a huge amount of sensory processing power. It would learn this model type of ethical behaviour from repeated exposure to what is ethical and what is not, according to our own standards. Then, whenever new data comes in, (the algorithm) would classify it as a good example for training or a bad example for training”, based on whether it contains bigoted biases. So algorithms that can distinguish between ethically acceptable data and ethically unacceptable data may one day be used to filter data before it is used to train algorithms that have other purposes.
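To make the idea concrete, here is a minimal, purely illustrative Python sketch of that filtering step. Everything in it is an assumption made for illustration: the tiny hand-labelled examples, the prune helper and the 0.5 threshold are invented, and a simple text classifier stands in for the far more sophisticated “ethics” model Geoff describes.

```python
# Toy sketch: use one model to prune biased examples from the training
# data of another. All labels, thresholds and example sentences are
# illustrative only, not a real bias-detection system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: a small, hand-labelled set of acceptable (0) and biased (1)
# training sentences. A real system would need vastly more data.
labelled_texts = [
    "The candidate has five years of engineering experience.",
    "Women are too emotional to lead engineering teams.",
    "The applicant completed a degree in computer science.",
    "People from that neighbourhood are probably criminals.",
]
labels = [0, 1, 0, 1]

# Step 2: train the "ethics" classifier on the labelled examples.
bias_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
bias_filter.fit(labelled_texts, labels)

# Step 3: before training any downstream model, drop candidate data
# that the filter flags as likely biased.
def prune(candidate_texts, threshold=0.5):
    """Return only the texts the filter considers acceptable."""
    bias_probability = bias_filter.predict_proba(candidate_texts)[:, 1]
    return [text for text, p in zip(candidate_texts, bias_probability)
            if p < threshold]

clean_training_data = prune([
    "The engineer shipped the feature on time.",
    "Men are naturally better at mathematics.",
])
print(clean_training_data)
```

Whatever survives the pruning step would then be used, in this hypothetical setup, to train the downstream face-recognition, translation or recommendation model, so the bias is filtered out before it can be learned.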

This research is so exciting because it has the potential to remove destructive elements from algorithms before they have harmful effects, and it would remove the need for reactive attempts to fix algorithms after they have already caused harm. Ironically, one future solution to the problem of machine learning algorithms may be more machine learning algorithms.

Like our ideas? Visit our website and keep up to date with all that is Q Division on Twitter and LinkedIn.
