Photo by Tingey Injury Law Firm on Unsplash

Ethical from start to finish.

Dr J Strudwick
Oct 6

Buzzwords, trends and fads come and go quickly, and they occur in all areas of life, even within data science and AI, where new tools constantly emerge and are hailed as the ‘next thing’. But there is a new buzzword, picking up momentum, that is really so much more: something that will most likely cause a paradigm shift and change how we do our work. Ethical AI.

Now there is some ambiguity in the definition of what Ethical AI actually is, but for me, the best description I’ve found is summarised by the 8 principles given by The Institute for Ethical AI & Machine Learning (link). In essence, this movement is trying to ensure that any AI system that concerns itself with humans adheres to our moral and ethical standards. I’m not going to talk about all of these principles; instead I’m going to talk about what’s at the heart of it all, fairness and explainability, and the root cause of problems with both: bias.

Being ethical is something that responsible scientists know intrinsically. During my Ph.D., before I could even begin any analysis, I had to get ethics approval, even though the data I was using was already publicly available and anonymised. Ultimately this was a hoop-jumping exercise, but it made sure my work was ethical from the start. What has started the ‘Ethical AI’ trend, however, is that this kind of ethical decision-making has not been making its way into the AI models being built.


This isn’t a flaw with the computers or algorithms. What people tend to forget is that an AI system follows the exact steps that were programmed into it and makes decisions based on the data that was presented to it. It has no idea that any of these variables could result in discrimination: a variable holding Female or Male may as well hold red or green. Ultimately, any unethical behaviour from an AI system is the sole responsibility of the person who made it.

Let’s be clear from the start. According to Google, discrimination is “the unjust or prejudicial treatment of different categories of people, especially on the grounds of race, age, sex, or disability.”

Obviously, there are more variables that could be added to the list in that definition, sexuality and religion for example. Any variable that could form grounds for discrimination we shall refer to as a discriminatory variable. Two points to note before we continue.

· A friend wisely said, “If you want to be perfectly unbiased then you either make no decision or a uniform one.” The essence of what they said is right: by making tailored decisions there will always be some amount of bias or unfairness. We just need to try to make it as small as possible.

· We should be mindful that ethics is influenced by culture, perception, and society, all of which change over time. What was considered ethical 50 years ago may not be ethical by today’s standards, and even our ethics today may be considered unethical in 50 years’ time.

Now, assuming that the intended use of the system is itself ethical, there are two points at which bias can creep into the system: the data collection and the model construction steps. Let us start with the latter and assume that our data is free of any bias. Ultimately, if you have a potentially discriminatory variable, it boils down to asking yourself one question: “Is this variable relevant, and are there significant differences between the groups?” If the answer is no, then do not include it! If there are distinct patterns between the groups, which you can show, and those differences are an essential part of the objective, then treating the groups differently is justified.


For example, consider a model to prioritise patient treatments for an illness, and suppose a discriminatory variable is included. Now say one group is, on average, prioritised over the other. If that is because the group being prioritised is more susceptible to the illness, and you can prove it from the data, then that prioritisation is just. On the other hand, if you can’t show the difference, then it is unjust and that feature should not be included. In that case, the model must be built again with that variable removed.
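To make the “prove it from the data” step concrete, here is a minimal sketch of that check. The dataframe, the file name, and the column names (sex, has_illness) are hypothetical stand-ins, and a chi-squared test of independence is just one reasonable way to test whether susceptibility genuinely differs between groups.

```python
# A minimal sketch of the "can you prove it from the data?" check.
# The file, dataframe, and column names are hypothetical -- substitute
# your own data and candidate discriminatory variable.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("patients.csv")  # hypothetical file

# Cross-tabulate illness counts for each group of the candidate variable
table = pd.crosstab(df["sex"], df["has_illness"])
print(table)

# Chi-squared test of independence: is susceptibility genuinely
# different between the groups, or is the gap just noise?
chi2, p_value, dof, expected = chi2_contingency(table)

if p_value < 0.05:
    print("Groups differ in susceptibility -- including the variable may be justified,")
    print("provided the difference is essential to the objective.")
else:
    print("No demonstrable difference -- drop the variable and rebuild the model.")
```

A significant difference is only a necessary condition, not a sufficient one: the difference also has to be an essential part of the model’s objective before keeping the variable is justified.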

You could say: simply impose a constraint within the model. However, the risk with this is that the bias is pushed away from one feature and onto another. Applying this to our example, suppose you enforce that, on average, the second group should fall above a minimum priority level. On exploring the results, you find that within the second group, people from a single location are given a high priority and the rest a very low one. On average the constraint is still satisfied, but the bias has now shifted onto where the person lives. Alternatively, you could modify the result after it has been through the model, but then the model is really superfluous and you might even impose your own unconscious bias. So the safest answer is to go back and be ethical from the start.
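This kind of shifted bias is easy to miss if you only check the constraint itself, so it is worth auditing the outputs at a finer grain. Below is a small illustrative sketch with made-up numbers, assuming a dataframe of model outputs with hypothetical group, location, and priority columns.

```python
# A hedged sketch of auditing an "on average" constraint.
# `scores` is a hypothetical dataframe of model outputs.
import pandas as pd

scores = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "B"],
    "location": ["north", "south", "north", "north", "south", "south"],
    "priority": [0.80, 0.75, 0.95, 0.90, 0.15, 0.20],
})

# The constraint looks satisfied at the group level...
print(scores.groupby("group")["priority"].mean())

# ...but drilling down shows the bias has moved onto location:
# group B's average is propped up entirely by one location.
print(scores.groupby(["group", "location"])["priority"].mean())
```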


The former point where bias can enter the system, the data collection step, is probably the harder of the two to address. A fair amount of the time we data scientists are working on second-hand data. Unfortunately, there is no solution other than exploring the data to make sure you understand it and are aware of any unexpected biases that are present, or going and collecting all the data yourself from scratch in an ethical fashion, although in practice most of us do not have the time. It falls back on the old axiom of “garbage in = garbage out”, and that, unfortunately, includes any bias in the data.

If the data is from a survey, it could have been conducted in one area where the demographics were heavily skewed towards one group, or the person collecting the results could have subconscious prejudices that led them to approach only certain groups of people. In the end, we need to make sure the data is representative of the people we are trying to model and that there aren’t any unexpected biases. For example, we would expect a survey of customers in a women’s clothes shop to be mostly women; this would be an example of an expected bias, and one that could be acceptable given the situation.
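One practical way to look for unexpected collection bias is to compare the survey’s demographics against a reference for the population you actually want to model. The sketch below assumes a hypothetical age_band column and made-up census proportions; any field you care about (location, sex, and so on) could be checked the same way.

```python
# A minimal sketch of a representativeness check, assuming you have
# reference population proportions to compare against (e.g. census data).
# The file name, `age_band` column, and reference figures are hypothetical.
import pandas as pd
from scipy.stats import chisquare

survey = pd.read_csv("survey_responses.csv")  # hypothetical file

reference = pd.Series({"18-30": 0.30, "31-50": 0.40, "51+": 0.30})  # assumed census shares
counts = survey["age_band"].value_counts().reindex(reference.index, fill_value=0)

comparison = pd.DataFrame({
    "survey": counts / counts.sum(),
    "population": reference,
})
print(comparison)

# Goodness-of-fit test: do the survey proportions plausibly match the population?
stat, p_value = chisquare(f_obs=counts, f_exp=reference * counts.sum())
if p_value < 0.05:
    print("Survey demographics deviate from the population -- investigate before modelling.")
```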

Tools are now being developed that can help with this challenge: FairML, Themis, and IBM Watson OpenScale. OpenScale is really impressive! It comes with a whole host of tools that help with monitoring bias and with explainability, such as monitoring deployed models in real time and creating alerts if it finds the model has made a potentially biased decision. It can also detect potential bias in the training data and watch for drift over time. But probably one of its most useful features is that, for each call made to the model, it will provide an explanation of how much each input feature affected the output. This means that if you have to explain a decision, it is easily done, and you can even explain what might help change the decision. Amongst all the possible use cases for OpenScale, my personal favourite is that it’s used by the AELTC for the Wimbledon Championships to help create unbiased match highlights!
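OpenScale has its own APIs for this, which I won’t reproduce here; the sketch below simply illustrates the underlying idea of per-feature contributions using a plain logistic regression on synthetic data, where each feature’s contribution to a single decision is its coefficient multiplied by the input value.

```python
# Illustrative only: NOT the OpenScale API. For a linear model, one
# decision's explanation can be read off as coefficient * feature value.
# All data and feature names here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "prior_visits"]  # hypothetical features
X = rng.normal(size=(200, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain a single decision: contribution of each feature to the log-odds
x = X[0]
contributions = model.coef_[0] * x
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```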

Ethical AI is here to stay, and it will become a part of the data science lifecycle. We must constantly be aware of where bias can creep in and how to deal with it. When you’re asked to make an AI model centred on people, I hope you remember to be ethical from start to finish.

Special thanks to Junaid Butt who helped me develop my ideas for this post.

