I Think AI Bias Is Ill-Defined, and Here Is Why…

Our general definition of bias doesn’t help much in the context of AI, where a broad set of challenges exists for different types of users.

Anand Tamboli®
tomorrow++
Jun 22, 2019 · 6 min read


The ethical framework for any technology has long been one of my favourite topics, and one close to my heart. A few weeks ago, I was writing a response to the consultation on Australia’s Ethics Framework for Artificial Intelligence, which was invited by the Department of Industry, Innovation & Science.

One of the critical sections was, obviously, the one on bias, which was linked to defining the fairness principle for AI.

The draft paper proposed fairness as one of the core principles for AI and explained it somewhat like this: the development or use of an AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensuring the “training data” is free from bias or characteristics that may cause the algorithm to behave unfairly.

However, the paper did not prescribe any approach for validating this fairness or the absence of bias.

If you dig deeper into bias and fairness, you soon realise that our definition of bias may not be comprehensive, and that we are possibly missing something.

It quickly becomes convoluted to define what the absence of bias should look like, let alone to produce some kind of formula or equation that can be fed to an AI/ML algorithm.
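To see why, consider demographic parity, one commonly used fairness check. The sketch below is purely illustrative, with invented data and function names; the draft paper prescribes no such test.

```python
# A minimal sketch of one common fairness check: demographic parity.
# All data and names here are invented for illustration.

def demographic_parity_gap(outcomes, groups):
    """Difference in positive-outcome rates between groups A and B.

    outcomes: 0/1 decisions produced by a model
    groups:   group label ("A" or "B") for each decision
    """
    rates = {}
    for g in ("A", "B"):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    return abs(rates["A"] - rates["B"])

# A model that approves 80% of group A but only 40% of group B:
outcomes = [1, 1, 1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(outcomes, groups))  # ≈ 0.4
```

Even a gap of zero on a metric like this would not certify a system as “free from bias”: it captures one narrow notion of unfairness, and most of the biases discussed below would never surface in such a test, because the disadvantaged users may not appear in the data at all.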

Let us have a quick look at the base definition of bias

As the Wikipedia definition says:

Definition 1. Bias is disproportionate weight in favour of or against one thing, person, or group compared with another, usually in a way considered to be unfair.

Biased means one-sided, lacking a neutral viewpoint, or not having an open mind. Bias can come in many forms and is related to prejudice and intuition.

Definition 2. In science and engineering, a bias is a systematic error. Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average.

Here, the definition of interest is the first one. But let me emphasise that the second definition is also relevant to us, if more remotely, through machine learning and hence AI.
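A quick simulation makes Definition 2 concrete. The numbers below are invented; the point is that unfair sampling produces a systematic error, not a random one.

```python
import random

random.seed(42)

# A population in which 30% of members have some attribute of interest.
population = [1] * 300 + [0] * 700

# Fair sampling: every member is equally likely to be drawn.
fair_sample = random.sample(population, 100)

# Unfair sampling: members with the attribute are three times as likely
# to be drawn (say, because they are easier to reach).
weights = [3 if member == 1 else 1 for member in population]
skewed_sample = random.choices(population, weights=weights, k=100)

print(sum(fair_sample) / 100)    # fluctuates around the true rate, 0.30
print(sum(skewed_sample) / 100)  # fluctuates around 0.56: biased high
```

No matter how many times the skewed process is repeated, its estimate stays high on average; that systematic offset, rather than any one-off error, is what statistical bias means.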

We are already biased about bias!

The bias that is usually talked about is only of the fashionable kind, perhaps because it gets media attention and is often quite offensive: racial bias, age- or gender-related bias, and so on.

But we fail to talk about all the other kinds of biases that our experiences and circumstances load upon us: our perceptions, reactions, literary preferences, technological choices, and many more.

Ironically, we talk about the bias in a biased manner!

Can you think of other kinds of biases?

When designing an AI system, we tend to forget the biases that do not directly affect machine learning but still leave the final output unsuitable for, and biased against, the end users.

1. Technology adoption

Not everyone uses a high-end smartphone or a top-notch computer. Sometimes it is just a matter of choice. I do not like a few social media options and hence stay away from them. Should I, because of that, be cut off from benefits I would otherwise be entitled to?

For example, if someone consciously chooses not to be on LinkedIn, do they lose the right to apply for a corporate job? Should an applicant tracking system (ATS) ignore their application (or deprioritise it) on that basis? Should recruiters not try to understand the underlying reason for not using LinkedIn before making any decision? Would you consider someone anti-social if they don’t use Facebook? Or boring if they don’t use Instagram?

Would an AI solution that ranks someone based on these aspects be considered any less biased?
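To make this concrete, here is a hypothetical ranking sketch; the feature names and weights are invented for illustration and are not taken from any real ATS.

```python
# Hypothetical candidate-ranking sketch. The feature names and weights
# are invented for illustration, not taken from any real ATS.

def rank_candidate(candidate: dict) -> float:
    score = 2.0 * candidate["years_experience"]
    score += 1.5 * candidate["skills_match"]  # skills fit, scored 0-10
    # An innocuous-looking "online presence" signal quietly penalises
    # anyone who chose not to be on these platforms.
    if candidate["has_linkedin"]:
        score += 5.0
    if candidate["has_instagram"]:
        score += 2.0
    return score

alice = {"years_experience": 8, "skills_match": 9,
         "has_linkedin": False, "has_instagram": False}
bob = {"years_experience": 6, "skills_match": 8,
       "has_linkedin": True, "has_instagram": True}

print(rank_candidate(alice))  # 29.5
print(rank_candidate(bob))    # 31.0: Bob outranks Alice despite weaker
                              # experience and skills, purely for being
                              # on the "right" platforms
```

Nothing in this model mentions a protected attribute, yet it systematically disadvantages people for a lifestyle choice, which is exactly the kind of bias our usual definition overlooks.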

2. Economic backgrounds

It is common for people’s access to technological resources to be a function of their economic background or financial ability.

For a particular AI app that needs to transfer a vast amount of data to the cloud before producing any meaningful output, would my basic handset with a low-cost mobile data plan present a challenge? Would I be denied a service, or get a subpar outcome, just because I couldn’t afford a larger data plan or a high-bandwidth connection? Not everyone can afford a 5G connection at all times!

Do we even consider this to be a factor when designing an AI solution?

3. Regional aspects

For several reasons, users may choose to live in a particular region. Someone may like living in the countryside or the wilderness; someone else may live in a high-rise building. This could be a conscious choice, or perhaps there was no other option left.

However, I have noticed that interest rates for home loans often factor in this regional aspect. If someone lives in a high-rise building, which is often considered a higher-risk property, the interest rate might be higher than for a freehold property. Should one be penalised for their living-location preferences, regardless of their ability to pay?

Here is a classic example of the existence of such a bias: https://www.bloomberg.com/graphics/2016-amazon-same-day/

4. Dependencies & special needs

How about children or older people using technology? Accessibility requirements are common from an infrastructural perspective, but few of them carry over to technology itself. How often do you come across a technology that is easy to use for all age groups?

What about people with disabilities or physical limitations — shouldn’t this be accounted for?

5. Automation bias

Where a large volume of data needs to be processed to arrive at some conclusion, using an AI solution is often a recommended approach.

However, whenever decision-making is involved, such recommendations can prime the decision-maker and significantly increase the chances that the recommended option is selected as the final decision.

Have you noticed how the default choices in an IT system’s input form are commonly submitted as the response? The reasons could be many: maybe we are lazy, or we have decision fatigue. It could also be our bias towards the computer system, i.e. automation bias, where we assume the recommended choice must be the best one because the computer can’t be wrong. The result is, largely, our mental and intellectual resignation.

Bias doesn’t come from AI algorithms; it comes from people.

I think

The typical bias that we talk about in the context of AI is ill-defined. Our general definition of bias isn’t entirely helpful for AI, where a broad set of challenges exists.

In fact, in a way, AI exhibiting bias is a good sign, and arguably a strength: AI makes our biases visible to everyone. We do not need to work on AI or its algorithms to remove our bias; we need to work on ourselves, to educate ourselves and clean up our head-space. AI will only reflect how we think and what we do.

Let us redefine AI bias as something we don’t need to avoid. Instead, let AI expose it. The caveat is that, once it is exposed, we must remain accountable for it and fix it!

About the Author: I help businesses find and solve meaningful problems, often using emerging technologies and innovative methods together. My focus is the sensible adoption of technology. I am also the author of an award-winning book on the Internet of Things.

Keen to discuss innovation, technology, humanity, and more? Connect with me…
