Many people believe that machines and robots are perfectly impartial. Free from human emotions, they are never affected by bad moods, swayed by empathy, or influenced by social bonds, so the decisions they make will always be driven by logic and reason. But the truth is, machine learning algorithms are just as prone to biases as humans are. Remember, your AI is only as good as the data it is trained on — and sadly, it sometimes ends up reflecting the human biases that already exist in the world today.
To understand how biases can happen, you’ll first need to understand the machine learning black box.
Machine learning and the ‘black box’
With machine learning, your AI will be able to make sense of large amounts of data, identify patterns, learn from experience, and make better decisions. Machine learning unlocks an exciting world of possibilities for AI, as it enables more complex decision-making with greater accuracy and consistency.
However, there’s one downside to machine learning — because the AI learns and makes decisions on its own, it can be very hard to know how the algorithm actually arrives at its conclusions. Because the model learns from data, it’s easy for biases to creep in; because of the ‘black box’, there’s no easy way of spotting and fixing them.
How does AI bias happen?
There are many reasons why AI bias happens. Here are some examples.
1. Imported from humans
All humans have biases, whether conscious or unconscious. We often misuse information when we make decisions, or rely too heavily on heuristics. In our daily lives, these are innocent biases. However, if we do not catch and correct them, they will eventually become deeply embedded in AI — which will compound the biases at a much larger scale.
For instance, if we train an AI model based on data that contains biased human decisions in an inequitable society, the model will be just as prejudiced as society.
This will result in discriminatory and socially unacceptable decision-making by the AI. One notable example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) programme. COMPAS uses machine learning to predict the likelihood that a defendant will re-offend, but a ProPublica study found that its predictions are biased against Black defendants, who were far more likely than white defendants to be wrongly flagged as high-risk.
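The disparity ProPublica reported can be surfaced by comparing error rates across groups. Below is a minimal sketch of that idea; the records and field names are entirely hypothetical, not real COMPAS data:

```python
# Hypothetical risk-score records; all data is made up for illustration.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": True},
]

def false_positive_rate(rows):
    """Share of people who did NOT re-offend but were still flagged high-risk."""
    negatives = [r for r in rows if not r["reoffended"]]
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives)

fpr_by_group = {
    g: false_positive_rate([r for r in records if r["group"] == g])
    for g in ("A", "B")
}
# A large gap between groups (here 0.5 vs 0.0) is exactly the kind of
# disparity an audit should flag.
```

The same comparison works for any error rate (false negatives, miscalibration) once predictions and outcomes are logged per group.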
2. Data collection or preparation
Bias can also be introduced at the data collection or preparation stage, especially when the dataset is not representative. Abhinav Dadhich, Senior Data Scientist at ABEJA Singapore, elaborates: “Take for example a facial recognition tool that recommends products based on age. Your training data needs to cover not just a wide range of ages, but also a wide range of races. If the model is trained only with Asian datasets, it would not work for Caucasians as their facial features are different. This may result in adverse effects if the recommended product is unsafe for their actual age.”
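A first practical check on the problem Abhinav describes is simply measuring how training samples are distributed across demographic attributes. A minimal sketch, with hypothetical field names and data:

```python
from collections import Counter

def representation_report(samples, attribute):
    """Share of training samples per value of a demographic attribute."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical training set for an age-estimation model.
training_samples = [
    {"age_band": "20-29", "ethnicity": "Asian"},
    {"age_band": "30-39", "ethnicity": "Asian"},
    {"age_band": "20-29", "ethnicity": "Asian"},
    {"age_band": "40-49", "ethnicity": "Caucasian"},
]

report = representation_report(training_samples, "ethnicity")
# A heavily skewed share (here 75% one group) is a warning sign that the
# model may underperform on under-represented groups.
```

Running the same report per age band, and per combination of attributes, helps catch gaps before the model is trained rather than after it fails in production.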
3. User-generated data
The AI can also pick up biases when user-generated data is used to create a feedback loop. One horrific example is Tay, Microsoft’s Twitter-based chatbot. Tay was designed to learn from interactions with other Twitter users (i.e. user-generated data). The learning worked, but within 24 hours, Tay was tweeting racist remarks and Microsoft had to shut it down.
How can you mitigate AI bias?
Preventing and mitigating AI bias is a process that requires active and constant effort. And it’s non-negotiable if we want to utilise AI as a force for good.
It’s far too easy to blame the machine learning ‘black box’ for your AI going rogue. However, if we wish to create a better future with AI, companies must be held to a higher standard. It’s important that we use AI to reduce bias, not exacerbate it.
1. Accept responsibility
In other words, we must accept responsibility for our AI models — not just for our business objectives, but also for the ethical outcomes that affect wider society. David Bergendahl, Head of Business Development at ABEJA, explains: “Just because you can, doesn’t mean you should. Beyond looking at the domain of data gathering and privacy, companies should look at considerations to ensure that models can adhere to this standard of fairness.”
2. Create fairness metrics
There are many ways to define fairness, and people have different ethical standards. But as a data scientist, Abhinav’s view is that “it’s important for companies to take a step back to evaluate the ethical concerns and shortcomings of every project.”
With that in mind, every AI project should involve the creation of fairness metrics that will help to ensure that the AI model remains unbiased. One way to do that is to work with an AI solutions provider like ABEJA Singapore to evaluate the training datasets, identify potential areas of bias, and finally, to come up with fairness metrics.
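One simple, widely used fairness metric is demographic parity: the rate of positive outcomes should be similar across groups. A minimal sketch of how such a metric could be computed; the data, field names, and the 0.1 threshold are illustrative assumptions, not a standard:

```python
def selection_rate(outcomes, group):
    """Fraction of a group that received the positive outcome."""
    rows = [o for o in outcomes if o["group"] == group]
    return sum(1 for o in rows if o["approved"]) / len(rows)

def demographic_parity_gap(outcomes, group_a, group_b):
    """Absolute difference in positive-outcome rates; 0.0 means parity."""
    return abs(selection_rate(outcomes, group_a)
               - selection_rate(outcomes, group_b))

# Hypothetical model decisions, purely for illustration.
outcomes = [
    {"group": "X", "approved": True},
    {"group": "X", "approved": True},
    {"group": "X", "approved": False},
    {"group": "Y", "approved": True},
    {"group": "Y", "approved": False},
    {"group": "Y", "approved": False},
]

gap = demographic_parity_gap(outcomes, "X", "Y")
# A gap above an agreed threshold (say 0.1) would trigger a review.
```

Demographic parity is only one possible definition of fairness; which metric is appropriate depends on the project, which is why the evaluation step Abhinav describes matters.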
3. Incorporate human judgement
It’s impossible to create fairness metrics that take into consideration the nuances of the social context in every situation. That’s why your fairness metrics should always be augmented with human judgement. David shares this view: “It’s important for companies and governments to realise that human judgement is necessary. We should draw on many disciplines and diverse groups to develop standards so that humans can deploy AI with fairness in mind.”
These standards can include early discussions about the AI’s potential for predatory behaviour (e.g. marketing to young children), as well as ongoing impact assessments and audits to check for biases both before and after the AI is deployed.
Ultimately, companies should adopt transparency in their agenda for using AI, and empower their people to set fairness standards, apply statistical data, and determine if the AI is suitable to be deployed.
Worried about bias creeping into your AI? Claim your complimentary consultation session with ABEJA Singapore today and find out how we can help you build and maintain a fair and ethical AI model.