Algorithmic Bias In AI

Pulkit Jain
5 min read · Dec 4, 2021


A computer can make a decision faster, but does that make the decision fair?

Artificial Intelligence has evolved radically since its inception in 1956. From text prediction as I write this article to driverless cars, AI has ensured its permanence in our lives and is no longer just a term in sci-fi movies and books. The major reasons for its massive adoption are the instant accessibility of the software, low cost, enhanced security and fast processing power. People tend to believe that the results they get from these AI models are correct, but often they are not.

Just as humans make decisions based on the conditioning of their minds over decades in their respective situations, AI works in an analogous manner. And just as humans can be biased and error-prone, AI is no exception. The algorithms used to train AI models can be subject to a considerable amount of bias.

So, what does bias refer to here? Algorithms can discriminate against one group in favour of others based on categorical features such as race, color, gender or nationality. This is the effect of gradually training AI models on biased data, and it is what algorithmic bias in AI means.

As you might have guessed, such bias in algorithms does create problems, and sometimes it can create havoc. This is generally because of AI's major influence on sensitive issues such as political propaganda, religious beliefs and online advertisements. Let me give you some examples:

1) Suppose you applied for a job at a big tech company and got rejected. If the rejection genuinely came from a recruiter, that's fair enough. But what if it didn't? What if the ATS rejected you based on some bias? Amazon's recruiting tool reportedly penalized applications from women. Because the tech industry was dominated by men, and because the tool was trained on that historical data, it grew biased toward men over women. Amazon says the tool was never actually used and was shelved for several other reasons (if you want to believe that).

2) You have probably heard about Donald Trump's political propaganda on Facebook, where targeted ads for Donald Trump were shown to people on their timelines. This is a major issue because it has the potential to change who runs the world's richest economy, and a superpower too.

3) Suppose the government installed facial recognition cameras in a terrorism-prone place on Independence Day to ensure its security. What if, due to algorithmic bias, a camera identified you as a potential terrorist and sounded the alarm? That could really disrupt your peaceful life.

These are the reasons we should take algorithmic bias seriously and work to mitigate it.

Techniques to reduce algorithmic bias

1. Diversify your team

Besides the data and the algorithms used, the engineers and researchers who collect the data and build the AI models are also responsible for the resulting bias. This is because collecting data is more a matter of perspective than of purely rational decision-making. For example, if a team of researchers consists largely of white men, the resulting data is likely to be biased against women and Black people, because people with the same background working together are often less empathetic toward those who face discrimination. A study at Columbia University found that "the more heterogeneous the engineering team is, the less likely it is that a prediction error will happen." So institutions and organizations collecting data should build teams of people with a range of different educations, experiences and backgrounds. They should bring together data scientists, lawyers, accountants, business leaders, historians, sociologists, statisticians, mathematicians, ethicists and others. Everyone will have their own perspective on bias, and together they can work to diminish it.

2. Identify Vulnerabilities

Companies building AI models should identify their customers and the different biases those customers are vulnerable to. Organizations such as banks, educational institutions, law firms and hospitals are all vulnerable to different biases; some may be mild, while others may be much more severe. Identifying the unique vulnerabilities of each sector helps engineers focus on the issues that cause the most damage and work to minimize them. While doing so, companies should also assess the financial, operational and reputational risks.

3. Control your data

Traditional controls are probably not robust enough to detect the particular issues that can distort artificial intelligence. Special attention should be given to biased correlations in datasets. Data acquired from third parties should be validated carefully against bias before use. Historical data is more likely to carry bias, so it should be handled with caution too. Synthetic data can also work well to fill gaps in datasets and reduce bias.
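To make this concrete, here is a minimal sketch of one such control: auditing historical data for a biased correlation before training on it. The records, the `gender`/`hired` fields and the 0.8 "four-fifths rule" threshold are all illustrative assumptions, not something from the cases above.

```python
# Sketch: flag a dataset whose positive-outcome rate differs sharply
# between groups, before that data is used to train a model.
from collections import defaultdict

def selection_rates(records, group_key, label_key):
    """Fraction of positive labels (1s) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[label_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    Values below ~0.8 are a common red flag (the four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical historical hiring records (1 = hired).
data = [
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 0},
    {"gender": "F", "hired": 1}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
]

rates = selection_rates(data, "gender", "hired")
print(rates)                    # {'M': 0.75, 'F': 0.25}
print(disparate_impact(rates))  # ~0.33, well below 0.8: flag before training
```

A check like this would have surfaced the skew in the Amazon-style hiring data long before any model learned it.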

4. Independent and continuous validation

As with any other major task, companies should have an additional, independent internal team whose job is to verify that the collected data is free of bias. Alternatively, they can employ third-party services to continuously monitor the data they collect. Organizations can also use specialized software to automate this task.
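As a sketch of what such automated, continuous validation might look like: compare each incoming data batch's group mix against an approved baseline and raise an alert when it drifts too far. The baseline, group labels and 10% tolerance are illustrative assumptions.

```python
# Sketch: a recurring check that alerts when a new data batch's group
# composition drifts away from an approved baseline distribution.
from collections import Counter

def group_shares(labels):
    """Share of each group in a batch of group labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def drift_alerts(baseline, batch_labels, tolerance=0.10):
    """Return (group, expected, observed) for every group whose share
    moved more than `tolerance` away from the baseline."""
    current = group_shares(batch_labels)
    alerts = []
    for group, expected in baseline.items():
        observed = current.get(group, 0.0)
        if abs(observed - expected) > tolerance:
            alerts.append((group, expected, observed))
    return alerts

baseline = {"A": 0.5, "B": 0.5}       # approved, balanced collection
batch = ["A"] * 8 + ["B"] * 2         # new batch skews toward group A
print(drift_alerts(baseline, batch))  # [('A', 0.5, 0.8), ('B', 0.5, 0.2)]
```

Run on every batch (by an internal team, a third-party service or a scheduled job), a check like this turns "continuous validation" from a principle into a routine alert.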

How your contribution matters

As many readers of this blog may be data science enthusiasts or may work in the AI field, collecting data, building machine learning models and gradually turning those models into software, I urge you to take the time to explore this matter further. More people should be aware of this flaw in algorithms, so that they can not only guard against it but also contribute to mitigating it. The solution starts with you, and you have a crucial role to play.

Thanks for reading.
