Companies Need to Mitigate A.I. Bias

Ashley <3 · Published in The Startup · 7 min read · Nov 17, 2020

Artificial intelligence is revolutionizing many industries today, such as healthcare, hiring, and automation. That progress tends to earn praise, but the praise lets the bias in companies’ A.I. software go overlooked.

Bias within this software can create plenty of problematic situations, including but not limited to sexist hiring, racism within the healthcare system, and groups being targeted based on stereotypes. In a way, the bias that exists within A.I. is what makes the models human-like: the people who create a model are a big factor in its decision-making process.

A.I. already holds significant power in our daily lives, and that power will continue to increase as technology advances. But A.I. software is only as effective as the data it is trained on, and bad training data leads to bad decision making. If companies do not take action to mitigate bias as much as possible, their A.I. software can be detrimental.

So what is A.I. bias?

A.I. bias is a situation that occurs when an algorithm produces results that are systematically prejudiced, usually as a result of cognitive biases or flaws in the data.

There are two components of A.I. bias:

  • Cognitive biases: bias introduced by the people who build the model. This can happen when a model’s creator favours one group over another, or when the training data set encodes those human biases.
  • Data flaws: incomplete data may not be representative, which results in bias. Outdated data can also produce unfair bias within a model.
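
To make this concrete, here is a minimal sketch (with entirely made-up data and numbers) of how biased historical labels in a training set get absorbed by a model:

```python
# Minimal sketch, hypothetical data: historical hiring labels were
# biased against one group, and the model learns that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)      # a genuinely relevant feature
group = rng.integers(0, 2, n)    # a group label that should be irrelevant

# Biased historical decisions: group 1 was hired less often at equal skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print(model.coef_)  # the 'group' weight comes out strongly negative:
                    # the model has absorbed the historical prejudice
```

Nothing in the code asks for bias; the skewed labels alone are enough.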

Hiring Bias At Amazon

Amazon implemented an A.I. recruiting process in 2014. The project reviewed applicants’ resumes and rated candidates with A.I.-powered algorithms, aiming to create a more efficient hiring system.

Unfortunately, in 2015 Amazon discovered that its A.I. system was rating candidates with a bias against female applicants. How did this even happen? The old training set was a huge contributor: the dataset was 10 years old, from a time when the tech industry was heavily male-dominated and men made up 60% of Amazon’s employees. As a result of the bias in the data, the A.I. was taught to prefer male candidates over female ones, even penalizing resumes that included the word “women’s.” Fortunately, as soon as Amazon discovered the bias, it stopped using the algorithm.
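
A toy example (not Amazon’s actual system, just an illustration with invented resumes) shows how easily a text classifier can learn to penalize a single word when the historical labels are skewed:

```python
# Hypothetical illustration only: a toy resume classifier. In this
# made-up history, resumes mentioning "women's" were rejected, so the
# model learns a negative weight for that token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java python",        # hired
    "women's chess club captain python",    # rejected by biased reviewers
    "java developer cloud infrastructure",  # hired
    "women's coding society lead java",     # rejected by biased reviewers
] * 50                                      # repeat for enough samples
labels = [1, 0, 1, 0] * 50

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" is negative: mentioning it
# lowers the predicted hiring score.
print(clf.coef_[0][vec.vocabulary_["women"]])
```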

Racial Bias in Healthcare A.I.

A 2020 study covered on ScienceMag.org found that a common healthcare risk-prediction algorithm developed by Optum demonstrated racial bias. The unfortunate part is that this algorithm had been used on around 200 million U.S. citizens (the analysis took place in September 2020).

The purpose of the algorithm was to determine which patients would benefit from additional medical care, but after careful analysis of the model, researchers found that it significantly underestimated the needs of Black patients. Why did this even happen? The creators of the algorithm used patients’ previous healthcare spending as a proxy for medical need. This flawed the model because, historically, income and race are highly correlated, and making assumptions based on a single correlated variable builds in a huge bias and produces very inaccurate results.
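
A small simulation (with invented numbers, not Optum’s data) shows how a spending proxy misranks true need when one group faces barriers to care:

```python
# Hypothetical simulation of proxy-label bias: both groups have the same
# true medical need, but group B spends less per unit of need due to
# access barriers. Ranking patients by spending then under-refers group B.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
need = rng.gamma(2.0, 1.0, n)                  # true need, identical across groups
group_b = rng.integers(0, 2, n).astype(bool)
spending = need * np.where(group_b, 0.6, 1.0)  # group B spends ~40% less at equal need

# Policy: refer the top 10% by spending for extra care.
referred = spending >= np.quantile(spending, 0.9)

print("referral rate, group A:", referred[~group_b].mean())
print("referral rate, group B:", referred[group_b].mean())         # far lower
print("avg need of referred A:", need[referred & ~group_b].mean())
print("avg need of referred B:", need[referred & group_b].mean())  # far higher
```

Group B patients must be far sicker to clear the same spending cutoff, which mirrors what the researchers found.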

(Image: cosmetic ads shown to women on Facebook, reflecting the stereotype that women place high importance on physical appearance.)

Biased Advertisements

Human bias can also show up in tech platforms, and one particularly clear case is advertising.

Many social media platforms and websites track user data and later use it to train models, so any bias in that data carries over into the resulting machine learning models.

As recently as 2019, Facebook allowed advertisers’ A.I. software to deliberately target audiences on the basis of gender, race, and religion. Women were targeted with job advertisements for secretarial or educational work, while other job ads, such as taxi driver roles, were mostly shown to men, in particular men from minority groups.

Understandably, people were outraged and critiqued Facebook’s questionable decision, and due to the backlash, Facebook has since altered its policy to no longer allow employers to target advertisements by factors such as age, gender, or race.

A Chatbot Shows the Consequences of Negative Human Influence

In March 2016, Microsoft’s Twitter chatbot, Tay, revealed how an A.I. can be trained to be very biased. Tay’s purpose was to engage people in dialogue through tweets and direct messages while playing the persona of a teenage girl. The goal was to release Tay to the internet and let the bot discover patterns of language through its conversations, which it would then use in future ones. Eventually, the creators hoped, Tay would sound like just another person on the internet.

In the beginning, Tay engaged harmlessly with her followers, making comedic, innocent jokes. Within a few hours, that whole demeanour changed when Tay started tweeting extremely offensive things, including: “I f@#%&*# hate feminists and they should all die and burn in hell”.

Sixteen hours later, Tay had tweeted over 95,000 times, and the majority of those tweets were abusive and highly offensive. As Tay went viral and outrage spread, Microsoft had no choice but to deactivate the account.

What was thought to be a harmless and fun experiment went wrong in many different ways. Is the internet really that racist, misogynistic, and anti-Semitic?

After investigation, it was revealed that 4chan, a troll platform, had shared a link to Tay’s account and encouraged users to flood the bot with racist, misogynistic, and anti-Semitic tweets.

The Tay experiment may not have fulfilled Microsoft’s goals, but the failed attempt goes to show how easily an A.I. can be trained to exhibit bias in its decision making.

Can A.I. be unbiased?

Technically, an A.I. model can be unbiased; it depends on the quality of its input data. If the dataset is cleaned of conscious and unconscious assumptions about race, gender, religion, age, and other attributes, you can have a model that makes unbiased, data-driven choices.

In the real world, though, we shouldn’t expect A.I. to ever be completely unbiased. A model is only as good as its data, and the creators are the ones who gather and generate that data. There is a large variety of human biases, and as society progresses, new biases emerge. Additionally, some bias is unavoidable and goes unnoticed. Since a completely unbiased human brain is impossible, a completely unbiased A.I. model is very unlikely as well.
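
One reason the bias is so hard to remove, sketched here with made-up data: even if a sensitive attribute is deleted from the dataset, a correlated proxy (like the income-race correlation mentioned earlier) can carry the bias back in.

```python
# Hypothetical sketch: dropping the protected column is not enough.
# 'income' is correlated with the removed 'group' attribute, so a model
# trained without 'group' still treats the two groups differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
group = rng.integers(0, 2, n)
income = rng.normal(50 - 15 * group, 5, n)  # proxy: strongly tracks group
skill = rng.normal(0, 1, n)

# Biased historical labels, influenced by group membership.
label = (skill - 0.7 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the group column: only skill and income remain.
X = np.column_stack([skill, income])
clf = LogisticRegression().fit(X, label)
pred = clf.predict(X)

print("positive rate, group 0:", pred[group == 0].mean())
print("positive rate, group 1:", pred[group == 1].mean())  # still much lower
```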

Even though we cannot truly have an unbiased A.I., there are ways that companies can minimize bias within an A.I. system:

Source: McKinsey & Company
  1. Understanding the model and data to observe where there is a high risk of bias.
  2. Creating a process that consists of technical, operational, and organizational strategies to mitigate bias:
  • Technical strategy: tools that identify potential sources of bias and reveal the traits in the dataset that affect the accuracy of the model (a minimal check is sketched after this list).
  • Operational strategy: improving data-collection processes, for example by using third-party auditors.
  • Organizational strategy: fostering an environment with transparent processes and metrics that are presented without favour toward any group.
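
As a taste of what such a technical check might look like, here is a minimal sketch (hypothetical model outputs, invented numbers) that compares a model’s selection rates across groups:

```python
# Minimal sketch of a bias check on hypothetical model decisions:
# compare selection rates across groups. A disparate-impact ratio far
# below 1.0 flags a high risk of bias worth investigating.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 2000)   # 0 = group A, 1 = group B
# Stand-in for a trained model's yes/no decisions on a held-out set.
selected = rng.random(2000) < np.where(group == 0, 0.30, 0.18)

rate_a = selected[group == 0].mean()
rate_b = selected[group == 1].mean()
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"disparate impact ratio (B/A): {rate_b / rate_a:.2f}")
# A common rule of thumb treats a ratio under 0.8 as a red flag.
```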

3. While identifying biases within training data, consider how human-driven processes could be improved as well. Bias introduced by a model’s creators during the building process is often ignored or goes unnoticed. Model evaluation can highlight these unnoticed biases, and companies can use that knowledge to understand where the biases originate. Through changes in training, process design, and culture, the source of the bias can then be reduced.

4. Don’t use machine learning models for every task. Decide when automation is better, and when humans should be involved.

5. Invest in research to minimize biases within datasets. It is effective for companies to hire ethicists, social scientists, and experts who understand the specific application area, so that biases can be recognized and mitigation strategies developed.

6. Diversify the A.I. community to make biases easier to identify. A team made up of people from a variety of backgrounds can recognize bias issues that a homogeneous team may not notice. Essentially, a diversified A.I. team can help mitigate unwanted and overlooked biases.

Action Must Be Taken

Artificial intelligence is a powerful tool, but only if utilized correctly. Companies need to pay more attention to bias within their A.I. software, or it will do more harm than good. If biases are not mitigated, hurtful results will occur and A.I. will be ineffective, the opposite of its purpose in the first place. By taking steps to avoid bias during a model’s training process, companies can prevent the harmful outputs that bias causes, and A.I. software can be utilized to its full potential.

Contact me for any inquiries 🚀

Hi, I’m Ashley, a 16-year-old coding nerd and A.I. enthusiast!

I hope you enjoyed reading my article, and if you did, feel free to check out some of my other pieces on Medium :)

Articles you will like if you read this one:

💫 How I Made A.I. To Detect Rotten Produce Using a CNN

💫 Detecting Pneumonia Using CNNs In TensorFlow

💫 MNIST Digit Classification In PyTorch

💫 Spotify Always Knows Our Music Taste

If you have any questions, would like to learn more about me, or want resources for anything A.I. or programming related, you can contact me by:

💫 Email: ashleycinquires@gmail.com

💫 GitHub
