Does Artificial Intelligence Discriminate? Is AI Biased?

Lakshmi Prakash
Design and Development
6 min read · Feb 28, 2022

As a feminist and a psychologist, I often try to break gender-based stereotypes in real life as much as I can. It’s not just women who have emotions, and it’s not that men have no feelings or emotional needs at all. It’s perfectly normal for a man to have feelings, to cry, to want to be pampered like a child occasionally, to fail in his career sometimes, to feel offended and hurt, to want to speak to a counsellor, and so on. Just as it’s perfectly normal for a man to like, love, and enjoy things that society deems “feminine”, like listening to melodious, soft music over rock, finding flowers attractive, or preferring soft toys over sports and video games as a child. If you think about it, you’d know that most men are very uncomfortable with the idea of stepping outside what’s considered stereotypically masculine, and that negatively impacts their mental health and self-esteem. As much as you’d wish this were not the case, it is one bitter truth, and in many cases, they’re not even aware of it.

Does Artificial Intelligence encourage gender bias?

That’s one of the first things I noticed when I got interested in products based on AI. When it comes to voice-based assistants, why are they almost always female by default? Sophia, the first robot to be granted citizenship, is female. Alexa, the voice-based virtual assistant from Amazon, is female. Apple’s Siri is, or at least was, female by default in most regions. Google Assistant is female by default. Amelia by IPsoft is female. Open a website and look for the option to raise a complaint, and if there’s a chatbot, most likely it will be female.

(https://edition.cnn.com/2021/03/31/tech/siri-voice-female-default/index.html)

The problem with these virtual agents being “female by default” is that it reinforces gender stereotypes across the world: that serving professions are best done by women, like the housewife stereotype, the doting mother, the caregiver; that empathy is primarily a feminine quality, that only women can empathize when you need to vent or express your anger as an upset customer; that it’s in the nature of women to deal with any kind of complaint, be it a valid point stated respectfully, an unfair demand, or an aggressive outburst, and still stay devoted and solve the problem without acting out, because that’s what women are expected to do.

Image: A creator of Sophia, the first robot citizen, interacts with her.

Is gender stereotyping the only problem with artificial intelligence?

It’s believed that some, if not all, companies are taking the issue of gender bias seriously and trying to resolve it, either by taking the gender-neutral path or by offering a choice of voices for voice-based assistants. But is gender stereotyping the only problem, the only form of discrimination, in artificial intelligence?

Discrimination in Healthcare:

“Oh! Come on! This is not that big a deal,” you might want to say, but discrimination can be a serious problem; it can even cost lives. In a 2019 study, it was found that an algorithm widely used in U.S. hospitals wrongly judged that white patients would need more healthcare than equally sick patients of colour, largely because it used past healthcare costs as a proxy for health needs. The system was heavily biased against people of colour. Imagine an artificial intelligence that doctors and healthcare professionals rely on concluding that your needs are not as important because of your race.

Racial discrimination by artificial intelligence

Discrimination in Risk Assessments in the Courtroom:

“A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.” That passage is from ProPublica’s 2016 “Machine Bias” investigation. Artificial intelligence serves various purposes and is used in the courtroom as well. Imagine being accused of a crime you did not commit, or being over-punished, or facing false charges because of your race.

“We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.”
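
For context, that 20 percent figure is what’s known as the precision of the prediction: of everyone flagged as likely to commit a violent crime, the fraction who actually were charged with one later. Here is a minimal sketch of the arithmetic, with made-up counts rather than ProPublica’s raw data:

```python
# What "only 20 percent of the people predicted to commit violent crimes
# actually went on to do so" means as a metric. The counts below are
# illustrative, not ProPublica's raw figures.
flagged_high_risk = 1000   # hypothetical: people the tool predicted would commit violent crimes
later_charged = 200        # hypothetical: of those, people actually charged with a violent crime

precision = later_charged / flagged_high_risk
print(f"Precision of the violent-crime prediction: {precision:.0%}")  # -> 20%
```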

Gender Discrimination and Racial Discrimination in Transport:

A research paper published in 2021 showed that the popular ride-booking app Uber exhibits both gender and racial discrimination. “Non-compliance with stereotypes is at the root of discrimination in some cases as women passengers have complained about having lower ratings due to not engaging in conversation. This lower score as a customer means lower prioritization of the ride to drivers, therefore having to wait for longer for rides. A similar phenomenon has been called out by female drivers, who have been given very low scores after rejecting unwanted advances or flirtatious comments from male passengers for not accepting their friendship requests on Facebook.”

Islamophobia in Bots:

This report by WION showed that some bots are indeed Islamophobic: artificial intelligence was linking violence and terrorism to the word “Muslim”. The report shows that when asked to complete sentences, whenever these bots detected the word “Muslim”, the phrases and words they produced mostly showed Muslims in a bad light, as if Muslims were evil by nature.

What causes discrimination in artificial intelligence, though?

A bot, or for that matter anything created by humans, ends up reflecting its creators: their intentions as well as their mistakes, whether intentional or unintentional. In artificial intelligence, everything runs on algorithms and data. A bot or virtual agent learns from whatever information you use to train it, and it ‘thinks’ and judges by itself based on that data and the algorithm behind it.

Selection Bias: Big data might not include facts and statistics related to minorities or the oppressed, and a model trained on it can end up generalizing from skewed data. If the data largely reflects male perspectives, for instance, women get left out. The data might be non-inclusive or very selective, or it might carry over existing biases from history, leading to a feedback loop, as the toy sketch below illustrates.
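
As a minimal illustration, here is a toy sketch of how a model that merely counts word co-occurrences in a skewed corpus reproduces that skew as its “completion”. The corpus, the group names, and the helper function are all invented for this example; no real product works exactly this way.

```python
# Minimal sketch of selection bias: a toy "completion" model that only
# learns co-occurrence counts from a skewed corpus. Everything here is
# invented for illustration.
from collections import Counter

# Hypothetical corpus in which one group mostly appears in negative contexts.
corpus = [
    "group_x caused trouble", "group_x caused trouble",
    "group_x helped neighbours",
    "group_y helped neighbours", "group_y helped neighbours",
    "group_y caused trouble",
]

def next_word_after(word: str) -> str:
    """Return the most common word following `word`: a crude 'completion'."""
    followers = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens[:-1]):
            if tok == word:
                followers[tokens[i + 1]] += 1
    return followers.most_common(1)[0][0]

print(next_word_after("group_x"))  # -> "caused": the skew becomes the model's view
print(next_word_after("group_y"))  # -> "helped"
```

The model never decides to be unfair; it simply echoes whichever contexts each group appeared in most often, which is also how the feedback loop sustains itself.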

Limitations of AI leading to Bias: We must also keep in mind that artificial intelligence itself is still very young and highly elitist. It can reach only select populations, so it may never see the vast majorities who do not fall into the privileged bracket, and the data it collects, beyond the data it was originally trained on, can also be very, very selective. Consider this example: how many Indians live in the US? Of those, how many get the healthcare they need and deserve? That is a really small sample to represent Indians across the world. If this is all the information used to judge the health needs and risks of Indians as a whole, how reliable would that be? The small simulation below makes the problem concrete.
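
Here is a small simulation of that sampling problem. All rates and numbers are made up; the point is only that when access to the system correlates with health, a model trained on insiders sees a very different picture from reality.

```python
# Toy simulation: estimating a health-risk rate from only the people who
# are inside a healthcare system. All numbers are illustrative.
import random

random.seed(0)

population = []
for _ in range(100_000):
    has_access = random.random() < 0.2    # assume only 20% are in the system
    risk = 0.15 if has_access else 0.40   # assume access correlates with lower risk
    at_risk = random.random() < risk
    population.append((has_access, at_risk))

true_rate = sum(at_risk for _, at_risk in population) / len(population)

# A model trained only on people inside the system sees a skewed slice.
insiders = [at_risk for has_access, at_risk in population if has_access]
observed_rate = sum(insiders) / len(insiders)

print(f"True at-risk rate in the population: {true_rate:.0%}")     # ~35%
print(f"Rate the model actually observes:    {observed_rate:.0%}")  # ~15%
```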

We do not know whether the creators of all these products and services were themselves biased to begin with, or whether it was a conscious choice to cater only to select populations so as to appeal to their target audience and make profits. The bias could be intentional or unintentional, but either way, it unfortunately costs the victims a lot. And that’s not the purpose of creating “intelligence” and expecting it to help, anyway. Let’s hope that these problems are taken seriously, so that artificial intelligence can be free of discrimination and bias, be it based on gender, race, religion, or caste. Only then will people be able to trust artificial intelligence, and only then can we as humanity look forward to growth.


Lakshmi Prakash

A conversation designer and writer interested in technology, mental health, gender equality, behavioral sciences, and more.