(Wo)man vs Machine
Addressing the Gender Bias in AI
I understand the confusion people may have when we say that Artificial Intelligence (AI) is already an integral part of our everyday life. While we may not have robot chefs or self-driving cars, that does not mean AI is not affecting us every day.
When some of my friends come over and see my Google Home, they always comment that they would never want something constantly listening to them; they don’t seem to grasp that the technology they are worried about is already here, affecting us in ways we might not even imagine.
At its core, what is AI?
AI often still feels like the stuff of science fiction, but at its core, according to Francois Chollet, an AI researcher at Google, it is tied to a system’s ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.
Siri, Alexa, and Google Home are all considered narrow AI for their ability to recognise speech and execute simple commands they have been taught or have learned without being explicitly programmed. Smaller forms of AI, or algorithms, have already filtered into our everyday lives, whether or not we have consented to them.
There are algorithms that recommend products based on your previous purchases, help radiologists spot potential tumours in X-rays, and decide whether you are approved for credit cards or loans. While these algorithms may make your life, or the life of a customer service representative, easier, they become problematic when they are treated as an almighty entity with all the answers.
At its core AI is tied to a system’s ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.
The Crossover between AI and Bias
More and more frequently, companies are enlisting the help of AI to improve efficiency and automate repetitive work. According to a study by The Inclusion Initiative at the London School of Economics, AI-assisted hiring performs as well as or better than human hiring managers. On average, an HR recruiter looks at a resume for 7 seconds, and AI can help make sure no great candidates are falling through the cracks. The LSE study found that AI improves hiring efficiency by being faster, increasing the fill rate for open positions, and recommending candidates who are more likely to be hired after an interview.
Yet Amazon’s AI recruiting software came under fire when it ended up widening those cracks: it systematically downgraded the resumes of women applying for its tech roles.
But what do we do when we do not even know that we are being excluded by an algorithm? Who do you raise that issue with? What do you do if the company does not realise its algorithm is biased?
Whose fault is it: Amazon’s or society’s? The answer is both.
Society has constantly let women down by integrating gender biases into our everyday lives. From a young age, girls are told they are not good at maths or science, leading to a small pool of female talent in STEM fields and, in turn, an underrepresentation of women in the technology sector.
Currently, it is estimated that women make up less than 1% of applicants for technical jobs in AI and data science in Silicon Valley, and only 12% of AI researchers. And when Silicon Valley is known for its toxic culture towards women, can you blame them for not wanting to walk into an environment where they know they will be discriminated against no matter how hard they work?
Except now we know what is at stake: the lack of women in the tech industry is one of the main driving factors behind the harmful gender biases in today’s technology. But it does not have to be the case in the future.
Even though it seems that AI has a mind of its own, it reaches its conclusions based on the data it is fed. Amazon fed its recruitment software 10 years of data on people its Human Resources teams had hired. Unfortunately, that sample was not representative of the wider applicant population: the overwhelming majority of those successful hires were men.
Thus, the AI penalised anything on a resume that signalled a female candidate, such as “women’s chess club captain” or a degree from a historically all-women’s college. The very tool that was meant to overcome human error began to reinforce the idea that male candidates were always preferable. Even our language can carry gender bias, as when we describe women as nurturing and men as ambitious: the Amazon recruiting tool ended up favouring candidates whose resumes used verbs more common on male resumes, such as “executed” or “captured”.
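To make that mechanism concrete, here is a deliberately tiny, made-up sketch (not Amazon’s actual system or data): when the “hired” labels in the training data skew overwhelmingly towards one group, even a simple text classifier learns to reward the words associated with that group and penalise the words associated with the other.

```python
# Toy illustration only: synthetic "resumes" with hiring labels that skew male.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch captured market share",        # hired
    "executed migration led infrastructure team",           # hired
    "captured requirements executed delivery roadmap",      # hired
    "women's chess club captain built analytics pipeline",  # not hired
    "women's college graduate built recommendation engine", # not hired
    "collaborative mentor built analytics dashboards",      # not hired
]
hired = [1, 1, 1, 0, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for each word: gendered (and even neutral) terms that
# happen to correlate with the under-represented group pick up negative weights
# purely because of the skewed labels.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
for term in ["executed", "captured", "women", "built"]:
    print(f"{term:>10}: {weights.get(term, 0):+.2f}")
```

The point of the sketch is that nothing malicious is coded anywhere; the skew comes entirely from the historical labels the model is given.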
AI holds the promise of enabling better decisions, yet many academics and practitioners have warned about the biases and inaccuracies that can be embedded in this technology. To combat biases and pitfalls in social data, you must identify where the biases come from, how they manifest, and how they will affect the validity of your algorithm.
Olteanu, Castillo, Diaz, and Kiciman identified six general types of bias and other issues that can be introduced while collecting data. Through that lens, the Amazon AI recruiting software ended up exhibiting population bias, a systematic distortion in demographics or other user characteristics between the population represented in a dataset and the target population, as well as external bias, which stems from factors outside the algorithm, including socioeconomic status, ideological, religious or political leaning, education, personality, culture, social pressure, privacy concerns, and external events.
This shows us that algorithms cannot be exempt from the same level of regulation and transparency that other products are required to meet.
To combat these pitfalls, companies must critically examine their datasets and models to ensure biases are being addressed. In addition, the data behind algorithms must be broad and cut across different demographic, cultural, and behavioural contexts. Finally, transparency mechanisms must be introduced so the data can be audited at the source. Had these recommendations been implemented for the Amazon recruitment software, it could have successfully screened candidates and identified the amazing women in tech who already exist.
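To give a sense of what such an audit might look like in practice, here is a minimal Python sketch of a population-bias check; the field name, figures, and threshold are illustrative assumptions, not real hiring or labour-market data.

```python
# A simple population-bias check: compare the demographic make-up of the training
# data against the population the model is meant to serve, and flag groups that
# are badly under-represented.
from collections import Counter

def audit_representation(records, target_shares, threshold=0.8):
    """Flag any group whose share in the data falls below `threshold` times
    its share in the target population."""
    counts = Counter(r["gender"] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, target in target_shares.items():
        observed = counts.get(group, 0) / total
        flags[group] = {
            "observed_share": round(observed, 2),
            "target_share": target,
            "under_represented": observed < threshold * target,
        }
    return flags

# Illustrative historical-hire records and an assumed 50/50 applicant population.
records = [{"gender": "male"}] * 90 + [{"gender": "female"}] * 10
print(audit_representation(records, {"male": 0.5, "female": 0.5}))
```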
A 2020 study showed that 83% of U.S. employers were relying on some form of AI technology, with 44% of them using it to identify the best candidates based on publicly available data, like social media profiles. If we don’t address the biases built into AI, we will simply end up hardwiring the systemic issues of our past into our future.
Yet the outcomes of AI do not have to have a devastating effect on the future. A Harvard Business Review article identified four practices that can be built into machine learning systems to prevent and avoid gender bias.
Just as with sexual harassment, the causes of and solutions to AI bias are not black and white, and policies for both must be effective and fair for everyone. To combat gender bias in our applications, we have to increase the diversity of our data overall.
By ensuring your machine learning algorithms have diversity in their training samples, you encourage the AI to combat different biases.
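As a rough sketch of what that can mean in practice, assuming a simple tabular dataset with a recorded gender field, smaller groups can be oversampled so the training data reflects a more balanced sample; the group names and sizes below are illustrative.

```python
# Oversample every group up to the size of the largest group so no group is
# drowned out during training.
import random

def rebalance_by_group(records, group_key="gender", seed=0):
    random.seed(seed)
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record)
    largest = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up smaller groups by sampling (with replacement) from their own records.
        balanced.extend(random.choices(members, k=largest - len(members)))
    random.shuffle(balanced)
    return balanced

records = [{"gender": "male", "hired": 1}] * 90 + [{"gender": "female", "hired": 1}] * 10
balanced = rebalance_by_group(records)
print(sum(1 for r in balanced if r["gender"] == "female"), "of", len(balanced))
```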
Another way to decrease bias is to measure the model’s accuracy separately for different demographic categories, to understand when one group is being marginalised.
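A minimal sketch of that kind of check, using made-up predictions, might look like this; a single headline accuracy number can hide a large gap between groups.

```python
# Report overall accuracy alongside accuracy broken down by demographic group.
def accuracy_by_group(y_true, y_pred, groups):
    per_group = {}
    for label, pred, group in zip(y_true, y_pred, groups):
        correct, total = per_group.get(group, (0, 0))
        per_group[group] = (correct + (label == pred), total + 1)
    overall = sum(c for c, _ in per_group.values()) / sum(t for _, t in per_group.values())
    return overall, {g: c / t for g, (c, t) in per_group.items()}

# Illustrative predictions: the model is right far more often for one group.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 1]
groups = ["male", "male", "male", "male", "female", "female", "female", "female"]
print(accuracy_by_group(y_true, y_pred, groups))
```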
Sometimes you also have to recognise that a mistake was made and account for it. Beyond that, you can “apply modern machine learning de-biasing techniques that offer ways to penalise not just for errors in recognising the primary variable, but also additional penalties for producing unfairness”.
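One hedged sketch of what such a penalty could look like, using an illustrative demographic-parity term rather than any specific published method, is below; the function names, weighting, and data are all assumptions for the sake of the example.

```python
# The training objective is the usual prediction error plus an extra term for the
# gap in average predicted scores between groups.
import numpy as np

def fairness_penalised_loss(y_true, y_prob, groups, lam=1.0):
    """Binary cross-entropy plus a simple demographic-parity penalty.
    lam controls how strongly the gap between groups is punished."""
    y_true, y_prob = np.asarray(y_true, float), np.asarray(y_prob, float)
    groups = np.asarray(groups)
    eps = 1e-9
    error = -np.mean(y_true * np.log(y_prob + eps) + (1 - y_true) * np.log(1 - y_prob + eps))
    group_means = [y_prob[groups == g].mean() for g in np.unique(groups)]
    unfairness = max(group_means) - min(group_means)
    return error + lam * unfairness

y_true = [1, 0, 1, 0]
y_prob = [0.9, 0.2, 0.4, 0.1]   # the model scores one group much lower on average
groups = ["male", "male", "female", "female"]
print(fairness_penalised_loss(y_true, y_prob, groups, lam=2.0))
```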
Here at Metta Space, we do our best to ensure the data we gather reflects a broad representation of society, allowing survey participants to identify with a variety of genders and sexual orientations so that we can understand how sexual harassment affects different groups.
By harnessing the power of an NLP algorithm, built on our diverse data collection and research, Metta Space will be able to help companies identify and tackle sexual harassment. Diversity should continue to be celebrated and remain at the centre of our policy initiatives moving forward, whether in AI or in sexual harassment monitoring.
Written By: Polly Kyle, Research Consultant at Metta Space