The Vital Role of Women in Ethical AI Development

Beatrice Murage
Women in Technology
5 min read · May 7, 2024
Photo by julien Tromeur on Unsplash

In this digital age, Artificial Intelligence has slowly become everyone’s guide to success: we set aside our own intelligence and borrow that of these man-made systems to navigate our lives. Worried about an upcoming interview? Ask AI. Want to try a new recipe? Ask AI. Planning to send a love note to your partner? Ask AI. But who is managing the AI? We must pull back the curtain on a truth that has been demonstrated beyond doubt: there is a lack of diversity behind these technological advancements. From biased hiring algorithms to discriminatory healthcare tools, homogeneous development teams are building systems with a built-in bias against women and people with darker skin.

The Unseen Biases

Unbeknownst to many, Artificial Intelligence is just as prone to racial and gender bias as the society that builds it. Existing gender and racial prejudices seep into the training data used to build AI systems, which then produce skewed results that are rarely in favor of marginalized communities. A study by Joy Buolamwini and Timnit Gebru revealed that commercial facial analysis systems from major tech companies produced far higher error rates for darker-skinned women, and a follow-up audit found similar failures in Amazon’s system. Research has also shown that speech recognition systems have higher error rates for women’s voices than for men’s, which inevitably leads to frustration and exclusion for the people affected.

Take, for example, Amazon’s AI recruiting tool, which was trained on the resumes sent to the company over ten years. Because tech is a male-dominated industry, most of those resumes came from men, and the tool learned to favor male applicants over female applicants. Amazon later tried to correct the problem, but eventually gave up on making the tool gender-neutral and scrapped it. This is a telling example of how women and other underrepresented groups are being pushed aside in the growing field of AI.

AI systems learn from the data they are fed: if the data used to train a facial recognition system consists mainly of white men, the system will produce inaccurate results for dark-skinned women. A UNESCO study also revealed that Large Language Models associate women with words such as “home” and “children”, while men are associated with “career” and “executive”. This gender bias originates in training data that portrays women as belonging in the household, taking care of the kids, while the men are out working in executive offices.
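To make that mechanism concrete, here is a minimal sketch in Python, assuming a generic classifier and entirely synthetic data (the two groups, their features, and the 90/10 split are invented for illustration, not drawn from any real system): a model trained on data dominated by one group scores noticeably worse on the under-represented group.

```python
# A minimal, hypothetical sketch (synthetic data, not from any cited study) of how
# under-representation in training data translates into unequal accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic samples for one group; the feature-label relationship depends on
    # `shift`, so a model fitted mostly to one group misses the other group's pattern.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training set dominated by group A (90%) with only a small slice of group B (10%).
X_a, y_a = make_group(900, shift=0.0)
X_b, y_b = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluating each group separately exposes the gap the skewed training data created.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The same dynamic, at a far larger scale and with far higher stakes, is what the facial recognition and language model studies described here reflect.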

When the LLMs were asked to generate narratives about people of different ethnicities, racial bias also emerged: British men were given occupations such as “driver”, “doctor”, and “bank clerk”, while Zulu women were assigned roles like “servant” and “housekeeper”. In 2021, UNESCO adopted a global normative framework, the Recommendation on the Ethics of Artificial Intelligence, which calls for specific measures such as investing in targeted programmes in marginalized communities to increase opportunities for girls and women in STEM and ICT fields. Large tech companies like IBM have endorsed it and are working to improve their AI tools accordingly. UNESCO has also launched Women4EthicalAI, a collaborative platform that helps governments and companies ensure that women are not excluded from the development and deployment of AI systems.

Facial recognition technology, often praised for improving security, has also been caught exhibiting inherent biases. Joy Buolamwini, founder of the Algorithmic Justice League, began researching this area because of her own personal experiences. While studying at MIT, she discovered that some facial recognition software could not detect her face until she put on a white mask. Moreover, systems from major companies failed to recognize Oprah Winfrey, Serena Williams, and Michelle Obama as female, identifying them as male instead. This sparked a larger conversation about the biases in AI systems being sold to governments and law enforcement agencies.

Confronting Bias Head-On

The consequences of AI bias extend far beyond inconvenience: they can exacerbate systemic inequalities, further marginalizing already vulnerable populations. For example, a biased facial recognition system may falsely identify a suspect and contribute to a wrongful conviction. Police may end up profiling the wrong individual because the AI tools cannot reliably distinguish between the faces of Black men. In healthcare, AI tools may misdiagnose patients from minority communities, deepening the racial disparities that already exist.

The best way to deal with this issue is to acknowledge that the bias exists. Once organizations are willing to act on that gap, they must prioritize diversity and inclusion: the teams that design and train the algorithms should include people with diverse backgrounds, perspectives, and expertise. The algorithms themselves must be tested frequently and thoroughly, and audited to identify discriminatory patterns, as sketched below. A broader, more diverse set of perspectives ensures a more thorough analysis and makes it far more likely that the needs of all stakeholders are met. Less diverse teams are more likely to have blind spots and unconscious biases that negatively influence the design of AI systems; a genuinely diverse and inclusive team is therefore better placed to build fair ones.
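As one concrete illustration of what such an audit can look like, here is a minimal sketch in Python with invented outcomes, predictions, and group labels (none of it comes from a real system): it compares the selection rate and false positive rate across two groups. Dedicated open-source toolkits such as IBM’s AI Fairness 360 and Fairlearn offer far more thorough versions of this kind of check.

```python
# Illustrative sketch of a simple fairness audit (hypothetical data, not any
# specific company's tool): compare selection rates and false positive rates
# of a model's decisions across demographic groups.
import numpy as np

# y_true: actual outcomes, y_pred: model decisions, group: demographic label.
# These arrays are invented purely for demonstration.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    selection_rate = y_pred[mask].mean()          # share of group g given a positive decision
    negatives = mask & (y_true == 0)              # members of group g with a true negative outcome
    fpr = y_pred[negatives].mean() if negatives.any() else float("nan")
    print(f"group {g}: selection rate={selection_rate:.2f}, false positive rate={fpr:.2f}")

# A large gap between groups in selection rate or false positive rate is a signal
# that the model needs to be investigated, retrained, or retired.
```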


Conclusion: Towards a More Ethical AI Future

The journey to eliminate bias in AI is filled with challenges, but it is a necessary one if we want a more equitable society. By identifying, confronting, and correcting these biases, we can pave the way for AI systems that serve the greater good of everyone without propagating discrimination and prejudice. Through an unwavering dedication to conscious inclusion, we can usher in a new era of Artificial Intelligence that treats every gender and race fairly and leaves no one out.

We should also aim for gender balance in the technology workforce so that the data available for training AI models better represents the whole population. It is no easy feat, but we must embark on this journey so that the tech industry delivers products that benefit all members of society equitably.


Beatrice Murage
Women in Technology

Software Engineer with an interest in writing about technology practices