7 Ways to Make AI Work for Women

Imani Razat
Published in FemTech Weekly
7 min read · Mar 16, 2023


[Image: illustrated women with AI facial recognition brackets, data points, and a robot head. Illustration by Imani Razat with DALL·E Mini.]

With AI technology becoming an ever-larger part of the conversation across industries, from technology to healthcare, there's a real sense of enthusiasm (bravo that AI can aid in detecting breast cancer) and equal parts sobering apprehension, the kind we find historically with the introduction of any new technology. In the 19th century, for example, with the advent of steam engines, "experts" feared that women's uteruses would fall out if they rode trains. AI systems, much like steam engines, hail from an industry dominated by men, and like the trains, everyone's not invited on board, at least initially. Women make up only 26% of the data and AI workforce, and while there is reportedly an increase in women-founded AI startups, they comprise only about 15% of funded ventures. With this in mind, it makes sense that women are more concerned than men about the use of AI in daily life. Is it even built for us? I spoke with my husband Kaza, an AI technologist at AWS who has worked in responsible AI development and usage for the last decade, and asked what we should be doing to make sure that AI works for women. This is what he said:

1. Address biases in AI labeling

There's the biological aspect of womanhood, the physiological makeup, and there's also the societal gender role. These are the two areas where AI can be biased against women. Bias against the female gender starts when societal bias is carried forward into the data, and then into the systems. Without people in Big Tech and at large universities conducting ethical AI research, thinking about these biases, and actively trying to identify, resolve, and mitigate them, these systems can produce biased outcomes. Of course, AI systems only make predictions, not decisions; the people using these systems must be aware of any bias in their predictions.

In terms of the biological aspect, there was a recent report of women's anatomy being classified as inappropriate by AI on social media. This type of bias against female anatomy in photographic images can be traced back to the systems that create and train algorithms. Large numbers of images are put in front of "labelers" or "annotators," humans tasked with labeling data as part of the AI training process.

These humans have cultural biases. As a result, if you're on Instagram, Pinterest, or any of these other systems, and you're a woman who posts a skin-heavy image of yourself, that image is likely to be flagged. Your account might ultimately be locked or banned, whereas the same thing will not happen to similar images of men. Even with an equal number of men and women doing the labeling, think about it: if an image of a topless woman and then one of a man are presented, many cultures would flag only the female image as inappropriate.
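To make that concrete, here is a minimal sketch of the kind of audit a platform could run on its own labeling pipeline. Everything here is hypothetical: the subject_gender and flagged columns and the records themselves are invented for illustration. The point is simply that labeling disparities are measurable if you look.

```python
# A minimal sketch of auditing a labeling workforce for gender-skewed
# flagging. All records below are invented for illustration.
import pandas as pd

annotations = pd.DataFrame({
    "subject_gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    # 1 = annotator flagged the image as inappropriate, 0 = did not
    "flagged":        [1,   1,   1,   0,   0,   1,   0,   0],
})

# Flag rate per subject gender. On comparable content, a large gap
# suggests annotator bias is leaking into the training labels.
flag_rates = annotations.groupby("subject_gender")["flagged"].mean()
print(flag_rates)

gap = flag_rates.max() - flag_rates.min()
print(f"Flag-rate gap between groups: {gap:.2%}")
```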

2. Double-check bias in predictive algorithms

A few years ago, when the Apple Card, issued by Goldman Sachs, was introduced to the market, Steve Wozniak noticed that he was offered a credit limit substantially higher than his wife's. This was due to biased algorithms behind the scenes; it was not necessarily AI. Still, it's an important case, because AI is a series of predictive algorithms. Bank algorithms decide who gets loans and who gets credit; they're very much in the same realm, just not trained on as much data as a typical AI system. The point is, bias in AI is about bias in algorithms. Even when the bank accounts are the same and the applications come from the same household, the algorithms made a biased determination based on something they'd seen. It could have been gender, age, zip code, or education, and they decided that she was worthy of less credit than her husband.
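Here is a minimal sketch of the same idea applied to credit decisions: a disparate-outcome check over a handful of records. The numbers are invented for illustration and are not drawn from the Apple Card case.

```python
# A minimal sketch of a disparate-outcome audit on credit decisions.
# All data here is made up for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],
    "approved": [0, 1, 0, 1, 1, 0],
    "limit":    [2000, 5000, 3000, 9000, 12000, 4000],
})

# Approval rate and average credit limit by gender. Large gaps between
# otherwise-similar applicants are a red flag worth investigating.
audit = decisions.groupby("gender").agg(
    approval_rate=("approved", "mean"),
    avg_limit=("limit", "mean"),
)
print(audit)
```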

3. Don’t be afraid to use AI

The sobering part is that if you don't use the tech, if you don't contribute data, then it may not work as well for you. It's not going to hurt you, but it's not necessarily going to help you. It's going to work for other people quicker because they're contributing data. Many minorities and members of under-served communities have fears of data collection that are in some cases substantiated and in other cases not. If you're interested in fair AI predictions, you should consider contributing data, because AI predictions are unfair when there isn't enough data about a particular group. People who are willing to donate information are the ones who may benefit the most. The only way to combat bias in communities who are victims of it is to aid in the responsible collection of information so that these systems are more balanced. Otherwise, the system doesn't know who you are, and its errors will fall disproportionately on the groups that are not represented in the data used to make the AI smart.
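That claim about disproportionate errors is easy to demonstrate. Below is a minimal sketch, on purely synthetic data with hypothetical groups A and B, showing that a model trained mostly on one group tends to make more mistakes on the group it rarely saw.

```python
# A minimal sketch: a model's errors concentrate on the group that is
# under-represented in its training data. All data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 95% of examples come from group A, 5% from group B, and group B's
# feature distribution is shifted so it looks different to the model.
X, y = make_classification(n_samples=4000, n_features=8, random_state=0)
group = rng.choice(["A", "B"], size=len(y), p=[0.95, 0.05])
X[group == "B"] += 1.5

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy per group: the group the model rarely saw tends to fare worse.
for g in ["A", "B"]:
    mask = g_te == g
    print(f"group {g}: accuracy {model.score(X_te[mask], y_te[mask]):.2%} "
          f"on {mask.sum()} examples")
```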

4. Remedy imbalances in AI data

There was a case where Amazon had to scrap an AI system that showed bias against women in hiring. They were using AI to automatically scan and read resumes, and because it's a male-dominated industry, the pool of coveted employees skewed male. If you train an AI to determine what the best engineering candidate looks like based on those resumes and that employee base, the data is going to be unbalanced; it's going to skew male. You can fix this by injecting more data. One method is called synthetic minority over-sampling (SMOTE), and it's almost like affirmative action for data. Say I have data that's 75% male and 25% female: I can cut male examples, synthesize additional female examples, or do both, until the two sides meet at 50/50, balancing out the data. I'm sure it's not as simplistic as this for very complex systems, but it at least points toward one way of achieving more balanced inputs, which leads to fairer predictions.
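For the technically curious, here is a minimal sketch of that rebalancing using the open-source imbalanced-learn library on made-up data. Note that SMOTE synthesizes new minority-class examples rather than cutting majority ones; the 75/25 split below is invented to mirror Kaza's example.

```python
# A minimal sketch of SMOTE (synthetic minority over-sampling) using
# the open-source imbalanced-learn library on made-up data.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy dataset skewed roughly 75/25, standing in for a male-heavy
# resume corpus. Features are synthetic numbers, not real resumes.
X, y = make_classification(
    n_samples=1000, n_features=10, weights=[0.75, 0.25], random_state=42
)
print("Before:", Counter(y))  # roughly {0: 750, 1: 250}

# SMOTE interpolates new synthetic minority-class examples between
# existing ones until the classes are balanced.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X, y)
print("After: ", Counter(y_bal))  # roughly {0: 750, 1: 750}
```

Under-sampling the majority class is the mirror-image approach, and real pipelines often combine both.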

5. Educate yourself about AI technology, and demand fairness

I think it's important to ask these questions. The more women and minorities who understand the space and can hold Big Tech accountable, the better. There are a ton of women doing just that: Timnit Gebru speaks out against bias in AI, and Joy Buolamwini amplifies the harm that comes from it. More women are coming out and asking tough questions. With people watching, the hope is that there will be more transparency about how these systems are trained. If people don't see that transparency, they should ask for it and hold companies accountable. We need to be asking:

  • Did you collect a balanced amount of data?
  • Did you have an equal representation of people within the labeling workforce?
  • Are the people who label this data fair?
  • Did they have enough men and women looking at the same things to help eliminate some of these biases?

If the answers are no, then perhaps you don't patronize those systems. You say, okay, this is not meeting the quality standards that we should set as a society.

6. Don't use AI blindly, and speak with your wallet

Before ChatGPT, everyday people were passive consumers of AI. There's AI on all the social media sites, AI in web search, AI in banking, AI even at the supermarket checkout counter; it's all over the place. Now that ChatGPT is here, there are choices on the market, and the consumer can speak with their wallet and decide. I think companies are terrified of the backlash and negative PR that come from being seen as unfair, so it's in their best interest to make this technology fair. Sometimes they don't fully know how unfair the systems are until people expose it. We must interrupt conscious and unconscious bias where we see it. If you discover a bias in an AI system that you're using, speak up about it. Actively look for these biases, continue to use your voice, speak your truth, and get the message out there. Unfortunately, sometimes this costs people their jobs, but in the long run, history will look back and acknowledge the heroes who were brave enough to stand up for their principles.

7. Become an AI FemTech entrepreneur

There should simply be more women going into the AI space, developing it, understanding what it does, and using it to solve real-world problems. You can't expect men to develop women-centered AI, because they may not understand the issues. How can you solve a problem you don't even see, one that isn't even on your radar?

Imani Razat is a Seattle-based writer and communications consultant.

Disclaimer: Kaza Razat's opinions are his own and do not reflect those of his employer or his wife ;)
