Fixing Sexist AI

Step 3: Measurable results

Hello! I have arrived at the third-ish sprint of this project, which is hard to believe because it also coincides with the midpoint of the term. Meaning that I am supposed to be halfway-ish done with this project? Well, this project could go on for years, but generally, I am happy with where I am. I have learned new libraries, new techniques, and this past week, as I will discuss, was all about testing which things I really need to understand to move forward with this project, and which things only need a cliff-notes level of understanding. Here’s a quick run-down of what happened:

I figured out how to integrate Gensim and FastText

During the last post, I documented my process training my model using FastText, a library designed by Facebook to train word embeddings (which is only one of the many things it can do). It’s super easy to use. You just run it through the command line, and all you need is your input data. It outputs two files, but the one of interest is model.bin, a binary file containing the trained model and all of its vectors. The issue is that with FastText alone, I can only query vectors through the command line, which is really inconvenient.

This is where Gensim comes in. I found this and this to be helpful in my plight to connect Gensim and FastText. Essentially, Gensim makes it super easy to load a trained FastText model and get vectors out of it with dictionary-style lookups. It took me 5 minutes to implement, and it made my whole week better. Thanks, Gensim!
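For reference, the loading step boils down to something like this (assuming a reasonably recent Gensim, 3.8 or newer; the file path and the example word are just placeholders):

from gensim.models.fasttext import load_facebook_vectors

# model.bin is the binary file that fastText writes out, e.g. from
# something like: fasttext skipgram -input data.txt -output model
wv = load_facebook_vectors('model.bin')

print(wv['nurse'])                       # the raw embedding for one word
print(wv.most_similar('nurse', topn=5))  # quick sanity check on its neighbors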

I got a direct bias statistic for my original model

Once Gensim made it possible for me to easily get vectors out of my model, I could get to the interesting stuff: calculating how sexist my model was. Now, I know I have been fairly hand-wavey about this in my previous posts, but we’re going to go through how this actually works this time, and discuss each variable in the statistic.

Ok, so down below this paragraph, that’s how we calculate the degree to which an embedding contains a particular bias of our choice. This is from Man is to Computer Programmer as Woman is to Homemaker? by Bolukbasi et al. For this example, I’m going to stick with gender because, at least in English (and in many other languages), gendered pronouns come in strictly binary pairs (which is unfortunate in general, but turns out to be helpful in this one particular scenario). As an aside, it would be cool to do this project in another language, maybe one that handles gender pronouns in a very different way from English. ANYWAY, back to the algorithm.
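In symbols, the statistic is:

DirectBias_c = (1 / |N|) * Σ_{w ∈ N} |cos(w, g)|^c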

For this example, we’re going to take a list of 10 gender-neutral occupations and see how strongly each one leans along the gender direction.

  • N → The set of gender-neutral occupation words (e.g. nurse, doctor, programmer, hairstylist, plumber, etc.)
  • w → One member of N. This is a summation, or a loop if you’re thinking programmatically. Basically, we’re just going to look at one word at a time in the list of occupations (occupations[i], if you like).
  • g → The gender subspace. Ah yes, the elusive fox of the algorithm. Pick a list of opposing gender word pairs (e.g. man and woman, he and she, waitress and waiter, etc.) and subtract the male word from its female partner (or vice versa). Then, from that set of difference vectors, find the principal components and take the first one (the top eigenvector) as g. Sounds like linear algebra vomit, and it kind of is, but if you can learn the PCA function in the sklearn library, then you’ll be fine. (There’s a quick code sketch after the occupation list below.)
  • c → The strictness of the algorithm. This is a value between 0 and 1, and for the full year that I have worked on this project, and the four separate times I’ve calculated this statistic, I’ve always set it at 0.8. To be more specific, the closer c gets to 0, the stricter the measure: at c = 0, a word only counts as unbiased if it has no overlap with g at all, and even the tiniest projection onto the gender direction counts as full bias.

I did a very basic version of this algorithm for the first version of my project. I made my gender subspace with the words she, her, hers, he, him and his. And I chose a list of 10 occupations to dot with it:

occupations = ['doctor', 'nurse', 'actor', 'housekeeper', 'mechanic',
               'soldier', 'cashier', 'comedian', 'gynecologist', 'musician']
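Concretely, the calculation goes roughly like this (a minimal sketch, assuming wv is the Gensim vector object from earlier; the real script may differ in the details):

import numpy as np
from sklearn.decomposition import PCA

# The gender subspace g: subtract each male word from its female partner,
# then take the first principal component of those difference vectors.
pairs = [('she', 'he'), ('her', 'him'), ('hers', 'his')]
diffs = np.array([wv[f] - wv[m] for f, m in pairs])
g = PCA(n_components=1).fit(diffs).components_[0]

def direct_bias(words, g, c=0.8):
    # DirectBias_c = (1 / |N|) * sum over w in N of |cos(w, g)|^c
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.mean([abs(cos(wv[w], g)) ** c for w in words])

print(direct_bias(occupations, g))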

I manipulated my data and retrained my model

Before I discuss what I found, I want to remind you that the objective of this project is to minimize this statistic by making clever alterations to the data I feed into the model. And while my first data manipulation wasn’t exactly clever, it was something.

I made a second model, to compare to the first. This time though, I fed it a new set of data. It’s actually two concatenated copies of the old text dataset, but in the second copy, I swapped all of the gender pronouns:

REPLACEMENTS = [
['she','he'],
['her','him'],
['hers','his'],
['he','she'],
['him','her'],
['his','hers'],
['herself', 'himself'],
['himself', 'herself'],
["she's", "he's"],
["she'll", "he'll"],
["he'll", "she'll"]
]
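For what it’s worth, here’s a minimal sketch of how a swap like this can be applied to the text (the file names are made up, and the real script may well differ):

# Both directions appear in REPLACEMENTS, so one lookup table covers the swap.
SWAP = dict(REPLACEMENTS)

def swap_pronouns(line):
    # Swap token by token in a single pass, so that 'she' -> 'he' isn't
    # immediately undone by the 'he' -> 'she' rule. Tokens with attached
    # punctuation or capital letters slip through; that's a simplification.
    return ' '.join(SWAP.get(token, token) for token in line.split())

with open('data.txt') as original, open('data_swapped.txt', 'w') as swapped:
    for line in original:
        swapped.write(swap_pronouns(line) + '\n')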

And so, armed with a script, and two models, it was TIME to see if the plan worked…

Holy shit! It kinda worked!

Minor minimization between my old and new models!

I don’t care who you are, the second number is less than the first. Objective truth!

Obviously, there’s A LOT still to be done. First of all, the original bias statistic is teeny tiny. When I calculated this statistic on word2vec last year (a much larger and much more famous model), my baseline gender bias on occupations was 0.2. There are a number of reasons why my number could be so much smaller:

  • My gender subspace is still pretty shitty, so I’ll bolster it with some more word pairs next time.
  • My list of occupations was much shorter when I did this most recent calculation, so that inevitably lessens the effect.
  • I need more data to train this model with. I’ve had this suspicion for a while now. Starting to eat at me.

Ok, what’s next?

Well, I want to find some way to evaluate my model and see if it’s actually a good representation of word meanings. I’ve been meaning to get around to this, but I think I’ll actually tackle it sometime this week.
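Gensim happens to ship with a couple of standard checks that could serve as a starting point; roughly something like this (assuming wv is my loaded model and the bundled test files are available):

from gensim.test.utils import datapath

# Correlation with human similarity judgments (the WordSim-353 set)
pearson, spearman, oov_ratio = wv.evaluate_word_pairs(datapath('wordsim353.tsv'))
print('WordSim-353 Spearman:', spearman)

# Accuracy on the classic "a is to b as c is to d" analogy questions
score, sections = wv.evaluate_word_analogies(datapath('questions-words.txt'))
print('Analogy accuracy:', score)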

Plus, those philosophical questions are still lingering, so I might pose them to someone who is actually a philosopher. Lucky for me, the professor overseeing my other thesis project is a philosopher! So maybe he can offer some respite, or maybe not. Either way, I’m sure it’ll be fascinating.

I also want to get some more interesting subspaces. Maybe ones that represent race or ethnicity. This involves re-reading old literature. The linear algebra is a little more elusive here, but I have yet another contact on campus who is a linear algebra wizard! Maybe he can help.

I can use all the help I can get.
