Ethical AI

Making Egalitarian AI Algorithms

Do machine learning algorithms exhibit stereotypes and gender biases? How to correct them?

Prateek Karkare
AI Graduate
9 min read · Jun 28, 2019


Here is a small riddle: a father and son are in a horrible car crash that kills the dad. The son is rushed to the hospital for emergency surgery; just as he's about to go under the knife, the surgeon says, "I can't operate — that boy is my son!" What do you think is going on? If you guessed that the surgeon is the boy's gay, second father, you get a point for enlightenment, at least outside the Bible Belt. But did you also guess the surgeon could be the boy's mother? If not, you're part of a surprising majority.

Gender biases are deeply embedded in our psyche and are reflected in our thoughts and conversations. Language is one of the most powerful means through which sexism and gender discrimination are perpetuated: lexical choices and everyday communication constantly reflect these long-standing biases. Our writing, our movies, our tweets, and all the content we generate carry them too. With recent advances in NLP, machine learning, and AI, our learning algorithms are now unearthing these disturbing biases in that very content.
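One well-known way these biases surface is the word-analogy probe on trained word embeddings, popularized by the "Man is to Computer Programmer as Woman is to Homemaker?" study (Bolukbasi et al., 2016): completing "man : doctor :: woman : ?" by vector arithmetic often lands on a stereotyped word like "nurse". Below is a minimal sketch of that arithmetic. The four-dimensional vectors are hand-crafted toy values chosen to mimic the gender skew of real embeddings; actual studies use vectors like word2vec trained on large news corpora.

```python
import numpy as np

# Toy "embeddings" (an assumption for illustration, not learned values):
# the first coordinate plays the role of a gender direction.
emb = {
    "man":      np.array([ 1.0, 0.0, 0.9, 0.1]),
    "woman":    np.array([-1.0, 0.0, 0.9, 0.1]),
    "doctor":   np.array([ 0.7, 0.9, 0.2, 0.6]),
    "nurse":    np.array([-0.7, 0.9, 0.2, 0.6]),
    "engineer": np.array([ 0.8, 0.8, 0.3, 0.5]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy test: doctor - man + woman ≈ ?
query = emb["doctor"] - emb["man"] + emb["woman"]

# Nearest remaining word to the query vector by cosine similarity.
best = max((w for w in emb if w not in ("doctor", "man", "woman")),
           key=lambda w: cosine(query, emb[w]))
print(best)  # on these toy vectors, the nearest word is "nurse"
```

On real pretrained embeddings the same arithmetic (e.g. via gensim's `most_similar(positive=["doctor", "woman"], negative=["man"])`) exposes comparable stereotyped completions, which is exactly the kind of bias the rest of this article is about.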

