Computer Vision Models — Stop Labeling Gender.

Alexandria Storm
4 min read · Nov 3, 2021

We must proactively consider how technology affects the LGBTQ community, especially people who are trans, non-binary, intersex, or otherwise do not identify as cisgender. When addressing inequalities of gender and race in technology, we must continue to be intersectional and consider the needs of all marginalized communities.

If we treat gender as a binary classification problem, we are dangerously upholding cisnormativity.

Cisnormativity: A discourse based on the assumption that being cisgender is the norm, privileging it over every other gender identity.

Cisgender: Denoting or relating to a person whose sense of personal identity and gender corresponds with their birth sex.

In recent years, computer vision models have been applied to an increasing number of areas — from law enforcement facial recognition to analyzing medical scans. Many major tech companies have deployed classification models that can identify the contents of an image. ImageNet is one of the best-known visual databases; it contains millions of images used for object recognition research, organized into labeled categories, such as dogs and cars, that make classification possible. A classification model labels images with one of these predefined classes. For example, if a…
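To make the idea concrete, here is a minimal, hypothetical sketch of the final step of any classification model: the model produces a score for each predefined class, and the highest-scoring label is assigned. The class names and scores below are illustrative only — the key point is that the model can only ever answer from its fixed label set, which is exactly why the choice of labels (e.g. "person" rather than a gender) matters.

```python
# Minimal sketch of a classifier's labeling step.
# CLASSES is a hypothetical predefined label set — note it describes
# what is in the image ("person") without assigning a gender.
CLASSES = ["dog", "car", "bicycle", "person"]

def classify(logits):
    """Return the predefined label with the highest model score."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return CLASSES[best]

print(classify([0.1, 2.3, 0.4, 1.1]))  # index 1 has the top score -> "car"
```

Whatever the model sees, its answer is constrained to `CLASSES`; if a gendered label is in that list, the model will confidently assign it, which is the design choice this article argues against.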

