Predictability of Gay.

Heather Moore
Apr 13, 2018


Today, I came across an article in the Economist (published last September) that details some efforts in the AI world to ascertain a person’s sexuality from their pictures. The article is based on work done by two researchers at Stanford, which was published in the Journal of Personality and Social Psychology in February of this year. These are all signs that point to high reputation, honesty, and intellectual rigor: the Economist, Stanford, a journal with a 5+ impact factor. I decided to spend some time researching the article and its authors to find out why something of seemingly little use (classifying people as gay via pictures), with great potential harm (people still get killed for being gay), was being followed and reported on by such respected institutions.

One of the first things that caught my eye was a tweet from one of the authors of the paper, Michal, pointing out that he is aware of the potential impact of this type of research:

It’s always good news when you see that people working on AI are aware of the broader effects of their work. In fact, Michal and his co-author Yilun bring this up directly in the paper itself, and explicitly say they are doing this research to get ahead of the potential abuse:

Finally, and perhaps most importantly, the predictability of sexual orientation could have serious and even life-threatening implications to gay men and women and the society as a whole […] The laws in many countries criminalize same-gender sexual behavior, and in eight countries — including Iran, Mauritania, Saudi Arabia, and Yemen — it is punishable by death (UN Human Rights Council, 2015). It is thus critical to inform policymakers, technology companies and, most importantly, the gay community, of how accurate face-based predictions might be.

We hope that our findings will inform the public and policymakers, and inspire them to design technologies and write policies that reduce the risks faced by homosexual communities across the world.

Additionally, their paper points out that they sought feedback from the community their research impacts:

The results reported in this paper were shared, in advance, with several leading international LGBTQ organizations.

Well, color me impressed. Those all seem like the right boxes to check, so I moved on to learn more about the relationship they found between reducing risk for homosexuals and image-based sexuality classifiers. According to Michal and Yilun, there is currently a risk of being classified as homosexual, without permission, for nefarious purposes. To reduce this risk, their research would emphasize the ethical implications of sexuality classification by image:

this work does not offer any advantage to those who may be developing or deploying classification algorithms, apart from emphasizing the ethical implications of their work.

This is the point where I start to view the paper as manipulative. At the beginning of the paper, the authors describe a different goal: prior attempts to attribute facial features to gay men and gay women had shown mixed results, and they wanted to do better:

This study aims to address those limitations [of earlier studies] by using a much larger sample size and data-driven methods, including an algorithm-based measure of facial femininity

Here are a few additional statements Michal and Yilun made about what they hoped would come of their work, beyond improving image-based sexuality classifiers:

We hope that future research will explore the links between facial features and other phenomena, such as personality, political views, or psychological conditions

It is possible that some of our intimate traits are prominently displayed on the face, even if others cannot perceive them. Here, we test this hypothesis using modern computer vision algorithms

This research is not about emphasizing the ethics of facial feature AI. The research, while claiming to work toward greater safety for homosexuals, is only very tangentially related to that.

This research is fundamentally about using algorithms to invade privacy.

This is what bias looks like in AI now. It’s getting harder to recognize because it comes with captivating and emotionally manipulative stories. People are hip to the fact that if you train a model with bad data, you get bad predictions, but that’s where it stops. When the media do spot concerns, they focus on the training data for the models:

no people of colour were included, or individuals sitting elsewhere on the LGBT spectrum, including bisexual and transgender people. What’s more, the 50/50 split between gay and straight people in the photos is not an accurate reflection of the real world. — The Mirror UK
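
To see why that 50/50 split matters, here is a rough back-of-the-envelope sketch. The accuracy and base-rate figures below are illustrative assumptions, not numbers from the paper: a classifier that looks strong on a balanced test set will, at a realistic base rate, mislabel a large share of the people it flags.

```python
# Back-of-the-envelope illustration (all numbers are hypothetical, not from the paper):
# a classifier that looks strong on a 50/50 test set can still be wrong about
# most of the people it flags once it is applied to a realistic population.

def positive_predictive_value(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Probability that a person the classifier flags is actually gay (Bayes' rule)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Illustrative figures: 90% sensitivity and specificity (roughly what a strong
# balanced-pairs accuracy might suggest), and a ~7% real-world base rate.
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90, base_rate=0.07)
print(f"Share of flagged people who are actually gay: {ppv:.0%}")  # ~40%
```

Under those assumed numbers, most of the people the classifier flags would in fact be straight.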

Of course, they still publish their interpretations of the research with headlines like “[…] artificial intelligence can tell if you’re gay or straight just by looking at a photo”. Even respected publications like The Economist follow suit with an equally sensationalist headline, “Advances in AI are used to spot signs of sexuality,” despite conceding that “The study has limitations.” What these reporters, and every person who reviewed or funded the study, are missing is that there is no definition of “good data” here. There is no perfect set of training images for this classifier when the goal of the classifier is to invade privacy. The problem with this research isn’t the input, it’s the underlying purpose. That is the new bias in AI.

A second, equally serious problem is that we aren’t able to recognize and question the purpose behind research like this. For example, the stated narrative in this paper is that the work on recognizing sexuality from images is for the purpose of helping homosexuals. It’s time that we learn how to spot and effectively question narratives like this. Bias is not just about the data we feed into systems; it’s also about the story people are telling with that data, and the futures those stories advance.

Let’s take a minute to question the stated purpose. First, who are the homosexuals that need help? According to the authors, a generic set of all of them:

[gay people’s] well-being and safety may depend on their ability to control when and to whom to reveal their sexual orientation.

And why do these gay people need help?

Press reports suggest that governments and corporations are developing and deploying face-based prediction tools aimed at intimate psycho–demographic traits, such as the likelihood of committing a crime, or being a terrorist or pedophile

Governments and corporations use tools to spot criminals and pedophiles? Definite connection there. So how do the authors propose we outplay those governments and protect the gay people they have just lumped in with pedophiles? We turn, of course, to facial femininity. That’s right, this paper uses a gender classification algorithm based on 2.8 million Facebook images to determine a proprietary “facial femininity” score. That score is then used to guess if you’re gay or not. Gay women have less facial femininity; gay men have more facial femininity. Boy, nothing wrong with that model. Let’s just push the idea that gayness is binary, that testosterone in the womb is the reason, and that this is supportive of gays because it supports the argument that it’s not a choice. The hypocrisy of an article that purports to help homosexuals while relating them to people who do awful things, and completely ignoring the performativity of gender, is disgusting. It’s even more disgusting that the research made its way to publication.
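
To make concrete just how reductive that pipeline is, here is a minimal sketch of the kind of model being described: a single femininity-derived number pushed through a logistic function to produce a label. Everything here is illustrative; the function, coefficients, and scores are my own stand-ins, not anything from the paper or its code.

```python
import math

# A deliberately minimal sketch of the pipeline described above: one
# "facial femininity" score (produced by some upstream gender model) pushed
# through a logistic function to yield a probability of being gay.
# The coefficients are made up for illustration only; they are not from the
# paper. The point is how little such a model actually "knows" about a person.

def predicted_probability_gay(femininity_score: float, is_male: bool) -> float:
    # Hypothetical coefficients: more femininity raises the score for men,
    # lowers it for women -- mirroring the paper's gender-atypicality claim.
    weight = 2.0 if is_male else -2.0
    logit = weight * (femininity_score - 0.5)  # femininity_score assumed in [0, 1]
    return 1.0 / (1.0 + math.exp(-logit))

# Illustrative usage: a single number in, a life-affecting label out.
print(predicted_probability_gay(femininity_score=0.7, is_male=True))   # ~0.60
print(predicted_probability_gay(femininity_score=0.7, is_male=False))  # ~0.40
```

Collapsing a person’s sexuality into one score and a sign flip is exactly the binary framing objected to below.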

Let’s do better. Sexuality is not a binary. This research is uninformed, manipulative, and successfully propagating harmful biases. The claims made in this research have been turned into unhealthy headlines in the mass media. Let’s invite some critical thinking back into our days — ask questions about the purpose of research before it begins, and stop it when there is no decent purpose, or when the suggested purpose doesn’t match the work or outcomes. What would it mean, after all, to be predictably gay?

TL;DR

I’m interested to hear from researchers about what it’s like to engage people outside their field in recognizing and working through ethical considerations. These conversations are far from simple, and I hope that in time we can get an increasingly diverse set of minds around the table to discuss and question the purpose and ethics behind various technologies.

