AI-Powered ‘Genderify’ Platform Shut Down After Bias-Based Backlash

Synced · Published in SyncedReview · Jul 30, 2020

Just hours after making waves and triggering a backlash on social media, Genderify — an AI-powered tool designed to identify a person’s gender by analyzing their name, username or email address — has been completely shut down.

Launched last week on the new-product showcase website Product Hunt, the platform was pitched as a “unique solution that’s the only one of its kind available in the market,” enabling businesses to “obtain data that will help you with analytics, enhancing your customer data, segmenting your marketing database, demographic statistics,” according to Genderify creator Arevik Gasparyan.

Spirited criticism of Genderify quickly took off on Twitter, with many decrying what they perceived as built-in biases. Entering the word “scientist,” for example, returned a 95.7 percent probability that the person was male and only a 4.3 percent chance that they were female. Ali Alkhatib, a research fellow at the Center for Applied Data Ethics, tweeted that when he typed in “professor,” Genderify predicted a 98.4 percent probability for male, while the word “stupid” returned a 61.7 percent female prediction. In other cases, adding a “Dr” prefix to frequently used female names resulted in male-skewed assessments.

The Genderify website included a section explaining that it collected its data from sources such as governmental and social network information. Before the shutdown, the Genderify team tweeted, “Since AI trained on the existing data, this is an excellent example to show how bias is the data available around us.”

Issues surrounding gender and other biases in machine learning (ML) systems are not new and have raised concerns as more and more potentially biased systems are being turned into real-world applications. AI Now Institute Co-founder Meredith Whittaker seemed shocked that Genderify had made it to a public release, tweeting, “No fucking way. Are we being trolled? Is this a psyop meant to distract the tech+justice world? Is it cringey tech April fool’s day already? Or, is it that naming the problem over and over again doesn’t automatically fix it if power and profit depend on its remaining unfixed?”

Last month, Anima Anandkumar, Director of Machine Learning Research at NVIDIA and a professor at the California Institute of Technology, tweeted her concerns when San Francisco-based research institute OpenAI released an API that runs GPT-3 models, which she said produced texts that were “shockingly biased.”

OpenAI responded that “generative models can display both overt and diffuse harmful outputs, such as racist, sexist, or otherwise pernicious language,” and that “this is an industry-wide issue, making it easy for individual organizations to abdicate or defer responsibility.” The company stressed that “OpenAI will not” do so, and released API usage guidelines with heuristics for safely developing applications. The OpenAI team also pledged to review applications before they go live.

There is an adage in the computer science community: “garbage in, garbage out.” Models trained on biased data will tend to produce biased predictions, and the concern is that many such flawed models are being turned into applications and brought to market without proper review.
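To illustrate the point, here is a minimal, hypothetical sketch — not Genderify’s actual model, and with invented counts — showing how even a simple frequency-based classifier reproduces whatever imbalance is present in its training data:

```python
# Minimal sketch of "garbage in, garbage out": a naive frequency-based
# predictor trained on imbalanced, hypothetical data reproduces that bias.
from collections import Counter, defaultdict

# Invented training set: "scientist" is labeled male 95 times out of 100,
# mirroring the kind of historical skew found in real-world records.
training_data = (
    [("scientist", "male")] * 95
    + [("scientist", "female")] * 5
    + [("nurse", "female")] * 90
    + [("nurse", "male")] * 10
)

# Count how often each label appears with each word.
counts = defaultdict(Counter)
for word, label in training_data:
    counts[word][label] += 1

def predict(word):
    """Estimate P(label | word) directly from training-data frequencies."""
    word_counts = counts[word]
    total = sum(word_counts.values())
    return {label: n / total for label, n in word_counts.items()}

print(predict("scientist"))  # {'male': 0.95, 'female': 0.05} -- the skew in
                             # the data comes straight back out as a prediction
```

Real systems are of course more complex than this toy example, but the underlying dynamic is the same: without deliberate correction, a model’s outputs echo the imbalances of its inputs.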

In the wake of the Genderify debacle, many in the ML community are reflecting on what went wrong and how to fix it. University of Southern California Research Programmer Emil Hvitfeldt launched a GitHub project, Genderify Pro, that argues “assigning genders is inherently inaccurate” and “if it is important to know someone’s gender, ask them.”

Reporter: Yuan Yuan | Editor: Michael Sarazen

Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how the Chinese government and business owners have leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle.


We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
