Estimating the Gender Ratio of AI Researchers Around the World

Written with Simon Hudson

Anyone in the industry or attending prominent Artificial Intelligence conferences can tell you that a gender imbalance exists, but we felt more rigorous research was important to drive the conversation forward and accelerate correcting this imbalance. As a follow-up to The Global AI Talent Pool Report released in February, we worked with WIRED, which was also interested in looking more deeply at the state of diversity in the AI expert talent pool. For the article, we collaborated with Tom Simonite of WIRED to delve further into the research by adding the dimensions of gender and country to the original report’s data.

In our study, we focused on the 4,000 researchers who have published at the leading conferences NIPS, ICML, or ICLR (see the second half of this post for our methodology). The following graph lays out our results:

While similar studies have been conducted on the technology or startup worlds, we have not seen anyone measure diversity in the Machine Learning research community at this scale. It is our hope that by knowing just how skewed these numbers are, it will be easier to see how great a challenge the industry faces in creating balance. In essence, we hope this increased clarity spurs greater action throughout the Artificial Intelligence sector.

We don’t think it would be fair to publish the ratios of various countries and not share our own numbers. Across Element AI we have 32% representation of women, matched with 30% women in leadership positions. On our technical and scientific teams, the average is 21% women, with 20% in leadership positions. Diversity of course goes beyond gender, and we continue to conduct internal research on diversity and share it across the company.

The Element AI gender ratio is about double the average, and we are especially proud of the representation we have in leadership heading up large groups of people. However, we do not want to give the illusion that we’ve overcome the challenge of building a diverse workplace. We think it important to continually evaluate ourselves and to support internal initiatives and diversity events that promote real progress.


Given the sensitivity of this subject, we felt it important to share our methodology for estimating the gender balance of the global talent pool.

To update the initial data set, we scraped the names of everyone who presented at NIPS, ICML, or ICLR in the last year and paired them with information pulled from Google Scholar. We then passed the list through Mechanical Turk to find each researcher’s affiliation location (currently associated university or business). The list was passed through Mechanical Turk three times to ensure a high level of confidence; all cases that showed variability were then checked manually in-house.
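The three-pass reconciliation described above can be sketched roughly as follows; the researcher names, locations, and the two-of-three agreement rule shown here are illustrative assumptions, not our actual pipeline code:

```python
from collections import Counter

def reconcile(passes):
    """Majority-vote the affiliation location from three Mechanical Turk passes.

    Returns the agreed location, or None to flag the name for manual review.
    """
    location, votes = Counter(passes).most_common(1)[0]
    # Require at least 2 of 3 annotators to agree; otherwise check by hand.
    return location if votes >= 2 else None

# Three annotations per researcher (illustrative data).
annotations = {
    "Researcher A": ["Montreal", "Montreal", "Montreal"],
    "Researcher B": ["London", "London", "Toronto"],
    "Researcher C": ["Paris", "Beijing", "Zurich"],
}

agreed = {name: reconcile(votes) for name, votes in annotations.items()}
needs_review = [name for name, loc in agreed.items() if loc is None]
```

With this rule, "Researcher C" has no majority location and lands in `needs_review` for in-house checking.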

With this information, it was possible to group names into geographic clusters. It is important to note that this does not tell us the nationality of each individual, but rather gives us a weight for each institution and thus each geographic location. The reasoning is that we were not looking at individual data, but at institutional trends.
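The aggregation step amounts to counting affiliations per region rather than tracking individuals; a minimal sketch, with purely illustrative records:

```python
from collections import defaultdict

# Illustrative records from the affiliation lookup (not our dataset).
researchers = [
    {"institution": "Univ. X", "region": "United States"},
    {"institution": "Univ. X", "region": "United States"},
    {"institution": "Lab Y", "region": "France"},
]

# Weight each region by its number of affiliated researchers;
# no individual-level attributes beyond affiliation are kept.
region_counts = defaultdict(int)
for r in researchers:
    region_counts[r["region"]] += 1
```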

Of the 4,000 names, only 17% come from private businesses, meaning that while this sample is representative of the overall research community, we would be hesitant to say it is representative of the “business research lab” sub-section. As an added point, a certain amount of noise exists in this sub-group of people working at private labs; it was not always possible to find the specific research location where an individual worked.

The more complicated cases, about 12% of the business-group names, were all from the big tech companies (Google, Facebook, etc.) and were put in the Silicon Valley group. We based this decision on the fact that adding them to that group did not change the men-to-women ratio of the region. We also have a working hypothesis that the hiring policies of these companies stay consistent from region to region, but left this point to be validated in later research.

In determining gender, we kept to simple binary categories, as most academic bios do not go into any depth on the subject. Gender was determined from the gendered pronouns used in the authors’ descriptions, so as to reflect self-identification by the author. When this method was not possible (about 1% of the total), an educated guess was made based on the name and appearance. While not relying on self-reported gender is more error-prone, we determined the size of this subgroup was small enough to justify the methodology.
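A rough sketch of the pronoun-based step; the pronoun lists, bio texts, and fallback-to-`None` behavior are illustrative assumptions (real bios need more careful handling than this):

```python
import re

FEMALE_PRONOUNS = {"she", "her", "hers"}
MALE_PRONOUNS = {"he", "him", "his"}

def gender_from_bio(bio):
    """Infer a binary gender label from gendered pronouns in a short bio.

    Returns 'female', 'male', or None when there is no clear pronoun
    signal (such cases fall back to manual review in our methodology).
    """
    words = set(re.findall(r"[a-z']+", bio.lower()))
    female = bool(words & FEMALE_PRONOUNS)
    male = bool(words & MALE_PRONOUNS)
    if female == male:  # no pronouns found, or conflicting signals
        return None
    return "female" if female else "male"
```

Bios yielding `None` would join the roughly 1% of cases resolved by an educated guess instead.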

Special thanks to Tom Simonite, Gregg Delman, Negar Rostamzadeh and Wei-Wei Lin for the graph.

*A few plugs of events we’ve been involved in: Women in Machine Learning workshop at NIPS (2017); Women in Computer Vision workshop at CVPR (2017); Women in Deep Learning (WiDL) co-founded by our own Negar Rostamzadeh in 2016; and mentors for the MISE Ghana Foundation in 2017 and 2018.