Why Is There So Little Diversity in AI? We Asked the People Who Left.

Partnership on AI · Published in AI&. · 5 min read · Sep 8, 2021

By Jeffrey Brown

According to her tech company’s most recent diversity statement, Jessica, a machine learning engineer from a historically excluded group, is just the kind of employee it’s looking for. After graduating with a Ph.D. in Computer Science from a prestigious East Coast university and interning at another company through a program aimed at recruiting more women of color to tech, she was hired three years ago and has consistently earned glowing praise from her immediate manager. But in those three years, Jessica has grown frustrated by the uphill battle to get her work recognized and respected outside of her team. Today, despite her company’s stated commitments to diversity, she handed in her resignation letter.

The story of “Jessica” is based on the experiences of many different individuals in the AI industry, but this composite case illustrates a previously under-examined part of the AI “diversity problem.” Traditionally focused on the qualifications of candidates in the hiring pipeline, tech organizations have paid far less attention to what happens to individuals from historically excluded identities once they get in the door. The most recent data show that, despite diversity statements and other efforts being more popular than ever, there has been little movement in the representation of historically excluded individuals at some of these tech organizations. This is particularly troubling in the AI field, which develops technologies that affect wide swaths of the population, including the very groups that are disproportionately underrepresented at the organizations building them.

A forthcoming study by the Partnership on AI’s (PAI) Diversity, Equity, and Inclusion (DEI) Workstream, “Beyond the Pipeline: Addressing Attrition as a Barrier to Diversity in AI,” looked specifically at experiences like Jessica’s. We conducted in-depth interviews with managers, people working in DEI, and folks who identified as belonging to historically excluded identities, then analyzed themes from those interviews to get at the heart of the AI field’s attrition problem. The study distills these learnings into a set of insights that those working in AI organizations can use to improve their current DEI practices, beyond implicit bias and diversity training. This blog post, focusing on the methods we used, is the first in a series about the project and its findings. Subsequent entries will discuss the insights we gathered and the recommendations we suggest. We hope they will give you a greater understanding of why workers like Jessica left the AI industry and what can be done about it.

Methodology

Our team approached this project differently from traditional academic projects, which treat the researchers as the sole experts and the participants as merely the subjects of the study. We viewed the participants as the experts on their own lives and experiences, both as professionals and as people. For this reason, we chose to conduct semi-structured interviews, designing them based on a literature review, multiple conversations with DEI experts in tech, and additional conversations with folks working in AI both within and outside of our Partner community. The literature review, along with these internal and external conversations, pointed us toward research questions within the three domains that seemed most important: attrition, organizational culture and climate, and efforts to increase inclusion. Our team of researchers wrote several questions under these domains and conducted pilot interviews to refine them. The semi-structured interview format allowed for follow-up questions based on the answers the participants gave. While the interviews all investigated similar themes, each one approached the individual differently depending on their role and set of experiences.

We recruited participants through the PAI team’s own professional and Partner networks and reached out to organizations focused on increasing the representation and empowerment of historically excluded groups in the field of AI. We spoke to 43 participants who were either managers working on AI teams, people working in DEI in tech, or folks who identified as belonging to a historically excluded identity along the lines of gender, race, sexuality, or ability.

Participants first completed a brief questionnaire answering open-ended questions about basic demographic data and professional role so we could assess their eligibility for the study. Next, the primary researcher explained the purpose and format of the study via several documents, including an informed consent form that participants signed before starting their interviews over Zoom. The primary researcher then asked questions like “Have you ever considered leaving your current team? Organization? The field of AI?” and “What has your organization done to make the workplace more inclusive?” Depending on how the participant answered, the interviewer followed up to clarify the answer or sometimes went in a different direction than originally intended. Participants’ answers were transcribed, redacted to remove identifying or sensitive information, and sent back to participants so they could clarify, remove, or add details.

A team of researchers then analyzed these transcripts and developed themes from the main ideas participants expressed that were relevant to the study’s underlying domains. The researchers then cross-checked these main ideas and themes in an adaptation of the Consensual Qualitative Analysis method.

Why Did She Leave?

What PAI’s researchers found was at once compelling, consistent with previous work on this topic, and uniquely illuminating of this corner of the tech workplace. In the next blog post in this series, we will go deeper into these findings, which both reiterate themes unearthed in previous studies and offer clues about how to tackle the attrition of historically excluded identities in AI. Why do other folks like Jessica leave, and what can we do about it? Answering those questions, as you’ll learn next time, will take more than the quick fix of a one-off implicit bias training or a DEI book club.

To be contacted about future DEI research and workshops at PAI, please join our mailing list here. Together, we can change the AI industry through equitable machine learning practices.

The Partnership on AI is a global nonprofit organization committed to the responsible development and use of artificial intelligence.