Attrition in AI: What Can Your Organization Do About It?

Partnership on AI
Published in AI&. · Dec 16, 2021

By Jeffrey Brown

Through the story of “Jessica,” a fictional composite, this blog post series has examined the attrition of diverse talent in the field of AI. The question of what happens to workers from historically excluded groups after they’re hired is an under-examined facet of the tech industry’s “diversity problem.” It’s a particularly important one for the field of AI, which is responsible for technologies whose harms have disproportionately affected people of color.

In our first post, we met Jessica, a machine learning engineer from a racially marginalized background with a Ph.D. in computer science who recently quit her job. In our second, we learned more about her experiences as she grappled with the realities of working for a company whose stated commitments to diversity, equity, and inclusion (DEI) did not translate into a more inclusive environment. This time, we will be discussing what organizations can do to avoid repeating stories like Jessica’s.

These insights come from “Beyond the Pipeline: Addressing Attrition as a Barrier to Diversity in AI,” a forthcoming, interview-based study that is part of the Partnership on AI’s (PAI) Diversity, Equity, and Inclusion Workstream. This study looked specifically at experiences like Jessica’s, conducting in-depth interviews with more than 40 managers, DEI practitioners, and people on AI teams who identified as belonging to historically excluded groups, and then analyzing themes from those interviews to get at the heart of the AI field’s attrition problem.

How We Developed These Recommendations

Previously, we outlined some of the themes we discovered when interviewing folks like Jessica about their experiences. Although Jessica is a composite based on those conversations, the interview study revealed that many workers belonging to minoritized identities had negative experiences when they were on AI teams, resulting in environments where they didn’t feel supported or that their values were shared by their organizations. A close examination of these themes yielded some insight as to how organizations can make their AI teams more inclusive, attract and retain talent like Jessica, and make good on their stated commitments to diversity.

The research team at PAI went through each of the themes, supported by quotes and examples from the participants. Once synthesized and compared against existing literature, conversations with professionals in the industry, and considerations of how organizations and the field of AI are structured, these themes pointed the researchers toward potential recommendations. The recommendations were then fine-tuned through further conversations with those in AI, especially those involved in DEI efforts. Ultimately, the study finds that organizations must:

  1. Systematically support Employee Resource Groups (ERGs),
  2. Intentionally diversify leadership and managers and promote interdisciplinary norms,
  3. Have DEI trainings that are more specific in order to be more effective, and
  4. Fundamentally re-align their values to center the perspectives of minoritized workers.

Recommendation 1: Organizations must systematically support Employee Resource Groups (ERGs)

Many participants reported being involved in ERGs: groups, run by staff members, with the express mission of providing a space to support workers with certain identities. Some ERGs are more general, catering to employees of multiple identities, while others are structured to support employees with specific identities, such as Black@Facebook, The Disability Alliance at Google, or Women@Microsoft. ERGs tend to be inclusive rather than exclusive. The smaller an ERG is, the more likely it is to be run by employees volunteering their own time; bigger ERGs tend to be better resourced by their organizations.

Study participants discussed going to ERGs both to find a sense of community and support and to find collaborators for AI projects that truly integrated diverse perspectives into the work. While ERGs were popular and almost uniformly praised by participants, it is important to note that they have potential drawbacks. Many employees are not compensated for their work with ERGs, even though these groups take a tremendous amount of time and effort to operate, and involvement in them may detract from workers’ regular responsibilities. Organizations should provide support for ERGs, and acknowledge their employees’ work with them accordingly, while maintaining some distance. For instance, organizations should refrain from using ERGs as free labor for their own necessary DEI work.


Recommendation 2: Organizations must intentionally diversify leadership and managers, and promote interdisciplinary norms

As discussed in our last blog post, some interviewees said they entered the field of AI to tackle complex issues affecting minoritized communities, using AI technologies they knew well. Unfortunately, like Jessica, they reported that such efforts were thwarted by leaders who didn’t see the value in the work. For Jessica, this was mitigated by her manager, who was female and understood why she cared deeply about the issues she did.

While belonging to a minoritized identity cannot guarantee that someone will understand all of the issues affecting diverse populations, the study observed that it often gave managers and leaders a head start. Those who intentionally tried to integrate DEI practices into their teams and their work contributed to inclusive environments and proved to be strong mentors for minoritized workers. Additionally, managers who built diverse teams (both in terms of interdisciplinary backgrounds and personal identities) reportedly contributed to a positive, inclusive organizational culture and to AI work that was strong and interdisciplinary.

Recommendation 3: DEI trainings must be specific in order to be effective, and better connected to the content of AI work

Frequently polarizing, DEI trainings can even be counterproductive. Article after article has called for the reform of the DEI trainings that workers across industries, including tech, have been required to take. Participants in the study generally expressed uncertainty about whether DEI trainings were truly effective. This study recommends DEI trainings with as much specificity as possible, as these could prove more effective than the generic, hackneyed mandatory trainings many workers know today.

Interviewees also said that such trainings were often more disconnected from their work in AI than they needed to be. Across settings, there have been repeated calls for the content of AI research and products to integrate more perspectives of diverse people to prevent potential harm. Future trainings could address these concerns, which are all the more urgent given the rapid pace at which AI is being developed and deployed.


Tech companies have been inundated with best practice advice and policy recommendations on how to build more ethical AI. If AI technology continues to be built in environments that disregard diverse individuals, preventing future harms will remain a daunting task. Making AI teams more diverse and inclusive, however, may be one of the quickest fixes. (Even though it probably won’t be that quick.)

The above recommendations are meant to be guidelines that organizations can take and adapt to meet the needs of their own environments. The last recommendation of the series is perhaps the most difficult to achieve, but may yield the most sustainable change for building inclusive environments.

Recommendation 4: Organizations must fundamentally re-align their values to center the perspectives of minoritized workers

One particularly painful experience participants like Jessica reported was the constant feeling that the environments they were working in were not built for them, a feeling they had to contend with on top of every other challenge they faced. Addressing this will require a more fundamental re-alignment, one where organizations center their values on the perspectives of minoritized workers.

Structural change is both the hardest to accomplish and the most important to make. Organizations must interrogate their true values — not just those they have publicly stated — and examine how much these values are informed and upheld by socially dominant groups. Such values can influence everything from norms around communication and how work is completed to expectations around casual conversations. When organizations stop expecting minoritized workers to assimilate into their teams and instead integrate their perspectives, they can begin to truly build AI technology informed by diverse perspectives. This recommendation may be easier to adopt if the previous three are also adopted, but this cannot be guaranteed.


Ultimately, these recommendations can only provide rough guidelines for how to build more inclusive and diverse teams. Organizations — and particularly their leaders — must continually challenge themselves to improve in these areas, measure their own progress, and be honest about the ways in which they really want to change. There could be a future where AI teams are better for folks like Jessica, tech companies are better at retaining their AI talent, and AI itself is better for everyone.

To be contacted about future DEI research and workshops at PAI, please join our mailing list here. Together, we can change the AI industry through equitable machine learning practices.

The Partnership on AI is a global nonprofit organization committed to the responsible development and use of artificial intelligence.