AI Hiring Bias: Understanding, Mitigating, and Moving Forward

BTO Insights
Beyond the Obvious
Published Jul 27, 2023

Key Points

  • While AI-powered hiring tools have become popular in recruitment, they also raise concerns about hiring bias.
  • Understanding the origins of AI hiring bias is the first step in addressing it.
  • An AI system is only as good as the data it’s trained on. The data should reflect the diversity of the real-world talent pool in terms of gender, race, age, and other characteristics.
  • The consequences of AI hiring bias are not just harmful to the individuals who face discrimination but also harmful to the workforce as a whole.
  • Mitigating this issue is a proactive process that must be supported by human oversight.

In recent years, AI-powered hiring tools have gained considerable traction. A survey conducted by the Society for Human Resource Management (SHRM) demonstrates the increasing integration of automation and AI into HR operations. This comprehensive study gathered responses from 1,688 members, with participants spanning various industries and organization sizes across the United States.

The findings revealed that nearly one in four organizations currently leverage automation and/or AI to support HR-related activities. Adoption was higher among larger organizations: 42% of extra-large enterprises (over 5,000 employees) and 26% of large organizations (500–4,999 employees) use such technologies. Moreover, when participants were asked where they anticipate expanding or initiating the use of automation or AI within the next five years, a significant 79% identified ‘Recruitment and hiring’ as the primary focus.

This growing reliance on AI in hiring processes has profound impacts on employers, HR professionals, and job seekers, and it also raises concerns about potential biases in AI-enabled recruitment tools.

Researchers Warn: AI Recruitment Tools May Worsen Bias, Not Eliminate It

As we delve into the increasing dependency on AI for hiring, it is crucial to highlight the potential pitfalls.

Despite the promising claims of AI recruitment tool developers, researchers at Cambridge University express apprehension that such tools could potentially mislead users and magnify bias rather than eliminate it.

To test the supposed merits of these AI hiring techniques, the researchers built an AI tool of their own, the Personality Machine, which illustrates how even minor changes in facial expression, clothing, or background can drastically skew personality readings. These discrepancies could ultimately determine the fate of job candidates in an increasingly competitive employment market. Furthermore, the study suggests that AI can make workforces more alike, not more diverse, because AI often screens for a “perfect candidate” defined by employers, potentially overlooking the varied skills and viewpoints that drive progress.

Understanding AI Hiring Bias

Building on these findings, it becomes critical to delve deeper into the concept of AI hiring bias. By understanding its definition and origins, we can further illuminate the potential pitfalls in our pursuit of the “perfect candidate.”

What is AI hiring bias?

AI hiring bias refers to the systematic errors in a recruitment process that lead to unfair outcomes, such as favoring one group of candidates over others. In the context of recruitment, it manifests when the algorithms used for screening and selection processes make biased decisions because of the information they have been trained on or the way they interpret that information.

Put another way, AI hiring bias shows prejudice in favor of or against a certain group based on characteristics like race, gender, age, or socio-economic status that are unrelated to a person’s ability to perform the job.

AI Discrimination Example: iTutorGroup’s Age Discrimination Case

Understanding this definition in a practical context illuminates the real-world implications of AI hiring bias. One notable example can be found in the case of iTutorGroup, which showcases how age discrimination can be inadvertently automated into AI hiring systems.

In a lawsuit brought by the U.S. Equal Employment Opportunity Commission (EEOC), iTutorGroup, a conglomerate of three companies providing online English-language tutoring services, was charged with age discrimination in its AI-assisted hiring. The company’s hiring software was programmed to automatically reject job applications from female applicants aged 55 and older and male applicants aged 60 and older.

This instance, which led to over 200 qualified applicants being rejected based on their age, is a clear example of AI discrimination.

Tracing the Roots of AI Hiring Bias

AI hiring bias can originate in several ways, but the two most predominant causes are biased training data and the lack of diversity in the AI development team.

Biased Training Data

AI systems learn from the data they are trained on. If the data is biased, the AI model will also be biased. For example, if an AI hiring tool is trained mostly on resumes of successful employees who happen to be predominantly young white males, it may unintentionally learn to favor applications from similar candidates.
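This dynamic can be sketched in a few lines of Python. The toy “model” below simply scores a candidate by how often their attributes appeared among past hires; all fields and values are hypothetical, not drawn from any real system. Because the training examples happen to share a job-irrelevant trait, two equally qualified candidates end up with very different scores.

```python
from collections import Counter

# Hypothetical toy example of learning from skewed historical hires.
past_hires = [
    {"degree": "CS", "hobby": "chess"},
    {"degree": "CS", "hobby": "chess"},
    {"degree": "CS", "hobby": "golf"},
    {"degree": "EE", "hobby": "chess"},
]

def train(examples):
    """Count how often each (field, value) pair appears among past hires."""
    counts = Counter()
    for example in examples:
        counts.update(example.items())
    return counts

def score(model, candidate, n_train):
    """Average frequency of the candidate's attributes among past hires."""
    return sum(model[item] for item in candidate.items()) / (len(candidate) * n_train)

model = train(past_hires)

# Two candidates with the same degree; only a job-irrelevant hobby differs.
a = {"degree": "CS", "hobby": "chess"}
b = {"degree": "CS", "hobby": "knitting"}

print(score(model, a, len(past_hires)))  # higher score
print(score(model, b, len(past_hires)))  # lower score, despite identical qualifications
```

The scorer never sees a protected attribute, yet it still penalizes candidate `b` for an attribute that merely correlates with the historical hiring pool — the core mechanism behind biased training data.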

Lack of Diversity in the AI Development Team

An AI hiring tool can end up with blind spots that reflect those of its creators. If it is developed by a largely homogenous group, it might not account for the needs of a diverse population. For example, an AI hiring tool built by a team that lacks gender diversity might favor male candidates because it fails to properly weigh the kinds of experience more commonly found on women’s resumes.

Understanding the origins of AI hiring bias is the first step in addressing it. While the biases embedded in AI hiring tools might often be unintentional and subconscious, their effects are far from benign. The efficiency that AI brings to recruitment can be significantly overshadowed by its negative impacts, particularly if these biases are left unchecked.

Risks of using AI for hiring

The consequences of AI hiring bias harm not only the individuals who face discrimination but also the workforce as a whole. Let’s delve deeper into these negative consequences.

  • Exclusion of Qualified Candidates: If the AI hiring algorithm prefers certain characteristics that align with biases ingrained in the training data, it could filter out well-qualified applicants who don’t fit these narrow criteria.
  • Loss of Diversity: AI bias in hiring can lead to homogenized workplaces where employees predominantly share similar backgrounds, experiences, and perspectives. This undermines diversity and inclusion efforts.
  • Stagnation of Innovation: A diverse workforce fosters innovation by bringing in varied perspectives and ideas. If AI bias hiring leads to a homogenous workforce, this can limit the creativity of problem-solving approaches within a team and reduce the innovation capacity of the organization.
  • Socio-Economic Disparity: AI hiring bias could worsen socio-economic disparities. Candidates from disadvantaged backgrounds may lack access to the resources needed to “game” the AI system, such as understanding the right keywords or presentation styles preferred by the AI.
  • Employee Morale and Retention: Artificial intelligence discrimination can negatively impact employee morale. If employees perceive that the hiring process is biased, it may lead to dissatisfaction, lower productivity, and higher turnover rates.

As we move forward with the integration of AI into our recruitment processes, it’s essential to acknowledge these potential pitfalls and strive for an equitable, fair, and inclusive hiring process.

Mitigating AI Hiring Bias

The risk of AI hiring bias calls for specific actions to identify and reduce bias in AI hiring systems.

Importance of Human Supervision in AI Hiring

AI systems, despite their capabilities, are still tools that can inherit and perpetuate biases present in their training data or algorithms. They are not capable of moral or ethical judgment. Thus, human supervision is necessary to ensure that decisions made by AI systems are ethical, fair, and in alignment with company values and societal norms.

Human supervision also enables a feedback loop for continuous improvement. Regular reviews of the AI system’s decisions can help identify recurring issues, which can then be addressed by updating the AI’s algorithm or training data to minimize future biases.

Strategies to Minimize and Eliminate AI Hiring Bias

As an HR manager, you can implement the following measures to mitigate the risk of AI discrimination in hiring processes:

  1. Awareness and Training: Start by acknowledging that AI hiring bias is real and can seep into the recruitment process subtly yet significantly. Equip your team with training to understand how biases can infiltrate AI systems, leading to discrimination.
  2. Bias Audit: Regularly audit your AI hiring systems for unfair patterns. Compare selection outcomes across demographic groups and check whether some groups are screened out more often than others. If you spot a disparity, address it right away.
  3. Vendor Evaluation: Before you pick an AI tool, get to know the company selling it. Find out how they train the AI, how they handle bias, and whether they follow the laws and regulations. Picking a company that takes these issues seriously can help cut down on unfair hiring.
  4. Diverse Dataset: The data used to train AI systems can carry inherent biases. Ensure the AI tools you deploy are trained on a diverse dataset that represents various age groups, genders, ethnic backgrounds, and experiences.
  5. Explainability: AI tools should provide explainability — the ability to understand and interpret the decisions the AI system makes. This is important to ensure that decisions made by the AI are not due to unobserved biases in the data or algorithms. Tools that offer explainability allow for an analysis of the reasoning behind hiring decisions, enabling the identification and elimination of any unintentional AI bias in hiring.
  6. Customization: AI tools should be customizable to fit the specific needs of your organization and the roles you’re hiring for. A one-size-fits-all approach can potentially introduce bias, as it fails to consider the unique requirements of different job positions. Customizable AI tools allow for more nuanced decision-making that respects the diversity of candidates.
  7. Complementing AI Judgement: AI systems may lack the nuance required to fully understand and assess a candidate’s experiences, motivations, or soft skills like leadership or teamwork. Humans can fill in these gaps and ensure that these critical aspects of a candidate’s profile are also considered in the hiring decision.
  8. Regulatory Compliance: Stay up to date with regulations governing the use of AI in hiring. Compliance not only helps prevent biased hiring but also protects your organization from legal complications. Examples include the Age Discrimination in Employment Act (ADEA) and the Equal Employment Opportunity Commission (EEOC) guidelines, among others.
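A bias audit of the kind described in step 2 often starts with an adverse-impact check such as the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, that is treated as evidence of potential adverse impact. The sketch below uses made-up group names and numbers purely for illustration.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screening stage."""
    return selected / applicants

def four_fifths_check(rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the top rate.

    True  = within the four-fifths threshold
    False = potential adverse impact, worth investigating
    """
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes broken down by demographic group.
outcomes = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(20, 80),   # 0.25
}

flags = four_fifths_check(outcomes)
print(flags)  # group_b's rate is half of group_a's: a potential red flag
```

A failed check is a signal to investigate, not proof of discrimination on its own; in practice an audit would also consider sample sizes and statistical significance.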

The bottom line

While AI can revolutionize hiring with benefits like speed, consistency, and scale, it’s important not to lose sight of potential pitfalls such as hiring bias and discrimination.

Addressing these issues is a proactive process — it demands active identification and reduction of biases in our AI systems and a commitment to diversity in our training data. In choosing an AI hiring tool, we need to prioritize those that champion fairness and transparency. At the same time, the importance of human supervision in AI hiring cannot be overstated.

As we navigate the age of AI in recruitment, it’s essential to keep posing this critical question: ‘How can we make certain our application of AI supports, not hinders, the formation of a diverse and inclusive workforce?’
