AI in Hiring: Why Some Don’t Make the Cut

Earl Red
5 min read · Nov 29, 2023


How do algorithmic job filters work? And do they hinder diversity in recruitment?

Introduction

In recent years, the use of algorithms and artificial intelligence (AI) in the hiring process has become increasingly common. These technologies are used to screen job applications, assess candidates’ skills and qualifications, and even predict their potential for success in a role. While proponents argue that AI tools democratize the hiring process by providing a fair assessment of all applicants, critics raise concerns about their accuracy, bias, and impact on diversity and inclusion.

In 2021, M. Burch applied for a data analyst position at Bloomberg in London. He received an immediate response asking him to take a digital assessment. As he worked through the assessment, which involved spotting patterns among different shapes, he couldn’t help but question whether it was truly testing abilities relevant to the job.

Burch’s experience is not unique. Many job seekers have encountered automated assessments and algorithms during their application process, raising questions about how these technologies affect recruitment outcomes. Do they really improve efficiency and save costs for companies? Or do they inadvertently exclude qualified candidates based on irrelevant criteria or biased patterns?

Here, we will explore how these algorithmic tools handle incoming applications and delve into the pros and cons of using them in hiring processes. We will examine the potential biases that can arise from these technologies and discuss how companies can ensure fairness while leveraging AI solutions.

The Role of Algorithms and AI in Hiring

The use of algorithms and AI tools in hiring is widespread across leading job platforms like LinkedIn, ZipRecruiter, Indeed, CareerBuilder, and Monster. According to ZipRecruiter CEO Ian Siegel, at least three-quarters of all resumes submitted for jobs in the US are read by algorithms. This indicates that machine learning algorithms play a significant role in initial screening processes.

The primary motivations behind using algorithmic tools are efficiency gains and cost savings for recruiters. However, according to a survey by research firm Gartner, 88% of executives using AI in their recruitment processes acknowledged that their tools reject qualified candidates. This suggests that while algorithms offer efficiencies, they may also exclude potentially suitable candidates.

How It Works

Job algorithmic filters, employed by various hiring platforms and organizations, are designed to streamline the recruitment process by quickly identifying candidates who best match the job criteria. These algorithms primarily scan through resumes and applications for specific keywords and phrases that align with the job description. They look for indicators of relevant experience, skills, and education. For instance, if a job requires expertise in Python programming, the algorithm will search for “Python” in the applicant’s resume. Additionally, these filters often assess the length of experience in a particular role or industry, educational qualifications, and specific certifications or technical skills. The idea is to narrow down the pool of applicants to those who most closely fit the job’s requirements, based on the textual data provided in their applications.
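In its simplest form, this kind of filter can be sketched in a few lines of Python. The keyword list, weights, and scoring scheme below are assumptions for illustration, not any platform’s actual logic:

```python
# A minimal keyword-screening sketch. The keywords, weights, and
# cutoff are illustrative assumptions, not a real vendor's logic.
REQUIRED_KEYWORDS = {"python", "sql", "data analysis"}
MIN_YEARS_EXPERIENCE = 3

def score_resume(resume_text: str, years_experience: int) -> float:
    """Blend keyword coverage with an experience threshold."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    keyword_score = hits / len(REQUIRED_KEYWORDS)
    experience_ok = 1.0 if years_experience >= MIN_YEARS_EXPERIENCE else 0.0
    return 0.7 * keyword_score + 0.3 * experience_ok  # assumed weights

resume = "Data analyst with 4 years of Python and SQL experience."
print(round(score_resume(resume, years_experience=4), 2))
# 0.77 -- "data analysis" never matched "Data analyst", so the
# candidate loses a third of the keyword score to phrasing alone.
```

Even this tiny example shows the brittleness: the candidate clearly does data analysis, but the exact phrase never appears, so the score drops.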

However, these algorithmic filters go beyond just matching keywords. They can also be programmed to evaluate the structure and format of a resume, favoring those that are well-organized and easy to read. Some advanced algorithms are capable of interpreting more nuanced aspects like job progression and stability, inferring a candidate’s career trajectory from their work history. They might also assess soft skills implicitly through the analysis of language and activities, though this is more complex and less common. Despite their efficiency, these filters are not without drawbacks. They can inadvertently screen out potentially qualified candidates who don’t use the exact keywords or phrases the algorithm is programmed to detect, leading to concerns about fairness and the overlooking of diverse talents.
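How might a filter infer “stability” from a work history? Here is a minimal sketch, assuming the resume has already been parsed into dated roles; the record format and signal definitions are illustrative:

```python
from datetime import date

# Hypothetical parsed work history: (title, start, end) per role.
work_history = [
    ("Junior Analyst", date(2016, 6, 1), date(2018, 8, 1)),
    ("Analyst",        date(2018, 9, 1), date(2021, 3, 1)),
    ("Senior Analyst", date(2021, 4, 1), date(2023, 11, 1)),
]

def stability_signals(history):
    """Derive the kind of 'job stability' features a filter might score."""
    tenures = [(end - start).days / 365.25 for _, start, end in history]
    # Months between leaving one role and starting the next.
    gaps = [
        (history[i + 1][1] - history[i][2]).days / 30.44
        for i in range(len(history) - 1)
    ]
    return {
        "avg_tenure_years": round(sum(tenures) / len(tenures), 1),
        "max_gap_months": round(max(gaps), 1) if gaps else 0.0,
    }

print(stability_signals(work_history))
# {'avg_tenure_years': 2.4, 'max_gap_months': 1.0}
```

Note that features like “maximum gap in months” can quietly penalize candidates with caregiving breaks or health-related absences, which previews the fairness concerns below.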

Unintended Consequences: Biased Criteria and Discrimination

Algorithm-based hiring tools can inadvertently perpetuate biases and discrimination present within historical data used for training. Researchers have found biased criteria unrelated to work being used as predictors of job success. For instance, names like “Thomas” or keywords like “church” were identified as factors influencing selection outcomes.

One example of algorithmic bias was highlighted when Amazon discovered that its resume-screening tool was biased against women. The algorithm had been trained on resumes of current employees, who were predominantly male due to gender disparities in certain tech fields. As a result, the AI system favored male applicants and discriminated against female candidates.

In one system examined by employment lawyer Mark Girouard, the name “Jared” and having played lacrosse in high school were used as predictors of success. These examples reveal how seemingly irrelevant personal attributes can be wrongly associated with job performance.
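A toy experiment makes it easy to see how such proxies emerge. The sketch below trains a classifier on synthetic, deliberately skewed “historical” outcomes (using scikit-learn purely for illustration); a token like “lacrosse” ends up with high positive weight simply because it co-occurred with past hires:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic, deliberately skewed "historical hiring" data: every past
# hire happened to mention lacrosse, so the token looks predictive.
resumes = [
    "python sql lacrosse captain",
    "java excel lacrosse treasurer",
    "python statistics volunteer",
    "sql excel reporting volunteer",
]
hired = [1, 1, 0, 0]  # labels encode past (biased) decisions, not ability

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Rank tokens by learned weight: 'lacrosse' rises to the top even
# though it says nothing about job performance.
ranked = sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                key=lambda pair: -pair[1])
print(ranked[:3])
```

The model is not “wrong” about the historical data; it is faithfully reproducing a biased pattern, which is exactly the problem.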

Burch’s experience at Bloomberg sheds light on another aspect of algorithmic biases. He found out that he had been rejected based on his performance on a personality test created by an AI company hired by Bloomberg. Burch’s qualifications aligned with the job description, making it puzzling why he was excluded based on this assessment alone.


The episode raises concerns about whether personality tests and pattern-finding assessments truly capture an applicant’s suitability for a specific role, or whether they are merely arbitrary measures prone to bias and misinterpretation.

How Can Algorithm-Based Hiring Improve?

  1. Diversification and Regular Auditing of Training Data: One of the primary sources of algorithmic bias is the data used for training AI models. It’s crucial to ensure that this data is diverse and representative of the entire talent pool. This involves including resumes and profiles from a broad range of candidates, encompassing different genders, ethnic backgrounds, and career paths. Regular audits of the training data and the algorithm’s decisions are also necessary. These audits should be conducted by independent parties to identify and correct biases that the system might have learned (a minimal audit sketch follows this list).
  2. Incorporation of Fairness and Ethical Guidelines: Establishing and enforcing fairness and ethical guidelines in AI development and application is vital. This includes setting clear criteria for what constitutes discrimination and bias within the context of hiring. AI developers and users should collaborate with ethicists, sociologists, and other relevant experts to understand the nuances of fair hiring practices. Additionally, these systems should be transparent about how they operate, allowing both candidates and employers to understand the basis on which decisions are made.
  3. Human Oversight and Candidate Feedback Mechanisms: Integrating human oversight is essential to counterbalance the limitations of AI. Human HR professionals should be involved in reviewing and interpreting algorithmic recommendations, especially in borderline cases. Moreover, providing a feedback mechanism for candidates can help in identifying issues with the AI system. Candidates who believe they have been unfairly assessed could have the option to appeal or provide additional context, giving companies a more holistic view of the applicant and an opportunity to refine their AI tools.
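As a concrete starting point for the auditing described in point 1, a hiring pipeline can log screening outcomes by group and test them against the EEOC’s four-fifths rule, under which a group’s selection rate below 80% of the highest group’s rate is a conventional red flag. The log below is a made-up example:

```python
from collections import Counter

# Hypothetical audit log exported from an ATS: (group, passed_screen).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
passed = Counter(group for group, ok in outcomes if ok)
rates = {group: passed[group] / applied[group] for group in applied}

# Four-fifths rule: a group's selection rate below 80% of the highest
# group's rate is a conventional red flag for adverse impact.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} impact_ratio={ratio:.2f} {status}")
# group_a: rate=0.75 impact_ratio=1.00 ok
# group_b: rate=0.25 impact_ratio=0.33 FLAG
```

A flagged ratio is not proof of discrimination, but it tells auditors exactly where to look, which is more than most black-box screening tools offer today.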

Algorithm-based applicant screening may well be the future of recruiting, but it needs to improve significantly: these tools must match qualified applicants more accurately while eliminating the biases described above.
