AI Algorithms in Hiring Processes

Burak Bakkaloglu
8 min read · Jan 15, 2022

AI entered enterprise decision-making processes some time ago. Human Resources, or the People Function as it is now called, has been late to the game in adopting AI in its processes.

The exact market size of HR tech is debatable at best, with estimates for 2021 ranging between USD 17bn and USD 24bn globally, and projected annual growth (CAGR) of 6% to 12% through 2028[1]. This growth is mainly fueled by AI-based applications that promise companies efficiency and effectiveness. HR tech comes in the form of chatbots, automated payroll processes, learning management systems, talent development applications, career management, hiring, and all other areas of HR. According to SHRM[2], “Eighty-eight percent of companies globally already use AI in some way for HR, with 100 percent of Chinese firms and 83 percent of U.S. employers relying on some form of the technology.” And that was in 2019.

AI in hiring decisions is the most debated AI usage in HR. Since the widely reported discrimination incident with Amazon’s hiring AI in 2018[3], the topic has received increasing attention, and many academics and regulators have raised concerns. In late 2021, New York City put in place legislation that requires employers to inform job applicants if and how they use artificial intelligence (AI) technology in hiring decisions[4], and the U.S. Equal Employment Opportunity Commission (EEOC) has since announced its focus on the topic[5].

In what follows, I will go through the main problem areas of AI usage in hiring and try to articulate ways to avoid the bias and discrimination that can arise through AI.

The context: How is AI used in Hiring

Hiring, or Talent Acquisition, has some fundamental steps: talent attraction, talent sourcing, screening, interviewing, and the offer process. Even though AI can be used in all of these stages, HR tech has focused on talent sourcing and screening as the primary AI use areas.

AI for Talent Sourcing:

Talent sourcing is the stage where companies source the best-fitting profiles for their vacancies. Talent sourcing can be reactive, where the tool only ranks and filters incoming applications, or proactive, where it searches for passive candidates and approaches the best fit. AI-based tools are used in both cases. With AI tools, companies aim to:

1. Go through vast numbers of applications in a short time and filter out “irrelevant” profiles without human interaction, creating large efficiencies.

2. Decrease bias during the sourcing of the best fit, as in theory AI should provide consistent, fair, and objective selection.

3. Identify and reach out to talent that would most likely be missed by humans.

AI includes several different research areas, such as machine learning (ML), speech and image recognition, natural language processing (NLP), and facial expression processing (FEP) (Kaplan and Haenlein 2019; Paschen et al. 2020). Most hiring AI mechanisms are in reality machine learning algorithms, which learn from the data presented to them and predict or suggest an outcome for new cases. Machine learning algorithms learn in three ways: supervised, unsupervised, or reinforcement learning. Supervised learning is based on input data paired with a defined set of desired outputs: the algorithm builds correlations between the data and the outcomes it was shown, then sets out to solve new cases. Unsupervised learning, by contrast, finds structure (such as clusters) in data without predefined output labels. In reinforcement learning, the algorithm doesn’t have a defined input; it is “let loose” in a data environment and learns trends and correlations through a defined reward mechanism[6].

Hiring or Talent Acquisition algorithms learn mostly in a supervised fashion. The model is fed thousands of candidate CVs as input, with the candidates who “got the job” labeled as the desired outcome. Learning from those cases, the model devises its own decision rules and decides which profiles best fit a given vacancy.
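The supervised setup described above can be sketched in a few lines. This is a toy illustration only: the CV texts, labels, and model choice (TF-IDF features plus logistic regression, a common text-classification baseline) are my assumptions, not any vendor’s actual method.

```python
# Toy sketch: a supervised "hiring" model trained on past CVs (input)
# and whether each candidate was hired (desired outcome).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented historical data; real systems train on thousands of CVs.
past_cvs = [
    "python machine learning 5 years data pipelines",
    "java backend microservices 7 years cloud",
    "marketing social media campaigns branding",
    "retail sales customer service cashier",
]
was_hired = [1, 1, 0, 0]  # past decisions, historical biases included

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_cvs, was_hired)

# The model now scores a new applicant by similarity to past hires.
new_cv = "python data engineering 4 years"
hire_probability = model.predict_proba([new_cv])[0][1]
```

Note what the model never sees: any definition of merit. It only learns what past decisions looked like, which is exactly why past bias carries forward.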

This is where the concerns start. Learning from past practices, the algorithms will learn the “biased” decisions of the past, then repeat and amplify those biases at scale. This is what happened with Amazon’s hiring AI in 2018: it favored men over women when deciding on the final candidates.

Even though hiring software companies state that their algorithms are “ethical” and do not discriminate against any minority groups, there is good reason to doubt that statement.

Let’s look closer at the selection criteria. AI algorithms look at a few criteria to match candidates to vacancies:

1. Skill match: the candidate’s stated skills vs. the skills needed for the vacant role. This is usually a simple matching of the skills candidates self-report in their CVs or social media profiles (like LinkedIn) against the skills the hiring manager set at the start of the hiring process. More complex algorithms can find the skills of previously selected candidates for similar roles and look for matches on those skills even when the hiring manager hasn’t input them.

2. Relevant experience and job title match. Algorithms mostly look simply at the match between the job titles on the candidate’s resume and the title of the vacant role. More years in a similar job title and an exact title match usually mean a higher score.

3. Other keyword matches. Any other keywords the recruiter or hiring manager identified are also taken into account.

4. Unlike the first three criteria, which are mostly simple matching exercises, the algorithms can also use criteria hidden from the recruiter. For example, if past hired candidates all come from a specific university, the algorithm will probably learn this and rank future candidates from that university higher than others.
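The first three criteria above amount to overlap scoring, and can be sketched as follows. The field names and weights are invented for illustration; real products combine many more signals.

```python
# Hypothetical sketch of criteria 1-3: naive skill/title/keyword overlap.
def match_score(candidate: dict, vacancy: dict) -> float:
    """Score a candidate against a vacancy by simple set overlap."""
    cand_skills = set(candidate["skills"])
    req_skills = set(vacancy["required_skills"])
    skill_match = len(cand_skills & req_skills) / max(len(req_skills), 1)

    # Criterion 2: exact job-title match (real tools also weigh tenure).
    title_match = 1.0 if candidate["title"] == vacancy["title"] else 0.0

    # Criterion 3: recruiter-supplied keywords found in the CV text.
    keywords = set(vacancy.get("keywords", []))
    cv_words = set(candidate["cv_text"].lower().split())
    keyword_match = len(keywords & cv_words) / max(len(keywords), 1)

    # Arbitrary illustrative weights.
    return 0.5 * skill_match + 0.3 * title_match + 0.2 * keyword_match

candidate = {
    "skills": ["python", "sql"],
    "title": "data analyst",
    "cv_text": "Experienced data analyst with python and dashboards",
}
vacancy = {
    "required_skills": ["python", "sql", "tableau"],
    "title": "data analyst",
    "keywords": ["dashboards"],
}
score = match_score(candidate, vacancy)
```

Notice that the score rewards the *presence of words*, not the actual ability behind them, which leads directly to the first risk below.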

Possible risks with those criteria:

1. Savviness in CV or social media profile design becomes the deciding factor in getting shortlisted by the algorithm. Candidates’ skill at strategically placing keywords in their profiles matters more than the actual skills needed for the role.

2. If an algorithm is trained on historical employment data that embeds an implicit bias, say, one favoring white men over Hispanics, then even without being fed data on gender or ethnicity, the algorithm may recognize patterns that expose an applicant as a member of a protected group which historically was less likely to be chosen for an interview. This can systematically disadvantage certain groups even if the designer has no intention of marginalizing people based on these categories and the algorithm is never directly given this information (Barocas and Selbst 2016; Kochling and Wehner 2020). Example: some algorithms have decoded gender and ethnicity differences from applicants’ profile pictures and, using the pictures of previously selected candidates, introduced an undetected bias based on gender and ethnicity. Even though gender and ethnicity are deliberately withheld to avoid bias, the algorithm finds proxy correlations. Pictures are the simplest example, but algorithms can also detect group differences from the language used in profiles, applicants’ home addresses, application times during the day, and so on. Since the algorithms are mostly “black boxes” that lack transparency about the criteria they use, it is practically impossible for HR professionals to proactively address these proxies.
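Even when the model itself is a black box, its *outcomes* can be audited. One widely used check from US employment selection practice is the “four-fifths” (80%) rule: if the selection rate for any group falls below 80% of the highest group’s rate, adverse impact is flagged. The group names and counts below are invented for illustration.

```python
# Hedged sketch of a four-fifths (80%) rule check on selection outcomes.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns rate per group."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def adverse_impact_ratio(outcomes: dict) -> float:
    """Ratio of the lowest group selection rate to the highest.
    A value below 0.8 flags possible adverse impact."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Invented numbers: 45 of 100 selected in one group, 25 of 100 in another.
outcomes = {"group_a": (45, 100), "group_b": (25, 100)}
ratio = adverse_impact_ratio(outcomes)
flagged = ratio < 0.8
```

This does not explain *why* the model discriminates, but it lets HR teams detect proxy bias from results alone, without opening the black box.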

3. Employees and applicants do not perceive algorithm-based hiring decisions as fair and objective, and they demand human decision making in hiring processes (Kaibel et al. 2019; Langer et al. 2019; Lee 2018). The perceived unfairness of the hiring process affects employee engagement, turnover rates, individual productivity, and employer branding (Walker 2012; Lee 2018; Tambe et al. 2019; Cappelli et al. 2020).

AI for screening:

Companies increasingly use algorithms to screen applications. After the sourcing stage, companies usually end up with a “long list” of applicants. Screening is the process in which the most relevant applicants are short-listed for hiring manager and HR interviews. The size of the long list varies between companies; it is reasonable to assume somewhere between 8 and 15 applicants. Most companies take 3 to 5 candidates into the interview process.

AI is used in two main ways here: analyzing video recordings of the applicants, or using games as part of the screening. For now, most companies use recruiters together with the AI solution; in the future, AI may take over the whole process.

AI in Video Recording:

There are numerous HR tech companies focusing on recorded video interviews. Applicants are given a set of questions, or a bot asks them the questions; their recorded answers are used to select the better-fitting candidates. Most companies still have recruiters and hiring managers review the recordings and do the screening. In the past few years, however, AI-based companies have also begun providing analyses of the videos based not only on the content of the answers but also on the applicants’ facial expressions, voice tone, the stress level in their voice, and similar signals. The algorithm then summarizes the applicant’s confidence in their stated skills, along with competencies and traits inferred from their facial expressions and tone.

The risks of using AI here are similar:

1. Almost always the algorithm is a black box: companies mostly don’t know how it interprets facial expressions or voice tone, or how valid its output is.

2. Algorithms, based on the training data they are fed, can hold biases against some groups without anyone knowing. Example: AI may be able to detect sexual orientation from human faces (Wang, Kosinski 2017[7]). What if an AI learns from its training data that one group of people with a specific sexual orientation should be selected over others, without ever identifying the signal as sexual orientation? Ethnicity characteristics are much easier to read from face recordings; what if the algorithm decides to favor a specific ethnicity based on past data?

3. Applicants, and employees in general, will be uncomfortable knowing that AI will decide on their competence levels and their compatibility with the job requirements.

The risks are very similar to those in the sourcing stage; however, facial expression processing (FEP) is a relatively advanced technique, which makes it even more difficult for humans to decipher what the algorithm looks for.

AI in Gaming:

Gaming in the screening stage has been used by some companies in the past few years, mostly for entry-level positions. The games are mostly designed to test cognitive abilities, though some gaming solutions can also test job-based skills and personality traits like risk aversion, attention to detail, and ability to focus.

Companies like pymetrics[8] claim to provide behavioral insights about candidates through gamification and the AI built into that process. Gamification has been around for at least the past five years and is now gaining more traction.

The threats are similar: a black-box AI, no links to peer-reviewed science connecting gamification to psychometrics, and possible hidden biases that we as humans can’t detect.

Summary and what I haven’t covered yet:

AI-powered matching of candidates and unconventional assessment tools like facial expression processing are great leaps forward in the hiring world. So far, biases have strongly influenced hiring processes and created an accumulated, unfair context for candidates. We need to make sure that algorithms and machine learning do not amplify the injustice, but instead become a force for neutrality and objectivity.

I am very hopeful about the future of hiring, powered by algorithms and partnered with humans. Just as in medicine, where we let AI provide input to expert radiologists while (for now) a competent human oversees the decision.

What I haven’t covered: the possible mitigations for those risks are not complex, but they require investment and focus. I will cover those in a future post.

Thank you.