Why was your job application rejected? Bias in Recruitment Algorithms (Part 1)

Merve Hickok
14 min read · Jul 22, 2020

Originally published in the Montreal AI Ethics Institute’s newsletter on July 12, 2020. This is part 1 of a 2-part series on bias in recruitment algorithms.

Photo by Christina @ wocintechchat.com on Unsplash

Humans are biased. The algorithms they develop and the data they use can be too. But what does that mean for you as a job applicant, whether you are coming out of school, looking to move up to the next rung on the career ladder, or considering a change in roles or industry?

In this two-part article, we walk through each stage of recruitment (targeting, sourcing/matching, screening, assessment, and social background checks) and explore how some of the AI-powered commercial software used at each stage can lead to unfair and biased decisions.

In the fast-moving world of technology, AI in particular has expanded into many domains of our personal and professional lives. Whether you are aware of it or not, algorithmic decision-making systems are now widely used by companies and governments to make decisions on creditworthiness, housing, recruitment, immigration, healthcare, criminal justice, pricing of goods, welfare eligibility, and college admissions, to name just a few. Despite the high stakes and high impact of these decisions on individuals (not to mention society as a whole), the landscape still greatly lags behind in…

