What Are the Ethical Issues Associated with Bringing AI Into the Hiring Process?

Megan Foley
Jul 25, 2024


[Image: outlines of human figures forming the spokes of mechanical gears against a dark blue background. Image by Wynn Pointaux from Pixabay]

AI and its use in hiring and recruiting have been on my mind a lot lately. First, because I am an HR professional and I pride myself on staying current with the latest technology; in fact, I used GenAI in my last job before becoming a full-time student in January. Second, because I am applying for jobs, and many of the applications ask whether I want to opt out of AI screening.

The topic of AI in hiring matters because current employees and potential job candidates are deeply concerned about how, specifically, AI will affect them and whether it will discriminate against them. Companies want to find the best candidates and are always looking for more efficient ways to do so, and their legal teams want to ensure they do it fairly and without risk. People want to find meaningful work and to join organizations that are diverse and engaging and that use fair methods for finding talent.

Vendors creating AI-based software for the hiring process claim their products improve the efficiency of recruiting while eliminating bias. Others have raised concerns about the preferences built into these systems by the data they are trained on. In one well-known case, Amazon built an AI recruiting tool using past job performance data from a workforce made up predominantly of white men, and the resulting algorithm favored white male applicants. Haight and Johnson developed a tool to help organizations determine how to implement AI recruiting technologies ethically, and they group these ethical concerns into four major categories: privacy concerns, performance of AI, transparency and accountability, and job concerns (Haight & Johnson, 2023, pp. 1–3).

Under this ethical framework, privacy concerns include how data is stored, collecting information that is irrelevant to the job, gathering data from sources that may be inappropriate for human resources to access, and sharing highly personal data without candidate consent. On the performance of AI, these tools have a history of showing bias, because data drawn from current and past employees carries the risk of perpetuating privilege. Many questions have also been raised about AI's accuracy and validity in recruiting and assessment, and many tools have not been scientifically validated. The central question in transparency and accountability is who is responsible: the maker of the tool or the organization deploying it. Many creators of AI tools are unwilling to share their algorithms, which can lead to a lack of trust. Finally, some recruitment professionals worry about keeping their jobs as the use of AI increases (Haight & Johnson, 2023, pp. 3–6).

Check out my other blog posts, where I look at new laws related to AI and hiring and how organizations can set policies for AI adoption.

Reference:

Haight, A. M., & Johnson, E. S. (2023). Coping with Ethical Challenges in AI Recruiting. Handelshøyskolen BI.

Written by Megan Foley


Student at the University of Minnesota wrapping up a Bachelor's Degree in Multidisciplinary Studies with a focus on Business, Human Resources, and Diversity.