AI in Hiring: The Promise, The Pitfalls, and The Path Forward

Michael Bagalman
Data Science Rabbit Hole
9 min read · Apr 26, 2024
Images by Michael Bagalman & Canva “Magic Media”

TL;DR

  • AI is rapidly becoming an essential tool in recruiting and hiring.
  • AI offers great benefits, but also runs the risk of systemic bias.
  • Governments at every level are looking into regulations; AI-recruiting companies, both new and established, are working to tailor their services.
  • The true potential of AI is not in quickly winnowing the candidate pool, but in being able to cast a wider net and find star employees from nontraditional backgrounds.

Introduction

AI has burst onto the scene like the iPhone on steroids. I wrote my master’s thesis on artificial neural networks back in the 90s and got my first job at Bell Labs based on that expertise; I’ve been a data scientist for a long time now, following the progress of machine learning and natural language processing. But even I am in shock at what generative AI is accomplishing today.

Companies are figuring out how to make the best use of AI at the same time as society struggles to adapt norms, regulations, and laws. Right now you may be selecting which AI system you want to use, but we have already entered an era in which AI systems choose the humans who will use them. AI is an active player in recruitment and hiring.

And that’s a big deal with potential consequences that we need to understand.

Danger, Will Robinson, Danger!

In the best-case scenario, AI can help us overcome human bias, yet the greatest danger in AI’s use for recruitment is doubling down on bias. Clearly, dealing with bias isn’t new. My own experience with my fellow humans serves well as an example. I’ve had hiring managers inadvertently write job descriptions using only male pronouns: “The senior data engineer will be responsible for ensuring all data pipelines have 99% uptime on reticulating splines. He will manage the two junior engineers who twist the dials and pull the levers that keep the data flowing.” Not very conducive to hiring women, is it?

Some years back I was working with a Chief Strategy Officer who wanted to hire a mid-level strategist; he wanted something like 5–7 years of experience. “We should hire a guy about 30 years old,” he told me. See any problems there?

And outside of concerns about the sort of bias that can wind up sending you to the EEOC penalty box, technical recruitment runs the risk of bias that simply misses out on good employees. If I tell the recruiter that I need a candidate with SQL skills, they look for “SQL” on the resume, but some candidates might write “MySQL” or “PostgreSQL” or even just talk about their database skills without directly addressing SQL. If I ask for someone with predictive modeling skills and a resume claims great experience with “xgboost, SVM, regression, random forest, and multilayer perceptrons,” then I want to talk to that candidate, but the recruiter might not bring them to my attention because “modeling” wasn’t written there.

But at least with a human they might think to ask me, “Hey… I haven’t been seeing anyone with SQL skills. Do you want to talk to any of these MySQL or T-SQL people?”
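A human recruiter can ask; a keyword filter can’t. But the filter can at least be built to match a family of equivalent terms instead of one literal string. Here is a minimal sketch of that idea in Python; the skill families are hypothetical stand-ins for what would need real curation, and this is a toy of my own, not any vendor’s actual matcher.

```python
import re

# Hypothetical synonym families; a real system would need far broader coverage.
SKILL_FAMILIES = {
    "sql": {"sql", "mysql", "postgresql", "t-sql", "sqlite", "mariadb"},
    "predictive modeling": {
        "predictive modeling", "xgboost", "svm", "regression",
        "random forest", "multilayer perceptron",
    },
}

def mentions_skill(resume_text: str, requirement: str) -> bool:
    """True if the resume mentions any term in the requirement's synonym family."""
    text = resume_text.lower()
    terms = SKILL_FAMILIES.get(requirement.lower(), {requirement.lower()})
    return any(re.search(r"\b" + re.escape(term) + r"\b", text) for term in terms)

resume = "Five years building pipelines on PostgreSQL; models in xgboost and SVM."
print(mentions_skill(resume, "SQL"))                  # True, via "postgresql"
print(mentions_skill(resume, "predictive modeling"))  # True, via "xgboost"/"svm"
```

Even this crude lookup surfaces the PostgreSQL and xgboost resumes that a literal grep for “SQL” or “modeling” would silently drop.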

Computers are Very Smart Dumb Machines

Algorithms do what you tell them to do, not what you want them to do. Generative AI systems are getting better, but they make decisions based on the data they trained on. And they trained on the Internet. Have you seen the Internet? Yuck.

This isn’t new. Reuters reported that Amazon had been working on algorithms to evaluate resumes as far back as 2014. The team discovered that the system was downgrading resumes from women, and in a showcase of how smart these dumb algorithms can be, explicitly blocking the system from downgrading based on sex simply led it to find patterns in word choice that distinguished between the sexes; for example, men were more likely to use terms like “executed.”
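To see how stubborn the problem is, here is a toy demonstration on synthetic data (my own sketch, not Amazon’s system): withhold the protected attribute from the model entirely, and a plain logistic regression still rediscovers it through a correlated word-choice feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                      # 0/1; never shown to the model
uses_executed = rng.random(n) < 0.2 + 0.5 * gender  # word choice correlated with gender
skill = rng.random(n)                               # a legitimate signal
# Biased historical labels: past hiring favored gender == 1.
hired = (skill + 0.8 * gender + rng.normal(0, 0.3, n) > 1.0).astype(int)

X = np.column_stack([skill, uses_executed.astype(float)])  # gender itself excluded
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "uses_executed"], model.coef_[0])))
```

The “uses_executed” feature picks up a large positive weight: the word acts as a proxy for the very attribute we removed.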

AI may generate bias through seemingly innocuous means. A candidate who speaks multiple languages could be a valuable hire in many situations, so letting AI take “languages” from resumes into account seems to make sense. But what happens if the AI starts downgrading resumes that list African languages, or downgrading candidates who speak Arabic, or upgrading those who speak Japanese or Hebrew? And if we program the AI to ignore languages, are we sure it will be able to distinguish between spoken languages and computer languages? If I need a Python programmer, the linguist well-versed in Urdu might not be able to help me.
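And if we do try to strip spoken languages out of the features, the screener first has to tell them apart from the programming kind. A minimal sketch of that triage step, with purely hypothetical allowlists:

```python
# Hypothetical allowlists; a real system would need far more coverage.
PROGRAMMING_LANGS = {"python", "java", "sql", "scala", "go", "r"}
SPOKEN_LANGS = {"urdu", "arabic", "japanese", "hebrew", "swahili", "french"}

def triage_language(token: str) -> str:
    """Decide whether a resume token is a skill to keep or a bias channel to drop."""
    t = token.lower()
    if t in PROGRAMMING_LANGS:
        return "keep"    # programming language: a concrete qualification
    if t in SPOKEN_LANGS:
        return "drop"    # spoken language: a potential proxy for ethnicity
    return "review"      # unknown: flag for a human

for tok in ["Python", "Urdu", "COBOL"]:
    print(tok, "->", triage_language(tok))   # keep, drop, review
```

Anything the allowlists miss, like COBOL here, falls to a human, which is exactly the kind of edge case the AI alone gets wrong.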

And how far do we go? If you send me your resume, is it OK that I look up your LinkedIn profile? Can I analyze your Twitter (OK… fine, “X”), Instagram, and Facebook feeds? They’re public, aren’t they? What is the expectation of privacy? Or of freedom to express ourselves outside the workplace without career consequences? Or of ever-elusive transparency, the ability to be told why we didn’t (or did!) get the job?

I could go on and on, but you likely already know the dangers.

Ethical Considerations

Keeping within what’s legal is table stakes. An AI is not sentient; it has no agency of its own, and so it has no moral accountability. The duty to use AI responsibly lies with us.

Immanuel Kant’s “categorical imperative,” the central concept of his deontological moral philosophy (laid out in his Groundwork of the Metaphysics of Morals rather than the better-known Critique of Pure Reason), states that one should act only in ways that could become a universal law that everyone follows. Kant believed human beings have inherent worth and should never be treated merely as instruments or means to an end. Candidates should be evaluated based on their skills, experience, and merit; their humanity and dignity should be respected throughout the hiring process.

In more modern times, John Rawls’ “veil of ignorance” is a thought experiment he uses in A Theory of Justice to argue for fair and equal treatment. The veil of ignorance asks us to imagine we are designing a just society without knowing what position we would end up occupying in that society: rich or poor, employer or employee, dominant majority or marginalized minority. Applied to recruitment, it asks employers to consider: how would you want to be treated if you were the applicant, without knowing your qualifications or demographics?

And then there’s the evergreen classic of the Golden Rule!

On top of all that, public opinion matters a lot. Even if you stay within the law and have a strong case that your recruitment policies are ethical, a single news item (true or not) about a screw-up in your recruitment process could bring public pressure to bear against your business, your products or services, and your stock price. As we used to say in advertising, “Perception is reality.”

Utilitarianism

Balance all of this against competitive advantage. If you can stay within the law and sift through ever larger numbers of resumes, relying on subtle data clues to find potential employees who are just a few percentage points better on some measures, and maybe a few percentage points cheaper to employ, why wouldn’t you? If you don’t do it, your business will lose to the business that does, won’t it?

These questions aren’t unique to AI and recruiting; you face them in every sphere. For your accounting team and your quarterly reports, there is wiggle room in financial reporting standards. How far are you willing to go to “manage” your reported numbers? If you have a problem with a product, how bad does it have to be before you start a recall? And how do you decide? Are you old enough to remember the Ford Pinto fiasco exposed by Mother Jones magazine? (Google it!)

Ancient history, you say? Read about the rushed development and downplayed safety concerns of the Boeing 737 MAX. Or the Peloton treadmill recall. Try googling “Wells Fargo AND fake accounts 2016” to get a little reminder of how easily humans leave their values behind in pursuit of moving the needle on a KPI past an arbitrarily assigned point. And if you’ve never heard of it, a quick search for “Uber AND Greyball” might be enlightening!

Of course we haven’t had a big scandal with AI and recruiting… yet. Just recently, in August 2023, the EEOC got a tutoring company to agree to a $365k settlement after accusing it of explicitly programming its software to reject candidates over a certain age. If in this day and age we are still dealing with that sort of nonsense, who is going to have time to monitor for the more subtle forms of bias?

So What’s the Bottom Line?

The bottom line is that no one knows how this is all going to shake out, so plan for changing plans and remain agile.

Leading recruitment platform Indeed this year conducted its first-ever round of layoffs but mostly left its AI team intact. Its CEO, Chris Hyams, has talked about a “cyborg” model of recruiting in which humans and AI work in tandem. Companies like RecruitBot are bringing AI to HR recruiters, even offering A/B testing in outbound recruiting campaigns. And if you don’t have children you might not be familiar with the incredible digital universe of Roblox, but they’ve put their money where their mouth is and created a career center within the Roblox online experience.

Many companies will likely need to rely on outside firms to help them navigate these waters. State legislatures are increasingly focused on oversight and regulation of artificial intelligence systems, prompted by the growing use of AI and concerns about its ethical impacts. In 2023 alone, over 25 states introduced AI legislation, with new laws focused on studying AI’s effects, preventing discrimination in governmental AI usage, and defining legal protections related to AI systems that may be used in areas like hiring.

Examples of new state laws include Connecticut requiring an inventory and assessment of AI systems used by state agencies to prevent discrimination, and Louisiana calling for a study of AI’s impacts. And it isn’t just states: if you don’t live near NYC, would you know that the Big Apple passed a city law requiring annual bias audits of AI recruitment systems? And who knows where the federal government will land on this. This wave of laws and proposals shows rising government interest in monitoring AI systems, including recruitment and hiring technologies, even though efforts remain piecemeal for now.

Ultimately we should seek to limit the effects of bias within AI not only for the legal and moral reasons, but because rejecting candidates for the wrong reasons shrinks the pool of potential hires. AI can undoubtedly improve recruiting efficiency by helping, say, a data science hiring manager like me filter out resumes that lack required skills in Python, SQL, or other concrete qualifications, but it can’t yet read the subtler clues reliably.

I have years of experience in this and can quickly tell which resumes are going to lead to great interviews and which aren’t, but when I try to explain the difference between two resumes to the HR recruiter in English, it feels like cleaning the Augean stables or fighting the Nemean lion. And even then, the most expert human managers still sometimes make bad hires. Do you expect better of AI?

And remember that this is a two-way street: Candidates are already working to game the system, finding ways to craft their resumes that hit the right notes for the AI. Undoubtedly a whole industry is popping up to help them. I always want a human being as the primary reviewer of resumes; after all, even as a child I learned the important lesson: It takes one to know one.

If anything, AI might help us widen the funnel rather than narrow it further. The great potential of AI in recruitment is the ability to find nontraditional candidates who turn out to be great hires. When I was leaving my first leadership role at a company to start a new opportunity, one of my direct reports, who had preceded me at the business, asked to meet with me. She was a statistical data analyst whose background was an undergrad degree in pharmacy and an MBA in marketing.

“I just needed to tell you,” she said, “that I know you would never have hired me if I had applied for the job after you got here.”

With hardly a moment’s hesitation I replied, “You’re right. I wouldn’t have even interviewed you. And I’d have been wrong.” A few years later I hired her for my consulting firm.

The backgrounds of some of my best data science hires since then include a psychologist, a former high school teacher, and a former professional dancer.

Conclusion

Don’t delegate your recruiting to AI; work with your AI as a partner to find the diamonds in the rough, the needles in the haystack, and the star employees amidst the large and diverse applicant pool. Recruiting has never been easy and AI isn’t going to make it easy, but it might help you build a better team.

Without technology, recruiting is like fishing with your hands, always struggling to snag a slippery fish in the shallows. Technology gives you a boat and a net. Cast your net wide.

Talk to Me

These are my thoughts. I’m not always right. Not by a long shot. I welcome your insights and comments.


Michael Bagalman is a data scientist and founder of the Data Science Rabbit Hole.