7 AI blunders that occurred in HR

--

There seems to be increasing market pressure to accelerate the adoption of (generative) AI in HR processes. Everyone wants concrete results quickly, sometimes to the point of forgetting that this is not just another technology. Amazon, Facebook, Palantir, and others will remember this for a long time… So, to remind us why we need to prepare the ground carefully and build in reflection, even foresight, before getting our hands dirty, I offer you some food for thought, along with concrete examples: the 7 biggest AI blunders in HR. Let’s dive in!

AI & HR: some concepts before we begin

AI is a technology whose ultimate goal is to learn, think, and interact like a human being. But in reality, in 2024, we’re nowhere near that. As a technology, AI presents major risks of bias and errors, particularly in the HR field, for several fundamental reasons (see this Harvard Business Review article). To better understand this, we need to revisit (in non-technical terms) the design and functioning of an AI model.

Generative AI, which is making all the noise right now, consists primarily of models trained on colossal amounts of content (the equivalent of 26,000 years of reading in the case of ChatGPT, for example). In that content there is good and not-so-good, sometimes fake news, junk content, and even worse. This data is then processed by algorithms designed by engineers who carry their own stereotypes (and who are, more often than not, men), and these algorithms are simplified models of the world that tend to exclude whatever falls outside the accepted norm.

Typically, in HR, AI models are trained on historical data that may reflect and perpetuate existing societal biases. Logically, this can result in unintended discrimination in recruitment, promotion, or evaluation processes. For example, if a company has historically favored certain profiles for leadership positions, an AI algorithm might reproduce this pattern, even if it’s not intentional or desirable.
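
To make this tangible, here is a minimal sketch in Python, on deliberately invented data, of how a model trained on a skewed promotion history reproduces the skew for brand-new candidates (scikit-learn assumed; every feature and number below is fictitious):

```python
# A deliberately synthetic illustration (invented data): a model trained on
# biased historical promotion decisions reproduces the bias when scoring
# brand-new candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Invented features: a competence score (0-100) and gender (1 = man, 0 = woman).
competence = rng.uniform(0, 100, n)
is_man = rng.integers(0, 2, n)

# Biased history: at EQUAL competence, men were promoted more often.
promoted = (competence + 15 * is_man + rng.normal(0, 10, n)) > 70

X = np.column_stack([competence, is_man])
model = LogisticRegression(max_iter=1000).fit(X, promoted)

# Two identical candidates who differ only by gender.
print("P(promotion | woman, competence=75):", model.predict_proba([[75, 0]])[0, 1])
print("P(promotion | man,   competence=75):", model.predict_proba([[75, 1]])[0, 1])
# The man gets a visibly higher score for the same competence:
# the model has learned the historical bias, not merit.
```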

But what is trickiest in HR is that AI algorithms struggle to grasp the nuances and complexity of human interactions, which are essential in the HR field. They offer recommendations based on statistical correlations rather than causation, which can lead to erroneous conclusions. For instance, an AI might associate performance with irrelevant factors, like the frequency of certain keywords in evaluations, rather than with actual indicators of competence and productivity.
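
The correlation trap is easy to reproduce on toy data: in the hypothetical sketch below, a hidden factor drives both the keyword count and the rating, so the two correlate strongly even though one causes nothing about the other (all numbers invented):

```python
# Synthetic illustration of correlation without causation: a hidden factor
# (manager verbosity) drives BOTH the keyword count in an evaluation and the
# rating, so the two correlate strongly even though keywords cause nothing.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

verbosity = rng.normal(0, 1, n)                        # hidden confounder
keyword_count = 5 + 2 * verbosity + rng.normal(0, 1, n)
rating = 3 + 0.5 * verbosity + rng.normal(0, 0.5, n)   # keywords play no role

print("corr(keywords, rating):", round(np.corrcoef(keyword_count, rating)[0, 1], 2))
# ~0.63: a naive model would 'learn' that keyword-heavy evaluations mean
# better performers, an entirely spurious association.
```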

There’s a final layer that increases potential biases: the lack of transparency and explainability of certain AI models. This means we can’t say why or how an AI made one choice over another. Imagine the problem for CV selection! All in all, we have a series of bias-generating layers amplifying each other, with no direct way (for now) of reconstructing the AI’s ‘reasoning’.

In reality, for an AI to function in HR with the least possible bias, we need to dictate its conduct, generally with what we call an expert model (we’re preparing an article on how to implement this on an AI system). If the AI is to analyze CVs, we must explain precisely how it should do so. And that’s a problem in HR, since practically no company currently has a documented procedure for manually analyzing a CV. From there, how can we make a machine do it soundly? You’ll understand that before rushing into the tech, it will be necessary to formalize the best practices of the processes we wish to automate. Welcome to the wonderful world of AI for HR, where it doesn’t start with AI at all. QED :)
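
To give an idea of what such an ‘expert model’ can look like once the procedure is actually written down, here is a hypothetical sketch where every criterion and weight is documented in the code itself, and every score comes with an audit trail (all criteria are invented for illustration):

```python
# A hypothetical sketch of an 'expert model': the evaluation criteria are
# written down explicitly BEFORE any automation, so every score can be
# traced back to a documented rule. All criteria and weights are invented.
from dataclasses import dataclass

@dataclass
class CV:
    years_experience: int
    skills: set[str]

RUBRIC = {
    "required_skills": ({"python", "sql"}, 40),   # (expected values, weight)
    "nice_to_have": ({"airflow", "dbt"}, 20),
    "min_experience_years": (3, 40),
}

def score_cv(cv: CV) -> tuple[int, list[str]]:
    """Return a score plus a human-readable audit trail."""
    score, trail = 0, []
    required, w = RUBRIC["required_skills"]
    if required <= cv.skills:                     # subset test
        score += w
        trail.append(f"+{w}: all required skills present")
    nice, w = RUBRIC["nice_to_have"]
    matched = nice & cv.skills
    gained = int(w * len(matched) / len(nice))
    score += gained
    trail.append(f"+{gained}: nice-to-have skills {sorted(matched)}")
    years, w = RUBRIC["min_experience_years"]
    if cv.years_experience >= years:
        score += w
        trail.append(f"+{w}: {cv.years_experience} years >= {years} required")
    return score, trail

score, trail = score_cv(CV(5, {"python", "sql", "dbt"}))
print(score, *trail, sep="\n")  # 90, with the reason for every point
```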

7 real stories of AI blunders in HR

The hasty adoption of AI in HR processes has sometimes led to embarrassing, even scandalous situations that have made headlines. These case studies serve as a stark reminder of the importance of a thoughtful and ethical approach to integrating these technologies. Among the most striking examples, many HR professionals already know about Amazon’s case, revealed in 2018, where its recruitment AI had to be abandoned because it systematically discriminated against applications from women, having been trained on biased historical data. But there are others, many others! Here I offer you the 7 main AI blunders in HR.

Note: This is in no way meant to attack or point fingers at the actors mentioned below. First of all, at least they tried to innovate! They are presented as examples to remind us of the importance of taking an essential step back when integrating AI models into HR processes. And above all, these examples have been publicized, but almost the entire market could be featured here, with CV sorters and other mechanical gimmicks of the last decade. So chill, we’re here to learn from past mistakes :)

Amazon and the biased recruitment algorithm (2018)

  • The use case: Use of an AI algorithm to automatically sort and evaluate candidate CVs. The algorithm favored men for technical positions, due to historical biases in the data used to train the model.
  • Best practice: Implement a human validation step for AI decisions, use diverse and balanced training data, and supervise the AI with independent control checks to detect and correct potential biases (e.g., the Conditional Demographic Disparity test, sketched after this list).
  • Learn more: Reuters Article
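
For the curious, here is a minimal sketch of a disparity check in the spirit of the CDD test mentioned above; the data and the aggregation shown are simplified and invented for illustration:

```python
# A minimal sketch in the spirit of the CDD test: within each stratum (here,
# the job family), compare a group's share among REJECTED candidates with its
# share among ACCEPTED ones, then aggregate. All data below is invented.
from collections import defaultdict

# (job_family, gender, accepted) -- hypothetical screening outcomes.
outcomes = [
    ("engineering", "F", False), ("engineering", "F", False),
    ("engineering", "F", True),  ("engineering", "M", True),
    ("engineering", "M", True),  ("engineering", "M", False),
    ("sales", "F", True), ("sales", "F", True),
    ("sales", "M", False), ("sales", "M", True),
]

def demographic_disparity(rows, group):
    accepted = [g for _, g, ok in rows if ok]
    rejected = [g for _, g, ok in rows if not ok]
    p_rej = rejected.count(group) / len(rejected) if rejected else 0.0
    p_acc = accepted.count(group) / len(accepted) if accepted else 0.0
    return p_rej - p_acc  # > 0: over-represented among rejections

strata = defaultdict(list)
for row in outcomes:
    strata[row[0]].append(row)

# CDD: per-stratum disparities, weighted by stratum size, then averaged.
cdd = sum(len(r) * demographic_disparity(r, "F") for r in strata.values()) / len(outcomes)
print(f"CDD for women: {cdd:+.2f}")  # flag for human review if far from 0
```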

HireVue and the biases of its video interview scoring algorithm (2019)

  • The use case: Use of an AI algorithm to evaluate candidates from interview videos, analyzing facial expressions and voice. The algorithm was criticized for its potential biases, particularly against people with disabilities or from minority groups.
  • Best practice: Limit AI use to an advisory rather than decision-making role, always combining its results with human evaluation; regularly audit algorithms to detect biases; and above all, keep everything up to date with GDPR and AI Act obligations!
  • Learn more: The Washington Post

IBM and its biased facial recognition AI (2020)

  • The use case: Use of facial recognition technology in various contexts, including recruitment and security. Facial recognition systems showed racial biases, particularly affecting the identification of non-white individuals.
  • Best practice: Avoid facial recognition technologies in HR processes and favor more neutral and objective evaluation methods. An easy call in France, where such use is prohibited anyway!
  • Learn more: New York Times

iTutorGroup and age discrimination (2023)

  • The use case: Use of an AI algorithm to pre-select candidates for online teaching positions. The algorithm was accused of systematically rejecting older candidates, leading to age discrimination. iTutorGroup agreed to pay $365,000 to settle the resulting EEOC lawsuit.
  • Best practice: Implement regular non-discrimination checks in recruitment algorithms, verifying the distribution of results by age group (see the sketch after this list). Be careful, though, not to drift into statistics that are themselves prohibited!
  • Learn more: Bloomberg
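
As promised in the list above, a hypothetical sketch of such a check, computing selection rates by age band and the resulting adverse impact ratios (the ‘four-fifths rule’ used here is a common yardstick, not the only one; all candidate data is invented):

```python
# A hypothetical periodic check on an automated screener: selection rate per
# age band, plus the adverse impact ratio against the best-served band (the
# EEOC's 'four-fifths rule' is a common yardstick). All data is invented.
candidates = [
    {"age": 29, "selected": True},  {"age": 34, "selected": True},
    {"age": 38, "selected": True},  {"age": 41, "selected": True},
    {"age": 45, "selected": False}, {"age": 52, "selected": False},
    {"age": 58, "selected": False}, {"age": 61, "selected": False},
]

def age_band(age: int) -> str:
    return "under_40" if age < 40 else "40_plus"

bands: dict[str, list[bool]] = {}
for c in candidates:
    bands.setdefault(age_band(c["age"]), []).append(c["selected"])

rates = {band: sum(sel) / len(sel) for band, sel in bands.items()}
best = max(rates.values())
for band, rate in rates.items():
    ratio = rate / best
    flag = "  <-- review: below the 4/5 threshold" if ratio < 0.8 else ""
    print(f"{band}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")
```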

Starbucks and its poor AI staffing prediction (2024)

  • The use case: Use of AI software by Starbucks to predict staffing needs. The tool was publicly criticized for causing staffing errors that went unquestioned.
  • Best practice: Integrate a feedback mechanism from employees and managers into the prediction system, allowing real-time adjustment of forecasts, and include broader contextual factors in predictions.
  • Learn more: Bloomberg

Facebook and the biased job ad targeting algorithm (2019)

  • The use case: Use of algorithms to target job ads on the Facebook platform. The algorithms were accused of distributing job ads in a discriminatory way based on age and gender; Facebook faced several lawsuits on the subject in 2018 and 2019, the year it withdrew the algorithm.
  • Best practice: Use targeting criteria based solely on skills and experience, completely excluding sensitive demographic data. In France, such targeting based on age or gender is not allowed.
  • Learn more: Princeton University

Palantir and unintentional racial discrimination (2016)

  • The use case: Use of application filtering software. The US Department of Labor accused Palantir of systematic discrimination against Asian candidates going back to 2010, and the case went to federal litigation, under threat of termination of its government contracts.
  • Best practice: Implement an application anonymization system, removing all information that could indicate ethnic origin before evaluation (a sketch follows below).
  • Learn more: US Department of Labor
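
And a minimal sketch of what such anonymization can look like in practice; the field names and token scheme are assumptions for illustration:

```python
# A minimal sketch of application anonymization before any scoring step:
# fields that can reveal ethnic origin, gender, or age are stripped, and an
# opaque token allows re-identification only AFTER the decision. All field
# names are hypothetical.
import hashlib

SENSITIVE_FIELDS = {"name", "photo_url", "birth_date", "nationality", "address"}

def anonymize(application: dict) -> dict:
    """Keep job-relevant fields; replace identity with an opaque token."""
    token = hashlib.sha256(application["name"].encode()).hexdigest()[:12]
    clean = {k: v for k, v in application.items() if k not in SENSITIVE_FIELDS}
    clean["candidate_token"] = token
    return clean

app = {"name": "Jane Doe", "photo_url": "https://example.com/jd.jpg",
       "birth_date": "1975-04-02", "address": "12 rue X, Paris",
       "skills": ["python", "sql"], "years_experience": 8}
print(anonymize(app))
# -> {'skills': ['python', 'sql'], 'years_experience': 8, 'candidate_token': '...'}
```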

Since then, all these actors have taken corrective actions to restore the legality and ethics of their processes. And given the bad publicity, most have greatly strengthened their ethical frameworks and systematic controls to ensure healthier automation of their HR processes.

If you know of other proven and documented examples, don’t hesitate to send them to me via my LinkedIn messaging!

The reasons for slow AI adoption by HR

Beyond these blunders, which can rightly give pause, there is a broader context to the slow adoption of AI by HR. I would even say that this context explains most of the blunders mentioned above. I recently published a detailed article in the Harvard Business Review laying out these 5 reasons. I obviously invite you to read it, and in the meantime here is a preview of the 5 major challenges to overcome without delay:

1) Data confidentiality

This is the crux of the matter! HR data is ultra-sensitive and GDPR is watching closely. And now even the AI Act is being added on top (I’ve written a concrete article on the subject if it can help you). In short, security will need to be seriously beefed up to avoid embarrassing leaks.

2) Ethics and legal issues

We’ve seen it above with some of the blunders: AI that discriminates, no thank you! It’s crucial to minimize algorithmic biases and keep humans in the loop. The legal framework is still unclear, but it’s going to move quickly; and above all, nothing forbids you from adopting good practices and your own documented internal ethical frameworks!

3) Data quality and practices

Garbage in, garbage out, as they say… HR data is often a real mess, either because it touches on human elements that are difficult to quantify, or simply because the data culture isn’t there (and often it isn’t). A major cleaning and standardization project is necessary before unleashing AI on it. I’ve written an article on a concrete and structured approach to using data in HR. AI won’t work magic if the right practices and data aren’t already defined. And don’t forget that ChatGPT was trained on one of the most bias-laden corpora in human history!
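
To give a small, concrete taste of that cleanup work, here is a hypothetical sketch that normalizes free-text job titles and quarantines incomplete records before any model ever sees them (all mappings and fields are invented):

```python
# A hypothetical taste of the cleanup work that comes before any AI: mapping
# free-text job titles to a controlled vocabulary and quarantining records
# too incomplete to use. All mappings and fields are invented.
TITLE_MAP = {
    "sr. software eng": "Software Engineer",
    "software engineer ii": "Software Engineer",
    "hrbp": "HR Business Partner",
}

REQUIRED = {"employee_id", "job_title", "hire_date"}

def clean_record(rec: dict):
    missing = REQUIRED - rec.keys()
    if missing:
        # Quarantine rather than let a model guess at the gaps.
        print(f"quarantined {rec.get('employee_id', '?')}: missing {sorted(missing)}")
        return None
    norm = rec["job_title"].strip().lower()
    rec["job_title"] = TITLE_MAP.get(norm, rec["job_title"])
    return rec

rows = [
    {"employee_id": 1, "job_title": "Sr. Software Eng", "hire_date": "2019-03-01"},
    {"employee_id": 2, "job_title": "HRBP"},  # missing hire_date
]
print([r for r in map(clean_record, rows) if r])
```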

4) AI skills in HR

Let’s face it, not everyone in HR is a geek. Training will be necessary, tech profiles will need to be recruited, and partners will need to be carefully chosen to avoid mistakes. Today, these profiles are often in innovation or IT departments, and that’s good. But the HR function will need to equip itself, because HR data is particular and requires more depth of analysis. In the United States, for example, they talk about ‘social data scientists’, experts in algorithms and data who are also specialized in psychology or sociology.

5) Added value and cultural buy-in

Quantifying the ROI of AI in HR is no small feat, and it’s often complicated for an HR Director to convince their executive committee to allocate more budget for innovation. On the other hand, convincing all employees that AI won’t dehumanize the company is a real cultural challenge. Caught between a rock and a hard place, the HR function is under strong pressure with AI, and it’s probably the function in the company where implementation will be the most complex.

But let’s not despair! These obstacles are not insurmountable. Generative AI has enormous potential to transform HR. We just need to be patient, prepare well, and move forward step by step. If you’re at this point, I can already offer you the first three very concrete steps to get started:

  1. An ethical charter for AI in HR,
  2. The FORMALIZED review of your practices, turning them into good practices ready for digital :)
  3. An impact study of AI on the transformation of processes, roles, and professions in the HR function: it’s IN-DIS-PEN-SABLE!!!

Of course, in parallel, it is highly recommended to conduct experiments, but this implies defining in advance what you are trying to prove and measure. An experiment is meant to be a ‘learning’ exercise rather than a productive one. In other words, we try, we learn, and we adjust, and when we have improved enough, we prepare for larger-scale deployment (while continuing to measure; we never stop doing that!).

Conclusion

As you will have understood, it is essential to take a step back when adopting AI in HR processes. The examples of blunders presented earlier are not there to discourage innovation, but rather to remind us of the importance of a thoughtful and ethical approach. AI is not just another technology to deploy; it’s a paradigm shift that requires a profound transformation of business practices and cultures. The challenges are numerous, from data management to ethics to the development of new skills, but they are not insurmountable.

The future of HR is not a binary choice between human and machine. It’s about finding the right balance, where AI augments rather than replaces human capabilities. This human-machine synergy could well be the key to more efficient, fairer, and more human HR, provided we move forward with awareness and responsibility. I can’t stress it enough: the choices we make today determine the future we will have to manage tomorrow. So, more than ever, we need to project ourselves and take a step back to better understand the stakes of what we could concretely build today. And beware of those who want to make you move too fast on AI with shortcuts and tricks; it smells like a blunder ;)

— —

[Article created on 11 August 2024 by Jeremy Lamri, with the support of Claude 3.5 Sonnet and GPT-4o for structuring, enriching, and illustrating. The writing is primarily my own, as are most of the ideas in this article].

— —

Follow my news with Linktree

If you are interested in the combination of web 3 and HR for the future of work, I invite you to subscribe to the dedicated newsletter that I write on the subject, and to read the articles that I have written on the topic.
