Your lazy brain: Why AI should make hiring decisions

Daniel Byrne
9 min read · Oct 9, 2018


Here’s a number: 45.

Now I want you to guess how many seconds, on average, a CV is reviewed for before a decision is made to progress it or not. So, what is your guess? 50 seconds, 40, 30? The actual answer is six seconds per CV. That’s right: if you are applying to the average job, it’s highly likely that your application is reviewed in less time than it takes to read this sentence.

Recruiters nowadays face a fundamental challenge: high volumes of applications and little time to identify those that best fit the role. It’s common for 250+ candidates to apply for a single job, and there are often many positions up for grabs. This, combined with decreasing or stagnant resources available to recruiting teams, means that each candidate has a smaller and smaller opportunity to impress, and their CV is often the only way to do so. Candidates are also at the mercy of human bias and irrational thinking, so meaningless details can affect their chances of pursuing their dream career.

You might be wondering about the number 45. Well, there’s a good chance that this seemingly random number affected your guess at the beginning of this article. This cognitive bias is known as the ‘anchoring effect’. Suggesting a high ‘anchor’ such as 45 (even though it has no relation to the question) likely made you respond with a higher answer than if you had been given a low anchor such as 2. The anchoring effect is one example of your brain factoring in irrelevant information and using mental shortcuts to make quicker decisions. There are over 150 documented cognitive biases, all of which cloud our decisions and lead to worse choices. If a cereal is at eye level in a supermarket, you’re more likely to buy it than one at floor level. If you have a friend called Tim who’s a great salesman, you’ll be more likely to hire a man called Tim for your open sales position. Your CV is more likely to be progressed if it’s reviewed after lunchtime rather than before. The truth is, humans are highly emotional creatures, not the rational beings that classical economics assumes. Irrelevant factors are making our hiring processes inefficient and ineffective.

System 1 vs System 2

Your brain has two systems for making a decision: System 1 and System 2. System 1 is intuitive and makes quick choices. These are automatic, unconscious, and require very little effort. System 2 is more rational, taking longer to make more accurate decisions and requiring much more effort — for example, trying to solve a maths problem like 19 × 43. Unfortunately, our brains are lazy and default to System 1 for most decisions. As you may have guessed, System 1 is highly prone to cognitive biases, which means that a lot of the time we get things wrong. Since it takes six seconds to review the average CV, it’s safe to say that this is completed almost entirely by the System 1 process, opening it up to the many pitfalls that accompany this way of thinking. The CV screening stage is seen as the best way to reduce the number of applicants to a more manageable pool, but having a poor and biased system at the start of the recruiting funnel means that everything downstream will be worse off. Some of the highest-potential candidates aren’t hired. Huge amounts of money are spent training the wrong people, and attrition levels increase as both the company and the candidate realise that they are not a good match.

This video highlights System 1 vs System 2 brilliantly: Brain Tricks via YouTube

How we get things wrong:

Behavioural economists describe inconsistent decisions as ‘noise’: variance in judgements that should be identical. In recruiting this is apparent when one recruiter progresses a candidate but another doesn’t, because they have different ideas of what a strong CV looks like. ‘Noise’ in this case is error, and the margin of error is surprisingly high. Artificial intelligence and machine learning algorithms are far more consistent than human decision-making. Machines do not care what time of day it is, whether they saw the number 45, or whether they already know a man called Tim. They simply make a decision based on the data they are given. The same job application will get the same yes or no without regard to any other information, and the success criteria they apply are always the same. This has huge consequences for making hiring decisions more transparent and accurate.

Bias makes our decision-making not only inaccurate, but unfair. There are numerous studies showing systematic bias against minority groups and women. Some of this can be explained by unconscious bias on the recruiter’s part, something that is extremely difficult or perhaps even impossible to remove. Some can be explained by the different ways that CVs are written: men are much more likely to exaggerate their achievements, making their profiles look stronger than they are. Using a system that asks candidates to describe themselves instead of objectively measuring their behavioural traits often means that companies are basing decisions on inaccurate information, and in turn hiring the best ‘self-promoters’ instead of the best ‘doers’. Fortunately, by looking into methods to reduce bias and variance, we can begin to make more accurate and fair choices.

Bias vs Variance matrix:

Bias vs Variance matrix

Look at the graph above and imagine that top-potential candidates are the bullseye. A recruiting system’s job is to hit this bullseye as accurately and as often as possible. The higher the bias, the further the blue dots (decisions) drift in one direction away from the bullseye. High variance means that you’re getting some of the right people, but not consistently: your decisions are scattered around the area you’re trying to hit because of an inconsistent approach.
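The four quadrants of the matrix can be made concrete with a toy simulation (my own illustration, not any vendor’s method). Each screening decision is modelled as a one-dimensional ‘dart throw’, where `bias` is a systematic offset away from the bullseye and `noise` is the spread caused by inconsistent judgement; the numbers chosen for each scenario are arbitrary:

```python
import random
import statistics

random.seed(0)

def simulate_screens(bias, noise, n=10_000):
    """Simulate n screening decisions as 1-D 'dart throws'.

    bias  : systematic offset away from the bullseye (0 = top candidates)
    noise : standard deviation of random, inconsistent judgement
    """
    throws = [bias + random.gauss(0, noise) for _ in range(n)]
    mean_error = statistics.mean(throws)  # systematic miss (bias)
    spread = statistics.stdev(throws)     # inconsistency (variance)
    return mean_error, spread

# Arbitrary illustrative parameters for the four quadrants
for label, bias, noise in [
    ("CV screen (high bias, high variance)", 2.0, 2.0),
    ("Coin toss (low bias, high variance)", 0.0, 2.0),
    ("Uncorrected algorithm (high bias, low variance)", 2.0, 0.2),
    ("Audited algorithm (low bias, low variance)", 0.0, 0.2),
]:
    mean_error, spread = simulate_screens(bias, noise)
    print(f"{label}: mean error ≈ {mean_error:.2f}, spread ≈ {spread:.2f}")
```

Only the last scenario lands both near zero mean error and with a tight spread — the lower-left target in the matrix.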

1. The CV: High bias, high variance

High bias, high variance

Using a CV to hire candidates is like looking at the lower-right target: outcomes are biased away from the top performers by mental shortcuts, and they vary because multiple recruiters, and external factors like the time of day, are involved. Generally, you cannot accurately describe why a CV was accepted or rejected.

2. Coin toss: Low bias, high variance

Low bias, high variance

Tossing a coin removes the human error from a decision, but unfortunately has very high variance. You’re not going to accurately find top-performers as you don’t consider any relevant information, but you will get lucky on the odd occasion.

Related: Is a coin toss more predictive than a CV?

3. Screening algorithm: High bias, low variance
E.g.: a traditional psychometric / AI assessment tool

High bias, low variance

Using algorithms is a great first step to making decisions more predictable, but if the algorithm does not test for and correct bias, it will lead to predictably bad decisions. Bias in this sense can come from:

  1. Low-quality training data. These tests are often questionnaires that rely on the person presenting an honest and accurate assessment of themselves, which is both difficult to do and opens the door to conscious manipulation of results. The volume of data produced is also extremely low compared with direct measures such as games.
  2. Humans deciding on the success criteria of what makes a strong candidate (such as traditional competency frameworks), which bakes human bias into the algorithm, replicating this flawed decision process and perpetuating bias.
  3. Not correcting the model for demographic information such as race or gender, which leads to differing pass rates for different gender and ethnic groups. This has worrying implications for legal defensibility as well as for diversity and inclusion initiatives.
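The third point — differing pass rates across groups — can be checked with the ‘four-fifths’ (80%) guideline commonly used in US employment law. The sketch below is a minimal illustration with made-up group labels and numbers, not the actual AuditAI code:

```python
from collections import Counter

def pass_rates(decisions):
    """decisions: list of (group, passed) tuples -> pass rate per group."""
    totals, passes = Counter(), Counter()
    for group, passed in decisions:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_ratio(decisions):
    """Ratio of lowest to highest group pass rate.

    Under the four-fifths guideline, a ratio below 0.8 flags
    potential adverse impact against the lower-scoring group.
    """
    rates = pass_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: group A passes 60/100, group B passes 30/100
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 30 + [("B", False)] * 70
ratio = adverse_impact_ratio(decisions)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50 -> flagged
```

An uncorrected model can easily produce a ratio like this even when no demographic feature is an explicit input, because other features act as proxies.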

4. Screening algorithm with bias removal: Low bias, low variance
E.g.: pymetrics, an AI-powered assessment tool with AuditAI.

Low bias, low variance

An algorithm that takes active steps to reduce human bias, collects good-quality data, and has been audited for demographic unfairness against race or gender will lead to more predictable and accurate outcomes. The ethnicity of candidates progressed will be representative of the applicant pool, meaning that candidates have a fair chance of reaching later interview stages regardless of their background. pymetrics is very proud to be the first company to open-source its code for detecting unfairness in the output of machine learning models and other assessments: AuditAI.

Introducing the smart algorithm

At the end of the day, all hiring processes boil down to the act of taking many candidates and finding the few who should be hired for the role. It should be clear by now that human intervention at the first stage brings many difficulties, not only in terms of bad choices, but also in terms of legal defensibility. If you cannot accurately describe why a candidate was rejected, how can you prove that your recruitment process is fair and not discriminatory against certain cohorts of people?

Great assessment tools satisfy the following 5 characteristics:

  1. Highly validated scientific framework behind the tests
  2. Collecting high quality, job relevant, objective data
  3. Multi-measure assessment, e.g. personality + cognitive test
  4. Blind auditions: personal information has no impact on result
  5. Bias testing and correcting

Leading methods of screening base their decisions on objective data. In pymetrics’ case it is gameplay data from neuroscience-based games, which extract 77 cognitive, social, and emotional behavioural traits that correlate highly with job performance across a wide range of jobs. What determines a great candidate is based on an analysis of the company’s top performers within that role: which traits this group shares, and how they differ from the general population. This makes the assessment highly predictive and relevant to the role. Lastly, and most importantly, the algorithms are tested for bias and then corrected to ensure equal opportunities for every candidate regardless of their background. This has the added benefit of increasing the model’s predictive power. Systems like this are a huge improvement because they are fully explainable: we can check a candidate’s traits against those of the top performers and say why they did or did not get passed through.
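The core idea — comparing a candidate’s trait profile to that of a role’s top performers — can be sketched as a nearest-centroid comparison. This is my own toy illustration with invented trait names and numbers, not pymetrics’ actual model, which is far richer than a simple distance:

```python
import math

def centroid(vectors):
    """Average trait profile of a group (e.g. a role's top performers)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two trait profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical trait scores (e.g. risk tolerance, attention, planning) on a 0-1 scale
top_sales_performers = [
    [0.8, 0.6, 0.4],
    [0.9, 0.5, 0.5],
    [0.7, 0.7, 0.3],
]
profile = centroid(top_sales_performers)

candidate = [0.85, 0.55, 0.45]
score = distance(candidate, profile)
print(f"distance from top-performer profile: {score:.3f}")
```

The explainability claim follows from this structure: because the decision is a function of named trait differences, you can report exactly which traits pulled a candidate towards or away from the role profile.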

Effectiveness of tools predicting job-performance. Source: Harvard Business Review

One of the many benefits of this approach is that this form of assessment can turn a rejection into an opportunity for candidates. For clients, multiple models are built across different functions in the organisation. Someone who applies for a sales position but (based on their gameplay) doesn’t ‘fit’ the role might be rerouted to the marketing or operations roles. Furthermore, if there are no jobs that they match within the company, they can explore better suited jobs in other organisations that are also using pymetrics. We call this the common application.

Results

The best measure of a recruiting system is predictive validity testing, which asks whether a candidate, once hired, would be hired again. Early studies show that using pymetrics as a first filter has reduced first-year attrition rates by 30–60%, increased sales by 33%, and increased diversity by 100% across gender, ethnicity, and socio-economic background. Among the other advantages are reduced time-to-hire, a better candidate experience, and increased hiring yield, because better-suited candidates reach the final interview stages.

AI is often seen as a scary topic, especially when it comes to making hiring decisions. What we need to recognise is that we humans are highly irrational and remarkably poor decision-makers — particularly at the early stages of the recruitment process. Algorithms have the potential to make this far more efficient, effective, and engaging for the candidate. We can start moving towards a fairer hiring market where people are judged on their potential instead of their pedigree. Rejections are no longer a door in the face but instead offer unbounded opportunity. It’s time we recognised our own faults and realised how we can fix them through the introduction of responsibly designed technology.

Want to hear more about pymetrics? Check out the website here: Pymetrics.com

Disclaimer: No-one is perfect, and I realise the irony that this article is biased towards pymetrics! I’ll let you draw your own conclusions by assessing the different methodologies of other recruiting platforms. It’s just a shame that AI cannot provide an impartial analysis!
