Automated Resume Screening: Robots Messing With Our Fate

As groundbreaking advancements in technology continue to minimize human effort for everyday tasks, it is not surprising that businesses are trying to automate their workflows as much as possible to cut costs and boost efficiency. However, it is important to consider that automation might not be the best idea for certain purposes: for example, the hiring process. Do you think insensitive robots should be allowed to decide our fates? I believe that allowing robots to decide our fate during the hiring process is unfair to applicants and leads to many ethical issues with dire consequences.

Firstly, it is essential to understand what Automated Resume Screening (ARS) actually is. ARS refers to the use of machine learning algorithms and artificial intelligence (AI) to parse and extract information from applicant resumes. This technology can save companies and recruiters significant time and other valuable resources while finding the most suitable hires from large pools of candidates. Different algorithms identify skills and experience relevant to the job by matching keywords from the job description and requirements against the applicant's resume. Variations of these algorithms can be used according to the company's hiring criteria: some companies use algorithms that simply match words between the job post and the resume, while others place weights on certain past experiences and backgrounds. But as with all useful things in the world, automated resume screening has a catch: it can introduce bias against underprivileged applicants with less experience and fewer relevant job backgrounds, resulting in gender bias, racial bias, and circumstantial bias. Let's talk about what these biases actually are and how they are caused.
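To make this concrete, here is a minimal sketch, in Python, of the kind of keyword matching described above. The job posting, resume text, and weighting scheme are all invented for illustration; real applicant tracking systems parse resumes far more elaborately.

```python
# A minimal sketch of keyword-based resume screening. The scoring rule is
# purely illustrative, not any vendor's actual algorithm.
import re

def tokenize(text):
    """Lowercase a document and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def keyword_score(resume, job_description, weights=None):
    """Score a resume by how many job-description keywords it contains.

    `weights` optionally boosts terms the employer cares about most,
    mirroring the weighted variants described above.
    """
    weights = weights or {}
    matched = tokenize(job_description) & tokenize(resume)
    return sum(weights.get(term, 1.0) for term in matched)

job = "Seeking engineer with Python, SQL, and leadership experience"
resume = "Built Python data pipelines; led a team; wrote SQL reports"
print(keyword_score(resume, job, weights={"python": 2.0}))  # 3.0
```

Even this toy scorer shows why phrasing matters more than substance: a resume that happens to reuse the job posting's exact words scores higher, regardless of the applicant's actual ability.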

Figure: Steps in the hiring process using automated resume screening.

We have seen machine learning and artificial intelligence algorithms grow remarkably powerful and efficient over the years, but they are still not a hundred percent accurate or perfect in making decisions. These algorithms are stochastic, not deterministic¹. They aim to learn and draw out patterns from past successes and failures. For example, the algorithms learn patterns in the attributes of successful applicants and treat them as positives when evaluating new applicants; at the same time, they learn the attributes of unsuccessful applicants and treat them as negatives. This results in a bias toward certain qualities and skills in deciding who gets hired. This may seem ideal for companies at first glance: they only get employees similar to the past employees who proved beneficial. However, this is where the concept of data feminism becomes important. Data feminism encourages data scientists to consider essential ethical concepts like power, inequality, gender, and race in the development and use of their work². Sometimes these algorithms pick up correct but misleading patterns (correlation is not causation³), which creates ethical problems for many groups, including genders, minorities, and applicants in difficult circumstances. The same problems arise if the past applicant data for a certain company or job posting contains undesired skills and qualifications: the algorithms will still dutifully learn patterns and trends that select for those undesirable skills and qualities. This is bad for both companies (they get inappropriate hires) and applicants (less qualified applicants get chosen over better ones).
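As a toy illustration of this pattern-learning loop, consider the sketch below. The past applicants and outcomes are entirely invented; the point is only that the learned weights reflect whatever happened to co-occur with past hiring decisions, job-relevant or not.

```python
# A self-contained sketch of learning keyword weights from past outcomes.
# The "training data" is invented for illustration.
from collections import Counter

past_applicants = [
    ("java backend rugby club", True),           # hired
    ("python java hackathon winner", True),      # hired
    ("java systems rugby captain", True),        # hired
    ("python data volunteer tutoring", False),   # rejected
    ("sql reporting volunteer tutoring", False), # rejected
]

# Learn a weight per word: +1 for each past hire it appeared in,
# -1 for each past rejection. Correlation, not job-relevant causation.
weights = Counter()
for resume, hired in past_applicants:
    for word in set(resume.split()):
        weights[word] += 1 if hired else -1

def score(resume):
    """Score a new resume with the weights learned from past outcomes."""
    return sum(weights[w] for w in set(resume.split()))

print(score("python sql volunteer tutoring"))  # -5: penalized for "volunteer"
print(score("java rugby enthusiast"))          # +5: rewarded for "rugby"
```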

Gender bias refers to the tendency to prefer one gender over another. It is a form of unconscious bias, or implicit bias, which occurs when an individual unconsciously attributes certain attitudes and stereotypes to another person or group of people⁴. In the context of job applications, gender bias means discriminating against applicants based on their gender rather than their skills and abilities. Learning from past applicants and evaluating new applicants on that basis can lead to gender bias: if a company or specific job role has previously, or stereotypically, been dominated by one gender, the algorithms may learn that as a trend and favor future applicants of that gender. A real-world example is Amazon's experience with its AI recruiting tool. As a male-dominated tech company, at a time when tech as a whole was dominated by men, Amazon found that the AI recruiting tool it had been developing for the past four years had learned to prefer male applicants over female applicants²! It flagged keywords like "women's club" and "president, women's chess club" as negatives, where they would otherwise be counted as positives for indicating leadership experience. At a time when the marriage rate is declining and more single women are becoming homeowners⁵, such biases, intentional or not, can have severe impacts on the financial condition of many households.
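The failure mode attributed to Amazon's tool can be reproduced in miniature. The sketch below is hypothetical, assuming scikit-learn is available; the resumes and outcomes are invented, and this is not Amazon's actual system. When the historical hires skew away from resumes mentioning "women's", an audit of the learned weights surfaces the resulting penalty on that token.

```python
# Hypothetical illustration: a text model trained on skewed historical
# hires assigns negative weight to the token "women's" itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club president software projects",    # hired
    "robotics team software internship",         # hired
    "software projects hackathon",               # hired
    "women's chess club president software",     # rejected
    "women's coding society software projects",  # rejected
]
outcomes = [1, 1, 1, 0, 0]  # 1 = hired, 0 = rejected

vec = CountVectorizer(token_pattern=r"[a-z']+")
model = LogisticRegression().fit(vec.fit_transform(resumes), outcomes)

# Auditing the learned weights surfaces the problem: "women's" carries a
# strongly negative coefficient even though the phrases it appears in
# signal leadership experience.
for term, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{term:>12}  {weight:+.2f}")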

Ethnic bias can be defined as engaging in discriminatory behavior toward, holding negative attitudes toward, or otherwise reacting less favorably to people based on their ethnicity⁶. It is important to note that any attempt at race classification, and therefore at race definition, can at best be only an approximation, for no scientific system of race classification has yet been devised; in the final analysis, such classification attempts to define the indefinable⁷. In the context of job applications, these ideas imply preferring or rejecting applicants based on their race or ethnic background rather than their skills and qualifications, and doing so on approximate, indeterminate criteria. None of this seems fair or promising. Companies may not have employees from diverse racial and ethnic backgrounds: throughout the world, countries, cities, and neighborhoods tend to be populated by people of similar ethnicity, with small minority populations. Using machine learning algorithms to analyze past applicants then results in a bias against racial minorities, as past employees may not share their ethnicities. This in turn causes consequential problems for minority households, which may suffer from unemployment more than others while also being the ones most in need. Therefore, using automated resume screening can result in ethnic bias. And while the screening process can deprive racial minorities of opportunities overall, it can also steer them toward only certain specific roles, which has its own ethical implications. Many companies and applicant tracking systems partner with social media companies like Facebook⁸, and Facebook uses the similarity between social media profiles to recommend job postings that similar profiles have engaged with. A team led by Muhammad Ali and Piotr Sapiezynski at Northeastern University found that job postings for preschool teachers and secretaries, for example, were shown to a higher fraction of women, while postings for janitors and taxi drivers were shown to a higher proportion of minorities⁹. This dynamic widens the salary gap between different racial and ethnic groups, which in turn entrenches better lifestyles for ethnic majorities and worse ones for ethnic minorities.

Circumstantial bias refers to discriminatory behavior toward a group based on their circumstances. In the context of job applications, circumstantial bias can be thought of as bias against applicants who lacked opportunities to gain experience, even when they have the same intellectual potential as other successful applicants. While filtering out applicants for lacking qualifications and skills, the automated resume screening process does not evaluate an applicant's learning potential. An applicant may simply not have had the same opportunities to gain experience or learn skills as other successful applicants, and so they are screened out early in the recruitment process. This has many ethical implications. Applicants from households in poor financial condition are affected by unemployment the most severely and need a job most urgently, yet the screening process creates a feedback loop in which unemployment hits the already unemployed hardest. Moreover, this also affects racial minorities who did not get the chance to gain the same skills and qualifications as other applicants due to a lack of access and resources rather than a lack of aptitude. In 2017, there were only four African American CEOs (all men) of Fortune 500 companies, accounting for less than 1% of the list¹⁰. This results in racial minorities being filtered out early due to their circumstances rather than their aptitude.

Beyond these biases, another problem with using ARS in the hiring process is that applicants can fast-track their way through resume screening by falsifying their resumes, inserting keywords for skills they do not actually have. Applicants can learn what skills and qualifications a company is looking for from other successful hires and falsely add that information to their resumes to pass the screening. Tailoring resumes merely to pass the screening can result in less qualified applicants overtaking more qualified ones in the recruitment process. It also favors applicants who value ethics less than others, and applicants who have contacts among company employees or greater social outreach. Both outcomes lead to unfair hiring and widen social and wealth disparities across society.
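A short sketch shows how easily a naive matcher is gamed. Both resumes below are invented: one describes genuine experience in plain prose, the other simply copies the posting's keywords for skills the applicant never practiced.

```python
# Why naive keyword matching is easy to game: a stuffed keyword list
# outscores a genuinely experienced applicant.
import re

def tokenize(text):
    return set(re.findall(r"[a-z+#]+", text.lower()))

def keyword_score(resume, job_description):
    return len(tokenize(job_description) & tokenize(resume))

job = "Looking for Python, SQL, Docker, Kubernetes, AWS experience"

honest = "Three years of production Python and SQL development"
stuffed = "Skills: Python SQL Docker Kubernetes AWS"  # skills never practiced

print(keyword_score(honest, job))   # 2
print(keyword_score(stuffed, job))  # 5: the fabricated resume wins
```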

As an international student majoring in Computer Science at the University of Michigan, I can personally share my experience of robots messing with my fate. Since computer science was not offered at my high school, in my freshman and sophomore years I did not have the relevant skills or cool tech side projects that my peers who had taken AP Computer Science and joined high school project teams could show off. Although this disparity said nothing about my potential as a computer scientist, the disparity in the contents of our resumes was enough to rule me out of the hiring process before I even got a chance to interview with recruiters! The robots failed to consider my circumstances and aptitude and rejected me from those job roles. To add to this, several of my peers forged their resumes by adding skills and projects they never learned or completed, and were able to secure interviews (and sometimes even jobs). As a student who values ethics, I could never bring myself to take such steps, and so my less scrupulous peers were still able to overtake me in the job hunt. This clearly reflects how easily and unreasonably insensitive robots can mess with your fate.

Keeping these consequences in mind, it is important for companies to constantly revise and reconsider the infrastructure and results of their hiring processes. Companies should aim to strike a balance between human and automated review of applications, so that applicants' backgrounds, and not just their skills and work experience, are weighed for a more holistic review. They should also continually evaluate the diversity and performance of their hires to detect biases or undesirable trends early and have the chance to fix them. While perfection and absolute fairness are, as with any technology, very hard to achieve, companies should do their utmost to minimize shortcomings and biases and maintain a fair hiring process for applicants of all backgrounds and needs. If all companies make sincere efforts in this direction, the hiring process can be taken out of the sole hands of programmed robots, helping avoid these ethical issues and their alarming consequences for struggling, unemployed members of society.
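One concrete form such ongoing evaluation could take is a selection-rate audit. The sketch below applies the "four-fifths rule" that US regulators use as a rough screen for disparate impact: a group whose selection rate falls below 80% of the highest group's rate warrants review. The applicant counts and group names are invented for illustration.

```python
# A sketch of a four-fifths (80%) rule check on screening outcomes.
def selection_rates(outcomes_by_group):
    """Map each group to its share of applicants passing the screen."""
    return {group: passed / total
            for group, (passed, total) in outcomes_by_group.items()}

def four_fifths_check(outcomes_by_group):
    """Return each group's rate ratio vs. the best group, and a pass flag."""
    rates = selection_rates(outcomes_by_group)
    best = max(rates.values())
    return {group: (rate / best, rate / best >= 0.8)
            for group, rate in rates.items()}

# (passed_screening, total_applicants) per group -- invented numbers
screening_results = {"group_a": (120, 200), "group_b": (30, 100)}
print(four_fifths_check(screening_results))
# group_a: ratio 1.0 (ok); group_b: ratio 0.5 (flagged for review)
```

The four-fifths rule is only a coarse screen, not proof of fairness, but running checks like this on every screening stage would at least surface the kinds of disparities described above before they compound.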

References:

[1] "The Limitations of Machine Learning." Towards Data Science. https://towardsdatascience.com/the-limitations-of-machine-learning-a00e0c3040c6

[2] D'Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press. Chapter 1.

[3] "Correlation Is Not Causation." The Guardian. https://www.theguardian.com/science/blog/2012/jan/06/correlation-causation

[4] "What Is Gender Bias in the Workplace?" Built In. https://builtin.com/diversity-inclusion/gender-bias-in-the-workplace

[5] "More Women Have Become Homeowners and Heads of Household. Could the Pandemic Undo That Progress?" Urban Institute. https://www.urban.org/urban-wire/more-women-have-become-homeowners-and-heads-household-could-pandemic-undo-progress

[6] "Ethnic Bias." APA Dictionary of Psychology. American Psychological Association. https://dictionary.apa.org/ethnic-bias

[7] Bowker, G., & Star, S. L. (1999). Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.

[8] "Facebook Page Admins and Jobs Distribution / ATS Partners." Facebook. https://www.facebook.com/business/help/286846482350955

[9] Hao, Karen. "Facebook's Ad-Serving Algorithm Discriminates by Gender and Race." MIT Technology Review, 5 Apr. 2019. https://www.technologyreview.com/2019/04/05/1175/facebook-algorithm-discriminates-ai-bias

[10] Turczynski, Bart. "2021 HR Statistics: Job Search, Hiring, Recruiting & Interviews." Zety. https://zety.com/blog/hr-statistics#resume-statistics
