Ableism in AI: Facial Recognition Technologies in Recruitment Processes, the Lack of Governance, and the Failure to Account for the Multiplicity of Disability

Humans For AI
Feb 18, 2023


By: Tess Buckley, AI Ethics Senior Analyst at EthicsGrade

Written for the HFAI blog in tribute to the work of AboutFace.ca, and adapted from an academic case study completed in 2021 for the purpose of explainability.

Artificial Intelligence (AI) is shaping the way business is conducted, and while helpful, it can also be harmful. Researchers have been working to address ethical issues such as racial and gender bias, but disabled individuals remain underrepresented in this work.

Let’s distinguish fairness to individuals with disabilities from issues concerning other protected attributes such as race and gender. Disability is fluid and extensive, spanning a vast range of physical and mental health conditions that can fluctuate throughout an individual’s life. Each diagnosis has its own history and idiosyncrasies, making any single condition uncommon relative to the general population. Disability information is also sensitive, and individuals are often understandably reluctant to reveal their diagnosis. The category of ‘disability’ resists neat classification, and thus complicates calls to simply include disabled people in datasets, which are constructed around rigid models of categorization.

The disabled community is negatively impacted by the scarcity of data about their conditions. This issue of coverage is a genuine concern when applying AI systems to people with disabilities. When a given technology produces biased outcomes, a common mitigation is to retrain the machine learning model on more diverse datasets. In the case of disability, however, simply expanding a dataset’s parameters to include new categories, in an attempt to account for difference, will not ensure this group is equitably represented.
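To see why, consider a deliberately simplified sketch. The categories, names, and encoding below are hypothetical illustrations rather than any real system’s schema; the point is that a fixed categorical encoding assumes each person fits exactly one stable label, while disability can be multiple, partial, and fluctuating.

```python
# A deliberately rigid, hypothetical schema: each candidate gets exactly
# one static disability label, encoded as a one-hot vector.
from enum import Enum

class DisabilityCategory(Enum):  # hypothetical categories for illustration
    NONE = 0
    MOBILITY = 1
    VISUAL = 2
    FACIAL_DIFFERENCE = 3

def one_hot(category: DisabilityCategory) -> list[int]:
    """Encode a single fixed category; the model sees only this vector."""
    vec = [0] * len(DisabilityCategory)
    vec[category.value] = 1
    return vec

# A person with several co-occurring conditions, or a condition that
# fluctuates over time, cannot be represented faithfully: the encoding
# forces one static label per person, no matter how many categories we add.
print(one_hot(DisabilityCategory.FACIAL_DIFFERENCE))  # [0, 0, 0, 1]
```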

AI has found a novel application in the field of recruitment: HireVue’s Facial Recognition Technology (FRT) is increasingly used for candidate screening, particularly with the rise of video interviews during the pandemic in both the public and private sectors. HireVue’s platform has hosted over 19 million video interviews for more than 700 global customers. It is often used by high-volume employers to automate screening at the beginning of the hiring process; Unilever, Intel and JP Morgan are among the firms using AI to improve ‘efficiency’ in recruitment.

The AI attempts to predict how a candidate will perform by analyzing the interviewee’s gestures, facial expressions, posture, lean, voice tone and cadence. This process produces an ‘employability score,’ which employers then use to decide who advances in the process. Unilever’s average recruitment time was cut by 75% after implementation; hiring teams are enticed by FRTs because they reduce human involvement in the hiring process.
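HireVue has not published its model, so what such a score looks like under the hood is not public. The sketch below is a hypothetical illustration of the general approach described here: behavioural features are extracted from audio and video, then combined into a single number. Every feature name and weight is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    smile_frequency: float    # fraction of frames with a detected smile, 0-1
    eye_contact_ratio: float  # fraction of frames gazing at the camera, 0-1
    speech_rate_wpm: float    # words per minute from the audio track
    pitch_variance: float     # normalized voice-tone variation, 0-1

# Hypothetical weights, imagined as fitted to past 'successful' hires.
# This is the core problem: the model optimizes for resemblance to a
# historical norm, not for ability to do the job.
WEIGHTS = {
    "smile_frequency": 0.3,
    "eye_contact_ratio": 0.4,
    "speech_rate_wpm": 0.2,
    "pitch_variance": 0.1,
}

def employability_score(f: InterviewFeatures) -> float:
    """Weighted sum of behavioural features, scaled to 0-100."""
    raw = (
        WEIGHTS["smile_frequency"] * f.smile_frequency
        + WEIGHTS["eye_contact_ratio"] * f.eye_contact_ratio
        + WEIGHTS["speech_rate_wpm"] * min(f.speech_rate_wpm / 160, 1.0)
        + WEIGHTS["pitch_variance"] * f.pitch_variance
    )
    return round(100 * raw, 1)

# A candidate with facial paralysis may register smile_frequency near 0
# regardless of actual warmth or competence, and the score drops by design.
print(employability_score(InterviewFeatures(0.05, 0.8, 150, 0.5)))
```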

There are benefits, but this expedited process is not worth the ethical implications that arise from its use. The use of FRT is deeply concerning, as these systems are based on discriminatory and discredited science that is fundamentally inconsistent with human rights. Consequently, the potential convenience of FRTs must be weighed against the resulting concerns of accuracy, privacy, and ableism.

HireVue’s FRT system analyzes facial movements and assumes they are linked to the candidate’s emotions. The system thus makes psychological inferences about an individual’s ability to succeed in a role based on their facial data. When FRT is used in the interview process, an individual with a facial difference is likely to receive lower scores, which risks employers violating obligations imposed under human rights and equality legislation.
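The assumption at work here can be made concrete. Emotion-recognition systems typically build on Ekman-style mappings from facial action units (AUs) to discrete emotions. The mapping and matching rule below are a minimal sketch, not any vendor’s actual model; only the AU names follow the public Facial Action Coding System.

```python
# Ekman-style mapping from facial action unit (AU) patterns to emotions.
# The AU names follow the public Facial Action Coding System; the mapping
# and matching rule are illustrative, not any vendor's actual model.
AU_TO_EMOTION = {
    frozenset({"AU6", "AU12"}): "happiness",       # cheek raiser + lip corner puller
    frozenset({"AU1", "AU4", "AU15"}): "sadness",  # brow raiser + brow lowerer + lip depressor
    frozenset({"AU4", "AU5", "AU7"}): "anger",     # brow lowerer + lid raiser + lid tightener
}

def infer_emotion(detected_aus: set[str]) -> str:
    """Label an emotion only when a full AU pattern is detected."""
    for pattern, emotion in AU_TO_EMOTION.items():
        if pattern <= detected_aus:
            return emotion
    return "neutral"

# A candidate with facial palsy may never trigger AU12, so genuine
# enthusiasm is read as 'neutral'. The inference fails, not the person.
print(infer_emotion({"AU6", "AU12"}))  # happiness
print(infer_emotion({"AU6"}))          # neutral
```

Whether this kind of inference is accurate at all is exactly what the ‘discredited science’ critique disputes: the same muscle movements do not reliably signal the same inner states across people, contexts, or facial anatomies.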

Integrating disability into the AI ethics conversation helps illuminate the tension between AI systems’ reliance on data as a primary means of representing the world and their inability to capture the fluidity of human identity and lived experience. Without diverse training data, an AI system cannot learn the characteristics of individuals with facial differences who went on to succeed in employment.

Action steps for those in industry and tech development

Moving forward, companies that use AI during the hiring process should be required to release detailed third-party bias audit reports. Furthermore, the Equal Employment Opportunity Commission (EEOC) should review these systems and issue guidance on whether they violate the Americans with Disabilities Act (ADA). When developing and deploying FRT, employers need to consider the relevant risks, desired outcomes, and the necessity of this intrusive method. Employment courts and national data protection authorities are likely to punish excessive, inappropriate or unnecessary applications of FRT.

The more general challenge we must put to the AI community is how to handle outliers. Machines won’t solve the problems we ourselves cannot, but they can learn norms and optimize for them. ML judges individuals by mathematical comparison to learned data points, even when the model has never encountered someone like them; this is a fundamental limitation in the fair treatment of individuals with facial differences and disabilities.
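The outlier problem can be shown in a few lines. In this sketch, which uses entirely synthetic data, a model scores candidates by their distance to the examples it was trained on; anyone far from the learned cluster is penalized by construction, regardless of competence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: feature vectors of past hires, clustered near a norm.
past_hires = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

def similarity_score(candidate: np.ndarray) -> float:
    """Higher when the candidate resembles the learned norm."""
    distances = np.linalg.norm(past_hires - candidate, axis=1)
    return float(1.0 / (1.0 + distances.mean()))

typical = rng.normal(loc=0.0, scale=1.0, size=8)  # resembles the training data
outlier = rng.normal(loc=4.0, scale=1.0, size=8)  # e.g. an unseen facial difference

# The outlier scores lower by construction, regardless of ability.
print(similarity_score(typical))  # higher
print(similarity_score(outlier))  # lower
```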

As industries continue to engage with FRT, we must not deny that we are deeply shaped by technology, which calls for the same democratic, citizen-based input granted to other societal issues. The application of FRT in recruitment processes does not properly represent the talents of the disabled community, making it ableist. Mitigating bias in AI requires diverse training data, but no dataset can account for the multiplicity of disability. Facial recognition technology cannot adequately be trained to account for the fluidity of difference in the human condition, specifically facial differences: there simply is not a consistent way to be different.

Learn more

Creed, Stephanie. “Facial Recognition Technology in Employment: What You Need to Know.” Bird & Bird, Nov. 2020, https://www.twobirds.com/en/news/articles/2020/global/facial-recognition-technology-in-employment.

Davis, Nicola. “Scientists Create Online Games to Show Risks of AI Emotion Recognition.” The Guardian, Guardian News and Media, 4 Apr. 2021, https://www.theguardian.com/technology/2021/apr/04/online-games-ai-emotion-recognition-emojify.

Engler, Alex. “For Some Employment Algorithms, Disability Discrimination by Default.” Brookings, 31 Oct. 2019, https://www.brookings.edu/blog/techtank/2019/10/31/for-some-employment-algorithms-disability-discrimination-by-default/.

Harwell, Drew. “A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job.” The Washington Post, WP Company, 6 Nov. 2019, https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job/.

Hao, Karen. “Can You Make an AI That Isn’t Ableist?” MIT Technology Review, MIT Technology Review, 2 Apr. 2020, https://www.technologyreview.com/2018/11/28/1797/can-you-make-an-ai-that-isnt-ableist/.

“HireVue Hiring Platform: Video Interviews, Assessment, Scheduling, AI, Chatbot.” Hirevue.com, 2021, https://www.hirevue.com/.

Maurer, Roy. “HireVue Discontinues Facial Analysis Screening.” SHRM, 3 Feb. 2021, https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/hirevue-discontinues-facial-analysis-screening.aspx.

Obando-Chacon, Gabriel, et al. “Facial Recognition & Recruitment.” Institute for Internet and the Just Society, 27 Aug. 2021, https://www.internetjustsociety.org/cosmonaut/facial-recognition-recruitment.

Whittaker, Meredith. “Disability, Bias, and AI.” AI Now Institute at NYU, Nov. 2019, https://ainowinstitute.org/disabilitybiasai-2019.pdf.
