Automated hiring systems, punitive damages, and illegal pre-employment medical examinations

Roland Behm
10 min read · Jun 7, 2022


[Image: a dark-haired woman's face viewed through a transparent surface, framed by tracking marks and overlaid with measures such as a voice-analysis readout.]

Job applicants with disabilities who are judged by automated hiring systems and found wanting deserve a better explanation than, “The computer said so.” Computers often say so for the wrong reasons, and it is the legal duty of employers to ensure they do not.

“Algorithmic tools should not stand as a barrier for people with disabilities seeking access to jobs,” stated Assistant Attorney General Kristen Clarke for the Justice Department’s Civil Rights Division.

Automated hiring systems build on the online applicant tracking systems (ATSs) created as hiring moved to a primarily Internet-based process in the late 1990s. Online postings of employment opportunities led to a significant increase in the number of job applicants. ATSs were developed to bring order and accountability to what had traditionally been a labor-intensive process and to lower hiring costs, as employers were unwilling to scale up headcount in their hiring departments. It wasn’t long before screening tools, such as online personality assessments, were added to the ATS.

The Americans with Disabilities Act (ADA) is technology agnostic: it contains a flat prohibition on any pre-employment medical examination, however that examination is delivered.

If an automated hiring system used by an employer is found to be an illegal pre-employment medical examination, all job applicants — not just those with disabilities — have claims against the employer. Some employers use their automated hiring systems to screen millions of applicants each year. Consequently, each of those applicants can file a claim alleging that the employer’s system is an illegal pre-employment medical examination.

Congress enacted the ADA to “provide a clear and comprehensive national mandate for the elimination of discrimination against individuals with disabilities.” 42 U.S.C. § 12101(b)(1). Congress recognized that “the Nation’s proper goals regarding individuals with disabilities are to assure equality of opportunity, full participation, independent living, and economic self-sufficiency for such individuals.” 42 U.S.C. § 12101(a)(8).

And yet, thirty years after the enactment of the ADA, only 19% of people with disabilities were employed, as compared to an employment rate of 66% among people without disabilities, and nearly a third of the 19% had only part-time employment.

It’s not that persons with disabilities do not want to work. They do. The shamefully low employment and shamefully high underemployment rates are due in part to employers’ use of automated hiring systems that, according to the Department of Justice Civil Rights Division, are “essentially turbocharging the way in which employers can discriminate against people who may otherwise be fully qualified for the positions that they’re seeking.”

AI, algorithms, and the ADA

The term “artificial intelligence” (AI) refers to systems built on machine learning (ML) algorithms. These algorithms analyze large volumes of training data to identify correlations, patterns, and other statistical relationships, which are used to develop a model (e.g., a job applicant screening system) that makes predictions or recommendations (e.g., interview/no interview) based on future data inputs (e.g., the visual, verbal, and written responses of job applicants).

ML algorithms are programs (math and logic) that adjust their performance as they are exposed to more data. The “learning” part of ML means that those programs change how they process data over time, frequently in ways that are not transparent even to their programmers.
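To make this concrete, here is a minimal, purely illustrative sketch of the training-and-prediction loop described above, using scikit-learn’s `LogisticRegression`. The trait features, data values, and labels are invented for illustration; they are not any vendor’s actual model.

```python
# Purely illustrative: a toy "interview / no interview" screener trained on
# past hiring outcomes. Feature names, values, and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are past applicants, columns are
# machine-scored traits (e.g., speech rate, smile frequency, word variety).
X_train = np.array([
    [0.9, 0.8, 0.7],
    [0.2, 0.3, 0.4],
    [0.8, 0.9, 0.6],
    [0.1, 0.2, 0.3],
])
y_train = np.array([1, 0, 1, 0])  # 1 = previously rated a "high performer"

model = LogisticRegression().fit(X_train, y_train)

# The model now encodes whatever patterns separated past "high performers,"
# including any patterns that happen to track disability.
new_applicant = np.array([[0.3, 0.9, 0.8]])
print(model.predict(new_applicant))        # 1 = interview, 0 = no interview
print(model.predict_proba(new_applicant))  # the "score" behind the decision
```

Note that nothing in the code asks about health; the screening effect comes entirely from what the training data happened to correlate with past success.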

One of “[t]he most common ways in which an employer’s use of algorithmic decision-making tools could violate the ADA” is when it “adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.” “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” EEOC (May 12, 2022).

An employer cannot avoid liability by claiming the automated hiring system was operated by a third party. Under the ADA, the employer remains liable. “An employer . . . may not do through a contractual or other relationship what it is prohibited from doing directly.” 29 C.F.R. §1630.6.

As noted, if an employer’s automated hiring system is determined to be an illegal medical examination, every job applicant processed by that system has a potential claim against the employer. And an applicant does not need to have a disability to have a claim.

Job applicants may seek punitive damages and damages for pain and suffering under the ADA, including for a violation of the prohibition on pre-employment medical examinations

The Civil Rights Act of 1991 caps the total amount of compensatory and punitive damages available to a job applicant depending on the number of employees at the company being sued. The cap ranges from $50,000 per applicant (employers with 15 to 100 employees) to $300,000 per applicant (employers with more than 500 employees).
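For illustration, the statutory tiers (42 U.S.C. § 1981a(b)(3)) can be expressed as a simple lookup; the million-applicant figure in the last line is a hypothetical based on the screening volumes described above, not a damages estimate.

```python
# The 1991 Act's per-applicant caps (42 U.S.C. § 1981a(b)(3)) as a lookup.
def damages_cap(employee_count: int) -> int:
    """Cap on combined compensatory and punitive damages per applicant."""
    if employee_count > 500:
        return 300_000
    if employee_count > 200:
        return 200_000
    if employee_count > 100:
        return 100_000
    return 50_000  # the statute reaches employers with 15 or more employees

# Hypothetical exposure for a large employer screening a million applicants:
print(f"${damages_cap(10_000) * 1_000_000:,}")  # $300,000,000,000 ceiling
```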

Illegal pre-employment medical examinations

Title I of the ADA is unique among civil rights laws because it strictly prohibits all pre-job offer medical examinations (42 U.S.C. § 12112(d)(2)). Put aside any thought that an illegal pre-employment medical examination must be done in a clinical setting by persons in white coats.

The term “medical examination” for purposes of the ADA was first defined by the EEOC guidance in 1995, as being “a procedure or test that seeks information about an individual’s physical or mental impairments or health.”

EEOC guidance lists seven factors to be considered when determining if a job applicant screening tool like an automated hiring system offered by HireVue, Inc. is a medical examination:

1. whether the test is administered by a health care professional;
2. whether the test is interpreted by a health care professional;
3. whether the test is designed to reveal an impairment of physical or mental health;
4. whether the test is invasive;
5. whether the test measures an employee’s performance of a task or measures their physiological responses to performing the task;
6. whether the test normally is given in a medical setting; and
7. whether medical equipment is used.

Two Factors

According to EEOC guidance and judicial decisions, any one of the seven factors may be enough to support a finding that an automated hiring system is an illegal pre-employment medical examination.

Automated hiring systems implicate two of the seven factors listed in EEOC guidance: (1) whether the system is designed to reveal an impairment of physical or mental health, and (2) whether the system measures an employee’s performance of a task or measures their physiological responses to performing the task.
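As a sketch of how that legal test operates, the seven factors can be modeled as a checklist in which any single true factor may suffice. The field names and the example values below are illustrative only; this is not a legal tool.

```python
# Illustrative only, not a legal tool: the EEOC's seven-factor test as a
# checklist in which any single factor may make a tool a medical examination.
from dataclasses import dataclass, fields

@dataclass
class SevenFactorTest:
    administered_by_health_professional: bool   # factor 1
    interpreted_by_health_professional: bool    # factor 2
    designed_to_reveal_impairment: bool         # factor 3
    invasive: bool                              # factor 4
    measures_physiological_response: bool       # factor 5
    given_in_medical_setting: bool              # factor 6
    uses_medical_equipment: bool                # factor 7

    def is_medical_examination(self) -> bool:
        # Per EEOC guidance and case law, any one factor may be enough.
        return any(getattr(self, f.name) for f in fields(self))

# The two factors this article argues automated hiring systems implicate:
automated_system = SevenFactorTest(
    administered_by_health_professional=False,
    interpreted_by_health_professional=False,
    designed_to_reveal_impairment=True,
    invasive=False,
    measures_physiological_response=True,
    given_in_medical_setting=False,
    uses_medical_equipment=False,
)
print(automated_system.is_medical_examination())  # True
```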

The EEOC guidance provides that automated hiring systems are medical examinations if they provide evidence that would lead to identifying a mental disorder or impairment, including those listed in the most recent version of the Diagnostic and Statistical Manual of Mental Disorders (DSM).

Note the reach of the words “provide evidence that would lead to identifying a mental disorder or impairment.” The assessment does not need to diagnose, or even disclose, the presence of a mental disorder.

Intent Is Irrelevant

The Seventh Circuit Court of Appeals held in Karraker v. Rent-a-Center, Inc., that the assessment at issue was an illegal pre-employment medical examination. The court found that even if an employer did not use or intend to use the assessment to remove applicants with mental disorders from consideration, the use of the assessment likely had that effect, resulting in a violation of the ADA.

The Karraker court found that a bad score on the assessment at issue did not necessarily mean that a person had a mental disorder, but a person who has a mental disorder is likely to score badly on the assessment and lose out on the job opportunity. The Karraker decision is cited with approval by the Sixth Circuit in Kroll v. White Lake Ambulance Authority.

While employers using automated hiring systems may not intend to use them to eliminate job applicants with disabilities (e.g., those with communications disorders such as stuttering, personality disorders, neurodevelopmental disorders, Parkinson’s, and traumatic brain injury (TBI)), their intent is irrelevant. Job applicants with such disabilities are likely to be rejected for employment consideration by those automated hiring systems.

Designed to reveal impairments

The character of the training data used to develop algorithms has meaningful consequences for the lessons that data mining “learns” because discriminatory training data leads to discriminatory models. Training data “often reflect patterns of inequity that exist in the world, and yet the data-driven nature of AI systems often serves to obscure the technology’s limitations within a pervasive rhetoric of objectivity.” (See “The Case for Interpretive Techniques in Machine Learning,” in “Fake AI,” edited by Frederike Kaltheuner (Meatspace Press 2021)).

Shari Trewin, with IBM Accessibility Research, writes that the probabilistic and statistical nature of ML algorithms intrinsically place persons with disabilities at a disadvantage, as “outlier data [like that of persons with communications and speech disorders, mental illness, or neurodevelopmental disorders] is often treated as ‘noise’ and disregarded,” and a lack of individuals with a given kind of disability [in training data] prevents algorithms from finding patterns.
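A minimal sketch of the dynamic Trewin describes: a naive z-score filter that discards statistically rare data as “noise.” The pause-length values are hypothetical, and real pipelines are more elaborate, but the effect (atypical speakers silently vanishing from the training set) is the same in kind.

```python
# A naive z-score filter that treats statistically rare data as "noise."
# Hypothetical pause lengths (seconds) between words; the 2.8 s pause might
# come from a speaker who stutters or has aphasia.
import numpy as np

pauses = np.array([0.30, 0.40, 0.35, 0.30, 0.45, 0.40,
                   0.38, 0.32, 0.41, 0.36, 2.80])

z = np.abs((pauses - pauses.mean()) / pauses.std())
kept = pauses[z < 2.0]  # the "outlier" is silently discarded
print(kept)             # the atypical speaker vanishes from the training set
```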

Communications and Speech Disorders

According to the Department of Justice:

Even where an employer does not mean to discriminate, its use of a hiring technology may still lead to unlawful discrimination. For example, some hiring technologies try to predict who will be a good employee by comparing applicants to current successful employees. Because people with disabilities have historically been excluded from many jobs and may not be a part of the employer’s current staff, this may result in discrimination. Employers must carefully evaluate the information used to build their hiring technologies.

Persons with disabilities are at a severe disadvantage for getting hired if their facial affect, voice, tone, and writing do not manifest in the same way as those of the persons used to create the training data for the ML algorithms.

More than 1 in 10 adults in the U.S. have disabilities that affect their ability to speak and write, including those with communications disorders, a category of mental health disorders listed in the DSM-5-TR, the most recent version of the DSM.

There are four main types of communication disorders listed in the DSM-5-TR, including Childhood-Onset Fluency Disorder (COFD). Persons with COFD, like President Biden, Samuel L. Jackson, and Tiger Woods, experience a disruption in the natural flow of language, more commonly known as a stutter. COFD manifests as repetition or prolongation of speech.

Lisps are a common type of speech disorder that can persist into adult years. Persons who lisp, like Barbara Walters, Sean Connery, and Michael Phelps, have difficulty learning to make a specific speech sound, or a few specific speech sounds.

Aphasia is a language disorder that affects a person’s ability to express and understand written and spoken language. It can occur suddenly after a traumatic brain injury, as with Congresswoman Gabby Giffords’ gunshot wound, or develop slowly from a growing brain tumor or disease.

ML algorithms for automated hiring systems are designed to reveal job applicants with communications and speech disorders, like stuttering, lisps, and aphasia. The data gathered on these applicants by the automated systems is treated as outlier data, since such data is absent from the training data for the ML algorithms. Consequently, these applicants will be at a significant disadvantage in their attempt to obtain employment.

Mental Illness

According to HireVue, its automated hiring systems create a personality profile of a job applicant by comparing the candidate’s tone of voice, word clusters, and micro facial expressions with those of people who have previously been identified as high performers on the job. The HireVue systems use the Five-Factor Model (FFM) of personality to create their profiles of job applicants.

The FFM describes personality in terms of five broad factors:

1. Openness: inventive and curious vs. consistent and cautious;
2. Conscientiousness: efficient and organized vs. easy-going and careless;
3. Extraversion: outgoing and energetic vs. solitary and reserved;
4. Agreeableness: friendly and compassionate vs. cold and unkind; and
5. Neuroticism: sensitive and nervous vs. secure and confident.

The majority of personality disorders, including those listed in the DSM-5-TR, are characterized by higher levels of neuroticism and lower levels of extraversion. Consequently, HireVue’s automated hiring system is designed to reveal persons with disabilities.

Not every job applicant who scores higher on neuroticism and lower on extraversion has a personality disorder. But as the Karraker court reasoned, a person who does have a personality disorder will fall within the “higher neuroticism/lower extraversion” group and will be screened out from employment consideration.
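A hypothetical sketch of that logic in code: a screen keyed to “low neuroticism, high extraversion” never diagnoses anyone, yet applicants whose disorders manifest as high neuroticism and low extraversion land on the rejected side. The trait scores and threshold below are invented.

```python
# Invented trait scores and threshold, for illustration only.
def passes_personality_screen(profile: dict) -> bool:
    """Advance only 'low neuroticism, high extraversion' profiles."""
    return profile["neuroticism"] < 0.6 and profile["extraversion"] > 0.4

applicants = [
    {"name": "A", "neuroticism": 0.3, "extraversion": 0.7},  # advances
    {"name": "B", "neuroticism": 0.8, "extraversion": 0.2},  # screened out
]

# The screen never asks about (or diagnoses) any disorder, yet anyone whose
# personality disorder manifests as high neuroticism and low extraversion
# lands in the rejected group -- the effect condemned in Karraker.
for a in applicants:
    verdict = "interview" if passes_personality_screen(a) else "rejected"
    print(a["name"], verdict)
```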

Neurodevelopmental Disorders

In a study published in The Lancet journal EClinicalMedicine, IBM researchers trained artificial intelligence to pick up hints of changes in language ahead of the onset of neurological diseases using the same tools (ML and NLP) employed by HireVue in its automated hiring systems.

IBM researchers examined persons’ word usage with an AI program that looked for subtle differences in language. It identified one group of people who were more repetitive in their word usage at a time when all of the subjects were still cognitively normal. These subjects also made errors, such as misspelling words or capitalizing them inappropriately, and they used telegraphic language, meaning language that has a simple grammatical structure and omits subjects and words like “the,” “is,” and “are.” The members of that group turned out to be the people who later developed Alzheimer’s disease.
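A rough sketch of the kinds of linguistic signals described in the study: word repetitiveness and a shortage of function words such as “the,” “is,” and “are.” The feature definitions below are simplified stand-ins for illustration, not IBM’s actual method.

```python
# Simplified stand-ins for the linguistic signals described above; this is
# not IBM's method, just an illustration of the feature types.
from collections import Counter

FUNCTION_WORDS = {"the", "is", "are", "a", "an"}

def language_features(text: str) -> dict:
    words = text.lower().split()
    counts = Counter(words)
    return {
        # Share of word tokens that repeat an earlier word.
        "repetitiveness": 1 - len(counts) / len(words),
        # Telegraphic speech drops articles and copulas.
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / len(words),
    }

print(language_features("the dog is in the yard and the cat is on the mat"))
print(language_features("dog dog yard cat cat mat"))  # telegraphic, repetitive
```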

Telegraphic language can manifest in adults with disabilities under the ADA, including persons with multiple sclerosis and persons with autism. It may also indicate aphasia, and it is a speech pattern seen in some patients with schizophrenia.

The IBM AI program predicted, with 75 percent accuracy, who would develop Alzheimer’s disease. Researchers will be extending the Alzheimer’s work to find subtle changes in language use by people who show no obvious symptoms but who will go on to develop neurological diseases considered disabilities under the ADA, including Parkinson’s, frontotemporal dementia, bipolar disorder, and schizophrenia.

Once again, HireVue’s automated hiring systems are designed to reveal applicant disabilities in violation of the ADA.

Measuring task performance or physiological response?

EEOC guidance states that “if an employer measures an applicant’s physiological or biological responses to performance [of a task], the test would be medical.” Physiological responses are the body’s automatic reactions to a stimulus.

Elements of HireVue’s automated hiring systems are purpose-built to measure physiological responses, including brow furrowing, brow raising, the amount the eyes widen or close, lip tightening, chin raising, smiling, vocal intonations, and speech patterns. As noted by HireVue’s Chief Industrial-Organizational Psychologist, the standard HireVue assessment includes half a dozen questions and yields up to 500,000 data points, all of which become ingredients in a job applicant’s calculated score.
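To see how a short interview can yield hundreds of thousands of “data points,” consider frame-by-frame measurement: each video frame contributes one value per tracked feature. The feature names, frame rate, and answer lengths below are assumptions for illustration, not HireVue’s actual schema.

```python
# Assumed feature names, frame rate, and answer lengths -- not HireVue's
# actual schema -- showing how per-frame measurement compounds.
FACIAL_FEATURES = ["brow_furrow", "brow_raise", "eye_widen",
                   "lip_tighten", "chin_raise", "smile"]
VOCAL_FEATURES = ["pitch", "intonation", "speech_rate"]

frames_per_answer = 30 * 180  # 30 fps video, 3-minute answer
answers = 6                   # "half a dozen questions"

per_frame = len(FACIAL_FEATURES) + len(VOCAL_FEATURES)
total = per_frame * frames_per_answer * answers
print(f"{total:,} physiological measurements feed one applicant score")
# -> 291,600: the same order of magnitude as the 500,000 figure above
```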

HireVue’s systems measure job applicants’ physiological responses, likely making them illegal pre-offer medical examinations that “screen out” applicants with disabilities in violation of the ADA. Employers must ensure that any hiring tool measures only the skills and abilities relevant to the job, rather than reflecting an applicant’s impaired sensory, manual, or speaking skills. Employers that fail to meet this obligation when selecting automated hiring systems face significant financial and reputational risk.
