The Robot Will See You in the Conference Room: Will Future Hiring Practices Eliminate or Perpetuate Bias?

New Media Advocacy Project
Published in The Tilt
5 min read · Jun 29, 2018

by Amy Bergen

The “unconscious” part of unconscious bias — the stereotypes and preferences that shape our decisions without our knowledge — gives the condition its peculiarity. It’s defined by sufferers’ insistence they don’t have it. This bias can be especially dangerous in the hands of someone with power, such as the power to decide who gets a job. Employment discrimination is illegal, but still rampant in hiring practices across professions. It’s just gotten sneaky.

In a 2004 study, economists sent fake resumes to a variety of companies. The resumes were similar in content, but some listed the candidates’ first names as Emily or Greg, some as Lakisha or Jamal. Emily and Greg got roughly 50 percent more callbacks than their counterparts.

Hiring decisions that don’t contribute to systemic oppression can still favor candidates for reasons that seem arbitrary. Faced with two or more job seekers, the interviewer may pick the candidate who attended their alma mater, made small talk about a shared hobby, or simply feels like the right fit.

In general, we don’t expand our networking comfort zones much.

(Image from pymetrics)

We’ve probably all heard that getting a job depends on who you know (this is the premise of the networking site LinkedIn). The truth is, we tend to gravitate toward, and hire, people similar to ourselves. Some of the white hiring managers in the 2004 study may not have been consciously favoring the names they associated with white rather than black candidates. But they may well have known an Emily or a Greg already.

Familiarity means stability and upholding the status quo. It lets bias flourish. The people on top of oppressive power structures stay on top, and job seekers and organizations miss out on the best mutual fits.

What’s the solution? Anti-bias training and affirmative action policies have gone a long way toward countering the problem. But many companies see a faster way to defeat our subliminal prejudices: have computers hire for us.

Artificial intelligence (AI) resume screening started — where else? — in the technology industry, which is publicly confronting its own diversity problem. The start-up pymetrics estimates that women and certain racial minority groups, black and Latinx candidates among them, face a 50 to 67 percent disadvantage in tech hiring.

While humans judge based on factors they aren’t aware of, an AI focuses on the facts. A rudimentary form of computer screening — filtering of resumes by keyword phrases — is already common. Newer programs drop names and other identifying information from resumes. The programs discard data known to advantage or disadvantage a candidate, like employment gaps or names of educational institutions.

The AI sees nothing but relevant work experience.
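To make that concrete, here’s a minimal sketch of what “blind” keyword screening might look like. It’s written in Python with made-up field names and keyword lists, and it isn’t drawn from any of the tools mentioned in this piece.

```python
import re

# Hypothetical resume record; real screening tools work with far richer data.
resume = {
    "name": "Jordan Smith",
    "school": "State University",
    "employment_gap_months": 14,
    "experience": "Led a team of 5 engineers building data pipelines in Python and SQL.",
}

# Fields known to advantage or disadvantage a candidate are dropped before scoring.
FIELDS_TO_HIDE = {"name", "school", "employment_gap_months"}

# Keywords the (hypothetical) role cares about.
KEYWORDS = {"python", "sql", "data"}

def anonymize(record):
    """Return a copy of the resume with identifying fields removed."""
    return {key: value for key, value in record.items() if key not in FIELDS_TO_HIDE}

def keyword_score(record):
    """Count how many target keywords appear in the remaining text."""
    text = " ".join(str(value).lower() for value in record.values())
    return sum(1 for keyword in KEYWORDS if re.search(r"\b" + keyword + r"\b", text))

blind_resume = anonymize(resume)
print(blind_resume)                 # no name, school, or employment-gap data
print(keyword_score(blind_resume))  # 3: "python", "sql", and "data" all appear
```

Even this trivially “blind” filter still runs on human choices: someone had to decide which fields to hide and which keywords count as relevant.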

Several companies are training machine learning algorithms to make objective picks, and hopefully diversify the organizations using the technology. There’s pymetrics, which uses “neuroscience games” to identify traits leading to success in an organization. There’s Mya, the chatbot programmed to ask candidates questions about job performance, sight unseen. There’s HireVue, contracted by corporations like Nike and Intel to assess video interviews for a range of data without bias. There’s Textio, the platform that examines and edits job listings for gendered phrasing and reportedly gave a client a 40 percent leap in female candidates. There’s Ideal, which uses tech to give candidates a “report card” and percentile score.

Some software tackles bias at its root. Blendoor, founded by entrepreneur Stephanie Lampkin, uses algorithms to present objective data to recruiters. Meanwhile, the algorithms collect demographic stats on candidates to identify where favoritism may still come into play in the hiring process. Lampkin is a woman of color committed to holding her industry accountable.

There’s one catch to AI hiring algorithms, however. They’re programmed by humans.

An AI learns what to do by being taught. Humans feed data to the program, and the program interprets this data and makes decisions. What if the data itself is skewed? Many of us view data as a neutral string of statistics. Data — in the way it’s collected, assembled, and interpreted — more closely resembles a story told by humans, full of our preferences and blind spots.

Many scientists and programmers do audit their data for signs of bias. Still, the algorithm’s creators don’t operate in a vacuum independent of their own life experience. Any human team would encode its own thought patterns, probably without knowing it, into the information it feeds an AI.
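A toy experiment makes the worry concrete. The sketch below, written in Python with entirely made-up data (it is not any vendor’s system), trains an off-the-shelf classifier on a fictional hiring history in which candidates from one “insider” school were always hired. The model never sees a demographic label, yet it reproduces the old preference when it scores new candidates.

```python
from sklearn.linear_model import LogisticRegression

# Made-up historical data: [years_of_experience, attended_insider_school].
# In this fictional history, candidates were hired if and only if they came
# from the "insider" school, regardless of experience.
X_history = [
    [2, 1], [3, 1], [5, 1], [7, 1],   # insider-school candidates: all hired
    [2, 0], [3, 0], [5, 0], [7, 0],   # everyone else: all rejected
]
y_history = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression()
model.fit(X_history, y_history)

# Two new candidates with identical experience, differing only in school.
insider_candidate = [[6, 1]]
outsider_candidate = [[6, 0]]

print(model.predict_proba(insider_candidate)[0][1])   # high "hire" probability
print(model.predict_proba(outsider_candidate)[0][1])  # low "hire" probability
```

Nothing in the code says “prefer one group.” The preference arrives with the training data, and the model simply scales it up.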

Facial recognition algorithms developed in East Asian countries, for instance, became better at recognizing faces with East Asian features. A similar algorithm developed in primarily Caucasian countries recognized Caucasian features more easily. This bias may play out in AI screening of video interviews if the AI prefers some faces to others.

Written resumes could run into similar problems. Words and language are never neutral, and come encoded with assumptions we may not know we have.

Could an AI resume reader display a preference for jobs associated with a certain economic background? Could it eliminate candidates based on phrasing used in their resume?

Not only is the tech industry still overwhelmingly male and white, it is, like many industries, insular and full of in-group signals. Might its data favor the jobs, companies, and other status symbols preferred by those “in the know” rather than favoring signs of objective competence?

Unlike humans, algorithms can’t think critically about their own bias.

An AI programmed to favor a certain type of candidate will continue to do so without self-reflection, and on a much larger scale. The ruthless efficiency of an AI means it can parse resumes far more quickly than humans ever could. A biased hiring algorithm may result in not dozens but hundreds or thousands of candidates favored or rejected at once, amplifying the very bias the program was meant to eliminate.

The best approach for now seems to combine algorithmic aid and human oversight. An AI might cull the candidate list, but humans make the final hire. And humans must still do the heavy lifting of considering diversity and representation, beginning with the first datasets they create. The process of unlearning old assumptions is a difficult, ongoing one. But unlike an AI, we can rethink the data imprinted in our brains.

Algorithmic hiring gives us a chance to change our own mindsets, but only if we face the unpleasant truth of our unconscious. Technology often holds up a mirror to society, a “black mirror,” to borrow the title of the hit British TV series about tech horrors. In this case, AI may teach us things about ourselves we’d rather not know.

Amy Bergen is a writer living in Portland, Maine by way of the Midwest. Her first short fiction collection is forthcoming from Magic Helicopter Press. Follow her on Twitter at @trespassers_w.


The Tilt, a publication of the New Media Advocacy Project
Dispatches from the world of human & environmental rights narrative change