Facial recognition technologies are becoming mainstream

Should we be worried?

Aroshi Ghosh
Student Spectator
6 min read · Sep 13, 2020


Credits: Claire Merchlinsky from The New York Times

Bias is integral to human nature. But when bias is introduced by technology, its manifestations can be even more complex and harder to untangle.

Think about this situation for a minute…

You are a young college student from one of the most prestigious universities in the country. You receive an invitation for a preliminary interview for an internship with your dream company. You are asked to use a piece of software called HireVue, where you will not interface with a real person but instead respond to a handful of questions the computer randomly asks you.

On the day of the interview:

You have dressed professionally. Check!

You have adjusted your webcam. Check!

You have shooed the cat out of your room. Check!

Now, you are prepared to talk to the computer. You are unfamiliar with the software and do not know when to hit pause or record, when to start speaking, or how to position your face to catch your best angle. You think your nose appears too large on the screen or your hair is standing up. By the time you figure it all out, you are already nervous, but you respond to the questions to the best of your ability and focus on the content of your answers like the trooper you are. You think it went well and wait for the call that never comes.

Interview with a computer

Some of you may wonder: how is this different from a face-to-face interview? Maybe the problem was in the preparation or the delivery.

After all, the video footage of you talking to a computer was evaluated by AI and facial recognition algorithms. If the algorithms had judged you a match, you would have been contacted for the next round. But it is not as simple as it seems…

As this Washington Post article points out, there is “something profoundly disturbing about a face-scanning algorithm deciding whether you deserve a job”.

Should face-scanning algorithms have the power of gatekeeping?

Most of us realize that it is hard to escape bias in our daily lives, especially in the workplace. Bias may be conscious or subconscious. Bias may be cognitive, informational, systemic, or selective. However, we can all agree that bias can create toxic workplaces and lead to discrimination.

Many artificial intelligence researchers claim that automating the hiring process may, in fact, eliminate human bias. Some advocates go so far as to argue that automated decisions may even be beneficial because they rely on concrete data (see article).

However, when it comes to eliminating bias, facial recognition technologies do exactly the opposite of what they promise. Giving online video interviewing software the power to pre-screen you during the preliminary rounds of a selection process, instead of letting a recruiter speak with you and ask follow-up questions to get more context about your candidacy, is the epitome of perpetuating bias in the system.

Increasingly, companies are relying on automated systems to evaluate your candidacy on the basis of your facial gestures, speaking voice, word choices, and answers. This trend has only grown during the COVID-19 pandemic, as companies prefer to keep in-person engagement to a minimum.
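Vendors do not publish how these systems actually work, so the following is a purely hypothetical sketch meant only to make the mechanics concrete. Every feature name and weight below is invented for illustration; the point is that a pipeline like this collapses an entire interview into a single number.

```python
# Purely hypothetical sketch of an automated interview scorer.
# Every feature name and weight below is invented for illustration;
# vendors do not publish their actual models.

# Features extracted from a candidate's video, normalized to 0.0-1.0.
candidate = {
    "smile_frequency": 0.42,   # facial gestures
    "speech_pace": 0.77,       # speaking voice
    "keyword_overlap": 0.61,   # word choices vs. the job description
    "answer_length": 0.55,     # answers
}

# The weights encode whatever the training data rewarded --
# including any bias baked into that data.
weights = {
    "smile_frequency": 0.30,
    "speech_pace": 0.20,
    "keyword_overlap": 0.35,
    "answer_length": 0.15,
}

score = sum(candidate[f] * weights[f] for f in weights)
print(f"candidate score: {score:.2f}")  # one number decides the next round
```

A recruiter never has to watch the video: whoever falls below some cutoff simply disappears from the pipeline.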

However, using facial recognition technologies and AI algorithms in online interviewing software is a problem because of their reliance on superficial data to assess a person's ability to do a job. A number-crunching accountant may not be the best communicator. A brilliant researcher may look awkward on camera. A non-native speaker with a foreign accent may not register correctly in the system. A non-technical candidate may be dismissed as unsuitable simply for fumbling with the software. A nervous candidate with a softer voice may be read as sneaky or unproductive.

What is even more frustrating is that companies can get away with these discriminatory hiring practices without ever interfacing with a candidate. They no longer need to have uncomfortable conversations over the phone or send emails that could potentially hold them liable for discrimination. Above all, they do not need to invest time or man-hours in the hiring process.

Facial recognition technologies are also increasingly used by schools, police departments, local governments, and financial firms to build a profile of you and determine your social, professional, and economic future.

So are facial recognition systems inherently biased?

First off, AI systems are built on the “priorities and prejudices” of the people who design them and inevitably reflect what researcher Joy Buolamwini calls a “coded gaze”.

Bias is often introduced by the size and type of dataset used to train the AI. For machine learning models to be effective, datasets must be large, complex, and representative. However, the algorithmic models on which facial recognition software is built focus on a very specific kind of “ideal employee”. Based on the available data, a white, English-speaking male is most likely to be classified as the ideal candidate with a high potential to succeed in the upper echelons of the corporate world.
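To see how an unrepresentative dataset alone can skew outcomes, consider this small, self-contained simulation (the groups, numbers, and “feature” are all invented for illustration). A model tuned on data that is 95 percent one group ends up failing the other group, even though both are equally qualified:

```python
# Synthetic demonstration: an unrepresentative training set alone
# can skew outcomes. All groups and numbers here are invented.
import random

random.seed(0)

def make_samples(group, n):
    # Both groups are equally qualified; they differ only in a surface
    # feature (e.g. lighting or camera quality) the model picks up on.
    center = 0.7 if group == "A" else 0.6
    return [random.gauss(center, 0.05) for _ in range(n)]

# Training set: 95% group A, 5% group B.
train = make_samples("A", 950) + make_samples("B", 50)

# A naive model learns one global cutoff from the (mostly group A) data.
cutoff = sum(train) / len(train) - 0.05

# On balanced test data, group B is rejected far more often,
# despite being equally qualified.
for group in ("A", "B"):
    test = make_samples(group, 1000)
    pass_rate = sum(x >= cutoff for x in test) / len(test)
    print(f"group {group}: pass rate {pass_rate:.0%}")
```

Nothing in the code is malicious; the skew comes entirely from who was, and was not, in the training data.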

In most cases, facial recognition technologies do not even require your consent to use your data. A company called Clearview AI now claims to have made a breakthrough in facial recognition by amassing a dataset of faces scraped from various social media sites. Not all of us can claim a squeaky-clean social media profile, and the thought that this data might be used for employment verification or a home loan decision is daunting, to say the least.

Additionally, if you are over 40, a person of color, or a person with a disability, you are even more likely to be misread by biased AI systems.

How can you fight back?

The Black Lives Matter movement has sensitized us to the unethical nature of AI-driven facial recognition software, and companies like Amazon and IBM have since put a hold on their facial recognition offerings. However, the reach of these technologies may extend to other use cases like “predictive policing” or “home loan verification”.

As this TechCrunch article illustrates, the profiles of protestors arrested at BLM marches can be fed into AI systems, creating a feedback loop that may be used to decide how long to sentence a defendant, direct the police to patrol only certain neighborhoods, or deny those same people loans in the future.
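The feedback loop is easy to demonstrate with a toy model. In this invented two-neighborhood example, both areas have exactly the same underlying crime rate, but arrests are only recorded where police already patrol, so the system keeps “confirming” the initial disparity:

```python
# Toy model of a predictive-policing feedback loop. Both neighborhoods
# have the same true crime rate; only the starting patrol split differs.
TRUE_CRIME_RATE = 0.10                    # identical in both neighborhoods
patrols = {"north": 0.60, "south": 0.40}  # share of patrol hours

for year in range(1, 6):
    # Arrests are recorded only where police are already looking.
    arrests = {n: patrols[n] * TRUE_CRIME_RATE for n in patrols}
    # The "predictive" model reallocates patrols in proportion to
    # recorded arrests -- which mirror patrol coverage, not crime.
    total = sum(arrests.values())
    patrols = {n: arrests[n] / total for n in patrols}
    print(f"year {year}: north {patrols['north']:.0%}, south {patrols['south']:.0%}")

# The split stays 60/40 forever: the data can never reveal that both
# neighborhoods are identical, so the initial disparity is fed back
# in as "ground truth" year after year.
```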

Many U.S. cities, including Portland, San Francisco, Oakland, and Boston, are now taking action to outlaw facial recognition surveillance technologies that may be used for identification purposes.

We cannot have this conversation without crediting Joy Buolamwini’s research, which helped persuade Amazon, IBM, and Microsoft to put a hold on facial recognition technology. Buolamwini also founded a non-profit organization, the Algorithmic Justice League, which works to identify the harmful social implications of artificial intelligence and to raise public awareness about the impact of AI “to galvanize researchers, policymakers, and industry practitioners to mitigate AI bias”. If you are interested, you can give back by volunteering for the organization or reporting any incidents of bias that you encounter. You can read more about her work in Fast Company’s recent article here.

You can also listen to her TED talk here.

