How Facial Recognition And Artificial Intelligence Are Spreading Bias
When the iPhone X launched and consumers were first introduced to FaceID, it was met with equal parts curiosity and skepticism. The feature was undoubtedly cool, but people still questioned the accuracy and validity of the technology. Less than a year later, facial recognition has become widely accepted as a part of our digital experience. Facial recognition and A.I. are no longer used only to unlock our phones and find friends on Facebook; they have been adopted by decision makers to inform everything from ad targeting to criminal justice. Yet while the technology is becoming embedded in our daily lives, it still has a long way to go — especially when it comes to women and people of color.
Error, Bias, and “The Coded Gaze”
A few years ago, Joy Buolamwini, a researcher at M.I.T., realized the facial recognition software used to build one of her projects wasn't recognizing her face. The software identified several other students with fairer skin, yet it didn't recognize her face as human. She coined the term "The Coded Gaze" to describe the algorithmic bias that prevents facial recognition software from accurately recognizing faces with darker complexions.
When the M.I.T. Media Lab dug deeper into the bias in these algorithms, it found error rates as high as 35 percent for images of darker-skinned women. Not so surprisingly, women of color, the tech industry's least visible group, are literally unseen by these algorithms. Few studies have examined algorithmic bias with respect to race and gender. However, looking at these biases through the lens of facial recognition, it's clear that biases in the real world can seep into artificial intelligence.
Why Artificial Intelligence Is Getting It Wrong
Facial recognition systems, like the ones used to identify people on Facebook or Google, use machine learning and artificial intelligence to teach devices how to "see" people. They do this by using examples of faces from data sets. Over time, these devices learn to recognize the human face based on this data. The problem is that these data sets aren't very diverse. One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white. When machines learn to process things like faces and language, they inherit gender and racial biases from data sets like these — and ultimately from the people who build them.
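The effect of a skewed data set can be illustrated with a toy experiment. The sketch below is purely hypothetical — synthetic 2-D points stand in for face features, and a naive nearest-example "detector" stands in for a real recognition pipeline — but it shows the core mechanism: a group that supplies only a small share of the training data goes unrecognized far more often.

```python
import random

random.seed(42)

# Hypothetical sketch, not a real face pipeline: each "face" is a noisy
# 2-D feature point, and the detector "recognizes" a face only if it
# closely resembles some training example. Group A supplies 800 training
# faces; group B supplies only 20 -- a skew like the one described above.

def make_faces(center, n, spread=1.0):
    """Generate n synthetic 'faces' as noisy 2-D points around a center."""
    return [(center[0] + random.gauss(0, spread),
             center[1] + random.gauss(0, spread)) for _ in range(n)]

CENTER_A, CENTER_B = (0.0, 0.0), (5.0, 5.0)
train = make_faces(CENTER_A, 800) + make_faces(CENTER_B, 20)

def recognized(face, threshold=0.5):
    """A face counts as recognized if any training example is nearby."""
    return any((face[0] - t[0]) ** 2 + (face[1] - t[1]) ** 2
               <= threshold ** 2 for t in train)

def miss_rate(faces):
    """Fraction of faces the detector fails to recognize."""
    return sum(1 for f in faces if not recognized(f)) / len(faces)

miss_a = miss_rate(make_faces(CENTER_A, 500))
miss_b = miss_rate(make_faces(CENTER_B, 500))
print(f"unrecognized, group A: {miss_a:.1%}")  # near zero
print(f"unrecognized, group B: {miss_b:.1%}")  # much higher
```

The detector itself treats both groups identically; the disparity comes entirely from what it was shown during training. That is the sense in which "The Coded Gaze" is baked in before the system ever sees a user.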
Today, companies are using facial recognition software to target their products toward consumers based on their social media profile pictures. But companies are also experimenting with face identification and other A.I. technology to make judgment calls on major decisions about things like hiring, lending, and housing. That is how algorithmic bias can lead to discrimination in real life. Much like bias in the real world, algorithmic bias can enable discrimination, but on a much larger scale and at a much faster rate.
Algorithms And Discrimination In Real Life
Last year, Amazon had to do away with an A.I. hiring system that taught itself to favor male candidates. And last month, Facebook (no stranger to claims of spreading biased information) was called out for restricting access to ads for things like credit, housing, and jobs. The newest study claims that Facebook's ad delivery algorithm discriminates based on race and gender, even when advertisers are trying to reach a broad audience.
If bias in hiring, lending, and housing isn't enough to raise an eyebrow, consider the compounding effects of discrimination when these systems are incorporated into an already biased legal system. Recently, judges have begun to use A.I. to calculate risk assessments for criminal behavior. However, researchers at the Georgetown Law School estimated that African Americans are the most likely to be singled out, because they are disproportionately represented in mug-shot databases.
Artificial intelligence is reshaping our digital landscape and revolutionizing the way we approach problems. But at what cost? And who is the technology leaving behind? Machines learn everything they know from us, and it's evident that they've been taught our biases. They are not going to un-learn them without transparency and corrective action by humans.