Nuclear Explosion, Imgur

The Plutonium of AI

Eve Hartwell
Published in Digital Shroud · 6 min read · May 31, 2021


Artificial intelligence is one of the newest and most exciting developments of our lifetime. Facial recognition software is making its way into fields from smartphones to healthcare: it is used to unlock phones, secure transactions, target advertising, and more. However, there are growing concerns about how this technology actually performs in our society. We assume that AI runs on cold calculation alone, but in reality it shares the same prejudices we do. Unfortunately, AI and facial recognition carry extensive problems regarding privacy and racial bias.

A recent study highlighted by Harvard University reveals racial inequities in face recognition algorithms. The recognition systems of big tech companies like Microsoft, Amazon, and IBM consistently failed to identify the faces of darker-skinned women, with error rates up to 34% higher than for lighter-skinned men. To make matters worse, the ACLU ran a test in which 28 members of Congress, disproportionately people of color, were incorrectly matched with mugshot photos.

Audit of five face recognition technologies, Harvard University

“One in two American adults is in a law enforcement face recognition network.” — Georgetown Law

Advanced camera surveillance systems are often presented by law enforcement agencies as tools to identify suspects and help find missing people. In practice, they are used to identify faces in real time, run drivers against mugshot databases, and add driver's-license photos to those records. More startling is how overwhelmingly the technology targets low-income communities. Black communities are surveilled excessively by police and are overrepresented in the mugshot databases that face recognition draws on. Moreover, several people of color have been arrested under false pretenses in cases directly tied to this technology. Law enforcement has used this tool for decades, yet little has been done to correct these errors. These systems are not even required to be tested for accuracy and bias before they are deployed.

Camera Wall, Lianhao Qu

The Toxic Formula

The problem lies in how algorithms are trained. Algorithms readily absorb the unintentional biases of the humans who build them. Even the location where the technology is produced plays a big factor in its recognition accuracy across races. Naturally, an algorithm developed in Asia is better at detecting Asian faces than Caucasian faces; the same goes for Europe and other regions. Overall performance depends on the range of ethnicities in the training and test groups, as well as on false-positive thresholds and various factors relating to image quality. In short, an algorithm's accuracy is entirely dependent on what it is fed.
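
To make the threshold point concrete, here is a minimal Python sketch, using made-up score distributions rather than data from any real system, of how a single global match threshold can yield very different false-match rates for groups whose similarity scores were shaped by unbalanced training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores for non-matching pairs ("impostors").
# Group B's impostor scores sit higher, as might happen when a model has
# seen far fewer training examples of that group. All numbers are invented
# purely for illustration.
impostor_scores = {
    "group_A": rng.normal(loc=0.30, scale=0.10, size=100_000),
    "group_B": rng.normal(loc=0.42, scale=0.10, size=100_000),
}

threshold = 0.60  # a single threshold applied to everyone

for group, scores in impostor_scores.items():
    false_match_rate = float(np.mean(scores >= threshold))
    print(f"{group}: false-match rate at {threshold} = {false_match_rate:.4f}")
```

Even though everyone is judged against the same threshold, the group whose impostor scores sit higher gets falsely matched far more often, which is exactly the kind of disparity the audits above measured.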

AI is extremely unpredictable, with complex and varied outputs, so measuring its capabilities is difficult. This means there is no way to completely guarantee its accuracy in the first place, and designing a fair algorithm becomes a significant challenge. In fact, the whole building process, from data gathering to modeling, testing, and deployment, should be reevaluated, since every stage can introduce faults. Measuring performance has been especially inconsistent because of these factors, and academics and experts cannot even reach a consensus on how to address the disparity. Not to mention, there are more dangerous issues, such as corrupt practices in police departments like the NYPD, that override whatever fairness these systems have.

Regulating Nuclear Waste

Thankfully, these red flags influenced Microsoft and Amazon to follow IBM's decision to halt sales of facial recognition technology, with IBM specifically citing the implications for mass surveillance, racial profiling, and human rights violations. Now we must target the main culprits behind these dire concerns: lack of testing, limited diversity, inaccurate models, and low-quality equipment.

Algorithms must begin learning from a diverse pool of subjects rather than mostly white men. Machines cannot be expected to perform well on data they were never given. Additionally, building these systems with a diverse development team brings distinct lived experiences to the work, helping ensure that issues are properly identified and that no group is undersampled or misrepresented. Moreover, for algorithms to work properly, camera settings must be optimized to capture darker skin tones, because default settings produce low-quality database images. Consent is another important part of the solution: it is imperative that every participant in these datasets has willingly agreed to have their face used.

Testing is a crucial step for any robust, productive machine, and it is even more essential for a complex algorithm. Since this process is not exactly valued by companies, the National Institute of Standards and Technology (NIST) conducts voluntary tests every four years to hold companies accountable. However, we should also encourage companies to run internal tests on their own methodologies to check for bias. They need to get into the habit of using rigorous, well-reasoned testing procedures rather than relying on a single overall accuracy number to judge how well their systems truly perform.
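
As a rough illustration of why a single accuracy figure can mislead, the sketch below compares overall accuracy with per-group error rates on an invented toy dataset (the counts are fabricated for illustration and come from no real vendor test):

```python
from collections import defaultdict

# Toy evaluation records: (demographic group, was the prediction correct?).
results = (
    [("lighter-skinned men", True)] * 980 + [("lighter-skinned men", False)] * 20
    + [("darker-skinned women", True)] * 130 + [("darker-skinned women", False)] * 70
)

# The single headline number looks respectable.
overall_accuracy = sum(ok for _, ok in results) / len(results)
print(f"Overall accuracy: {overall_accuracy:.1%}")

# The disaggregated view tells a different story.
totals, errors = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    errors[group] += not ok

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.1%} "
          f"({totals[group]} samples)")
```

The aggregate accuracy comes out above 90%, yet one group's error rate is more than an order of magnitude higher than the other's, and the lopsided sample counts hint at the undersampling problem as well.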

1984: “Big Brother is Watching You”, George Orwell

The implications of algorithmic error are serious enough that legislation is needed to guarantee that recognition software preserves privacy and is nondiscriminatory. The threat of mass surveillance, especially of Black communities, must be eliminated. Luckily, these demands have been met with some legislative action. In April 2019, the Algorithmic Accountability Act was introduced to enable the FTC to regulate companies and tackle issues of data privacy, algorithm training, and accuracy. A recent police reform bill has also proposed restrictions on the use of face recognition technology. Several other instances of pushback from the global community have sparked some hope for a less heavily policed environment in the future.

“Face recognition, when it’s used most aggressively, can change the nature of public spaces.” — Alvaro Bedoya

Do We Need Bombs?

Would creating an extensive database of faces truly be valuable to our society? Microsoft researcher Luke Stark has deemed facial recognition “intrinsically socially toxic” and believes the technology should be banned for most practical purposes. The consequence of its use is the sacrifice of privacy and of the right to act freely in public. Clare Garvie of Georgetown Law asserts that the use of face recognition technology is setting our society up for a perpetual police lineup, and that risk only grows as America's incarceration rate continues to climb. Soon enough, we may be relying on AI to make decisions about college admissions, employment, credit, and several other areas of daily life. But if algorithms are making the same mistakes humans do, advancing this technology may not even be worth it in the long run. The benefits of facial recognition are narrow compared to what can be done without it; it is simply not necessary for many tasks, and it certainly should not be used for life-changing ones.

The stakes are high. Machines now have the power to decide the freedom of citizens, and it would be a huge misstep to let artificial intelligence simply reproduce the prejudices of human judgment. The benefits and future of facial recognition must be weighed against the deficiencies described above. At this moment, we must decide clearly whether these algorithms are worth advancing or best left behind.

References

Marks, P. (2021). Can the biases in facial recognition be fixed; also, should they? Communications of the ACM, 64(3), 20. https://doi.org/10.1145/3446877

Racial Discrimination in Face Recognition Technology

The Perpetual Line-Up

Facial Recognition Faces More Proposed Bans Across U.S.

Facial-Recognition Software Might Have a Racial Bias Problem

Hold Artificial Intelligence Accountable
