Face Recognition: Bias and Privacy Concerns

Chris Dare 🔥
Published in The Startup
Dec 5, 2020

An audio version of this article is available on SoundCloud: https://soundcloud.com/dexios1/face-recognition-privacy-and-bias-concerns.

Introduction

Seminal improvements to face recognition technology in recent years have earned it a seat at the table of biometric authentication and recognition innovations. Countries have taken very different approaches to its adoption and deployment. Consider opposing ends of the spectrum: some countries, such as China, widely embrace its use, while European countries under the GDPR are constrained by policy and legislation. Other countries whose technology has fast outpaced their law, such as the USA, have quickly begun to develop additional policies and legislation regarding the use of facial recognition and the data collection required to make it possible [1, 2]. In this article, I highlight some concerns relating to bias and privacy in the deployment and use of face recognition technologies. Let’s fire away!

Racial and gender bias in face recognition

Research has shown that bias inherent in machine learning datasets and models can reflect and cause real harm in matters relating to race and gender [3]. This first became a hot topic in natural language processing (also my favorite part of AI) but has since been demonstrated in computer vision as well, specifically in face recognition. Recent studies have highlighted serious risks in current state-of-the-art (SOTA) face recognition technologies, especially with regard to gender misclassification. In a famous paper, Joy Buolamwini and Timnit Gebru demonstrated bias in the gender classification of commercial face recognition systems: Face++, Microsoft’s Face API and IBM’s Watson Visual Recognition all yielded significantly high error rates for dark-skinned women, with rates as high as 36% [4]. Joy later shared the results of sample tests in her poem “AI, Ain’t I A Woman?” on YouTube. Face recognition technologies well respected by the public (at the time) gave high confidence scores for their gender misclassifications of prominent and influential women. Images of famous personalities such as Oprah Winfrey, Michelle Obama, Serena Williams and Shirley Chisholm were all misclassified as men, with confidence scores as high as 89% [5]. In use cases such as search or identification, such errors could place these individuals on the wrong or unjust side of the law, or deny them access to a utility or service they rightfully deserve.
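
To make this concrete, here’s a minimal sketch of the kind of disaggregated audit at the heart of work like Gender Shades: instead of reporting one overall accuracy, you compute error rates per intersectional subgroup. The records below are invented placeholders (the real study evaluated commercial APIs on the Pilot Parliaments Benchmark); only the bookkeeping matters here.

```python
# A minimal sketch of a disaggregated audit in the spirit of Gender Shades:
# compute the error rate per intersectional subgroup instead of one overall
# accuracy. The records below are invented placeholders.
from collections import defaultdict

# (true_gender, skin_type, predicted_gender) for each test image
records = [
    ("female", "darker",  "male"),
    ("female", "darker",  "male"),
    ("female", "darker",  "female"),
    ("female", "lighter", "female"),
    ("male",   "darker",  "male"),
    ("male",   "lighter", "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true, skin, pred in records:
    totals[(true, skin)] += 1
    errors[(true, skin)] += int(pred != true)

# Large gaps between subgroup error rates are the disparity the paper reports.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group[0]:>6}, {group[1]:<7} skin: error rate {rate:.0%}")
```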

Privacy risks

The law also needs to grow with the ubiquity of face recognition in order to meet the security and privacy threats that this technology faces. One such threat is deepfakes. Deepfakes use deep learning, a prominent form of artificial intelligence, to map the face of one person onto another person. Think of it like swapping faces, not only in photos but in videos as well. One of the most famous deepfake videos on the internet is Jordan Peele’s impersonation of Barack Obama, and oh boy, this version of Obama said some interesting things [6].
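
If you’re wondering how that face mapping actually works, a common recipe behind early face-swap tools is an autoencoder with one shared encoder and a separate decoder per identity: to swap, you encode a frame of person A and decode it with person B’s decoder. Here’s a heavily simplified sketch of that idea; the tiny networks and random tensors are placeholders I made up, and a real pipeline adds deep convolutional networks, face detection, alignment and blending.

```python
# A heavily simplified sketch of the shared-encoder / two-decoder
# autoencoder idea behind early face-swap tools.
import torch
import torch.nn as nn

LATENT, PIXELS = 64, 32 * 32  # toy sizes

encoder = nn.Sequential(nn.Linear(PIXELS, LATENT), nn.ReLU())        # shared
decoder_a = nn.Sequential(nn.Linear(LATENT, PIXELS), nn.Sigmoid())   # person A
decoder_b = nn.Sequential(nn.Linear(LATENT, PIXELS), nn.Sigmoid())   # person B

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(16, PIXELS)  # stand-ins for aligned face crops of A
faces_b = torch.rand(16, PIXELS)  # stand-ins for aligned face crops of B

# Both decoders train against the SAME encoder, so the latent space learns
# pose and expression features common to both identities.
for step in range(200):
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person A's face, render it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
```
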
The prominence of deepfakes in 2019 also brought attention, as well as criticism and privacy and security concerns, to facial recognition technologies, especially in China, where face recognition has been widely adopted for many important use cases in governance and business (e.g. surveillance and payment authorization) [7]. The Beijing News, in particular, highlights the situation in China, writing that “artificial intelligence is not only a technical problem, but also a governance examination question”. They argue that personal privacy is growing scarce with the advent of face recognition and the emergence of deepfakes, as seen in the Chinese app Zao (at the time of their writing). What happens when someone steals your face and uses it for nefarious purposes? The news agency concluded that legislation needs to catch up with artificial intelligence, and that companies currently need to observe ethical concerns in the design and use of AI models for face recognition [8]. (It’s important to note that China is currently in the process of enacting its data protection and privacy law.)

Final thoughts

The fight to ensure a fair and balanced use of facial recognition technologies is not yet over. However, there have been promising responses and results, and we have since seen corporations respond to this call. Companies such as Alipay have added additional verification to determine whether an image being used to authorize a payment comes from a video, a still photo or a fresh live scan of a person’s face [9] (a toy illustration of this idea follows below). That’s a great step toward progress; the next one would be accurately detecting deepfakes. In the US, Amazon also published best practices for using its facial recognition API [10]. Following police brutality against people of color earlier this year (2020) and the accompanying protests, big tech companies such as Amazon, Microsoft and IBM discontinued programs providing law enforcement agencies with access to their facial recognition technologies. While these decisions have been implemented in many flavors, they are still a step in the right direction. Many critics do, however, debate whether this will be only a nine-day wonder [11].
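
As an aside on what that kind of verification can involve: one weak liveness signal is that a replayed still photo barely changes from frame to frame, while a live capture shows blinks and micro-motion. The toy sketch below illustrates only that decision. I don’t know Alipay’s actual method, and production systems combine much stronger signals (depth sensing, texture analysis, challenge-response).

```python
# A toy sketch of ONE weak liveness signal: a replayed still photo is
# nearly identical from frame to frame, while a live capture shows small
# natural variation. Everything here is invented for illustration.
import numpy as np

def liveness_score(frames: np.ndarray) -> float:
    """Mean absolute difference between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(48, 48))  # stand-in for a face crop

# Spoof: the same still image presented for every frame.
replayed = np.stack([scene] * 10)

# Live: the same scene plus small per-frame variation.
live = np.stack([scene + rng.normal(0, 8, size=scene.shape) for _ in range(10)])

THRESHOLD = 1.0  # hypothetical; a real system would tune this on data
for name, frames in (("replayed photo", replayed), ("live scan", live)):
    score = liveness_score(frames)
    verdict = "live" if score > THRESHOLD else "spoof"
    print(f"{name}: score={score:.2f} -> {verdict}")
```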

Face recognition, just like many other technologies, promises a great way to ease our lives and activities. However, until we develop the right laws and policies to view and manage this technology, it may cause more harm than good. There are many lessons we can learn from our experiences, good and bad, with this technology. Ultimately, it is of utmost importance that the voice of the public is strongly considered in the design and deployment of such technologies.

Joy’s poem

“Obliged to you for hearing me, and now old Sojourner ain’t got nothing more to say.” — Sojourner Truth.

I’ll conclude with Joy’s poem, “AI, Ain’t I A Woman?”. Enjoy: https://www.youtube.com/watch?v=QxuyfWoVV98

Other notes

A lot has happened since I drafted this article in October 2020. A lot will happen after I publish it as well. I will use this section to provide additional notes which I believe are relevant and, as Jack Dorsey puts it, “point to a broader conversation” on this topic. I will also hyperlink references here instead of using IEEE style, as I feel hyperlinks are more appropriate for this exploratory section of the article.

November 2020

We had a talk concerning bias in AI at CMU-Africa with GIZ’s FAIR Forward and Stanford’s Renata Avila. Renata’s presentation was very informative. As aspiring researchers and policy makers, our students took away pointers on designing and developing ‘fair machine learning models’. Among the many things she discussed, she shared information on efforts some researchers are making to mitigate bias in AI models. If you’re interested in learning more about it, read it here.

December 2020

I was just about to publish this article when the bomb dropped: Timnit Gebru gets ‘fired’ from Google. $%#@!?!

Backstory: Timnit Gebru is a world-renowned AI researcher best known for her work on bias and ethics in AI. I mention her paper concerning accuracy disparities in gender classification in this article [4]. Timnit and a few colleagues were asked by staff at Google to dissociate Google from a paper they had co-authored, for reasons that remain unclear. Timnit gave them an ultimatum: drop the request, or she would leave. Google chose the latter.

The details of her departure from Google are out of the scope of my article and there’s also a lot we still don’t know surrounding these circumstances. Consequently, I will refrain from giving or linking details so as not to accidentally spread misinformation.

But here’s why I bring this up: because of how important Timnit’s work is in the area of bias in AI, I believe this will draw a lot of attention to her papers, which is a good thing. Hopefully, we will still see some collaboration between Jeff Dean and Timnit in the future; Twitter convos, at least. The paper she recently co-authored is titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” If you have a copy of or link to the preprint, kindly leave it as a comment on this article. I look forward to reading it myself.
If you’d like to read Timnit’s other papers, check them out on PapersWithCode.

What a way to enter NeurIPS this week though!

A new note to data scientists, researchers, and ML engineers/enthusiasts:

NeurIPS 2020!!

At the time I’m writing this, NeurIPS 2020 has just kicked off!!! Super exciting! (Unfortunately for me, I have to juggle NeurIPS and CMU homework :-\ ). We’re 40 minutes in, and both ongoing presentations are hitting the nail on the head when it comes to fairness in AI. Speakers from Netflix and scikit-learn are discussing ways to mitigate bias in reinforcement learning (Netflix) as well as in traditional machine learning (scikit-learn). I’ll try to provide updates on NeurIPS with respect to bias and privacy sometime later. FYI, p(doing_this) = 0.4
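
In that spirit, here’s a small, self-contained sketch of one classical mitigation idea for traditional machine learning: reweight training samples so that group membership and label look statistically independent (the Kamiran-Calders reweighing scheme), then compare each group’s selection rate before and after. The synthetic data is made up for illustration and is not from the talks; libraries such as Fairlearn offer proper tooling.

```python
# A sketch of Kamiran-Calders reweighing: weight each training sample so
# that group membership and label look independent, then compare per-group
# selection rates. The synthetic data below is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
group = rng.integers(0, 2, n)                       # protected attribute
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5  # features shifted by group
y = (x.sum(axis=1) + rng.normal(0, 1, n) > 1).astype(int)

def selection_rates(model, x, group):
    """Fraction predicted positive within each group."""
    pred = model.predict(x)
    return [pred[group == g].mean() for g in (0, 1)]

plain = LogisticRegression().fit(x, y)

# Reweighing: weight = P(group) * P(label) / P(group, label), so each
# (group, label) cell counts as if group and label were independent.
weights = np.ones(n)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        weights[cell] = ((group == g).mean() * (y == label).mean()) / cell.mean()

reweighted = LogisticRegression().fit(x, y, sample_weight=weights)

for name, model in (("plain", plain), ("reweighted", reweighted)):
    r0, r1 = selection_rates(model, x, group)
    print(f"{name}: selection rates {r0:.2f} vs {r1:.2f} (gap {abs(r0 - r1):.2f})")
```

Reweighing is only one of many pre-processing approaches, and how much of the gap it closes depends on how much group information the features still carry.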

References

  1. BBC News, “Facial recognition: School ID checks lead to GDPR fine”, 2019. [Online] Available: https://www.bbc.com/news/technology-49489154 [Accessed: 13-Oct-2020]
  2. 116th Congress (2019–2020), “S.847: Commercial Facial Recognition Privacy Act of 2019”, Congress.gov, Library of Congress, 2019. [Online] Available: https://www.congress.gov/bill/116th-congress/senate-bill/847 [Accessed: 14-Oct-2020]
  3. T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama and A. Kalai, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” in Computing Research Repository (CoRR), 2016.
  4. J. Buolamwini and T. Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” in Proceedings of Machine Learning Research, no. 81, pp. 1–15, 2018.
  5. J. Buolamwini, “AI, Ain’t I A Woman?”, YouTube, 2018. [Online] Available: https://www.youtube.com/watch?v=QxuyfWoVV98 [Accessed: 12-Oct-2020]
  6. BuzzFeedVideo, “You Won’t Believe What Obama Says In This Video! 😉”, YouTube, 2018. [Online] Available: https://www.youtube.com/watch?v=cQ54GDm1eL0
  7. G. Shao and E. Cheng, “The Chinese face-swapping app that went viral is taking the danger of ‘deepfake’ to the masses”, CNBC, 2019. [Online] Available: https://www.cnbc.com/2019/09/04/chinese-face-swapping-app-zao-takes-dangers-of-deepfake-to-the-masses.html [Accessed: 14-Oct-2020]
  8. The Beijing News, “AI face-changing causes controversy; user privacy cannot be violated”, 2019. [Online] Available: http://epaper.bjnews.com.cn/html/2019-09/01/content_763972.htm?div=-1 [Accessed: 14-Oct-2020]
  9. G. Shao and E. Cheng, “Growing backlash in China against A.I. and facial recognition”, CNBC, 2019. [Online] Available: https://www.cnbc.com/2019/09/06/ai-worries-about-the-dangers-of-facial-recognition-growing-in-china.html [Accessed: 14-Oct-2020]
  10. Amazon Web Services, Inc., “Use cases that involve public safety”. [Online] Available: https://docs.aws.amazon.com/rekognition/latest/dg/considerations-public-safety-use-cases.html [Accessed: 12-Oct-2020]
  11. K. Hao, “The two-year fight to stop Amazon from selling face recognition to the police”, MIT Technology Review, 2020. [Online] Available: https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/ [Accessed: 12-Oct-2020]
