Going Face-to-Face With Facial Recognition

Johanna Jamison
Published in The Startup
Oct 21, 2020 · 9 min read

Co-authored by Johanna Jamison and Sumanth Channabasappa

Once the dominion of science fiction (e.g., Star Trek), facial recognition technology has not only become reality this century, but awareness of its benefits and pitfalls has also risen with its heightened presence in the news over the last few months. We hope to shine some light on the reasons for this ascent and the myriad thoughts and actions it has raised. To be sure, all the complex issues, implications, and ethics surrounding facial recognition technology are far too important and expansive to cover in this piece. We also recognize there is much more worth exploring, and a variety of valid and informed views on the subject. Our aim is for this piece to be informative, unbiased, and thought-provoking as the topic of facial recognition technology continues to gain attention and relevance.

To begin, we offer a brief overview of facial recognition technology and how it is becoming increasingly commonplace. We include the underlying reasons why this is raising concerns, and provide references to reactions from corporations and civic leaders alike, especially around its use for surveillance and law enforcement. We then wrap up with thoughts on how what we have presented may be expanded to ensure that key technical and policy issues are addressed, so that we can reap the many benefits of this technology while minimizing its consequences.

As humans, we are used to recognizing faces. The ability to visually identify our loved ones, friends, and acquaintances amidst a crowd stems from our capacity to store and recollect facial features. This ability works not only face-to-face or within the now ubiquitous virtual encounters, but also during non-real-time viewing of videos or photos. Our brains store faces in memory, use what our eyes see, and leverage biological matching algorithms to identify people. Sometimes our brains do this even when we haven't seen someone in ages, or when features have changed, such as with new hairstyles. Of course, we should acknowledge our shortcomings, such as the difficulty of identifying people of races other than our own.

In addition to its evolutionary utility, we have also used this ability within our communities: for instance, to referee games, and to either confirm people's innocence or convict them of crimes. With regard to the latter, studies have shown the unreliability of human eyewitnesses, and steps are being taken to mitigate these errors, such as through The Innocence Project.

Of course, we haven't always been able to have one or more humans at every event that could have benefited from an eyewitness. Cameras have been a way to address this. The desire for more than passive "eyes" (e.g., for automated identification of humans) led to the development of facial recognition technology, from limited automation efforts starting in the 1960s to the seamless phone unlocking of today. In a simplistic abstraction, the technology mimics our own machinery: it stores facial data in memory, uses cameras to see, and leverages matching algorithms to identify. Unfortunately, these anthropomorphic constructs also share a few of our flaws. Stored pictures or videos may be too grainy or blurry to be useful; learning algorithms typically need to be trained and are only as good as their training data sets (similar to race, for us); and matching is only as competent as the algorithm authors' abilities.
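To make the store-see-match abstraction above concrete, here is a minimal, illustrative sketch (not any vendor's actual method): each enrolled face is reduced to a numeric feature vector, and a new image is identified by finding the nearest stored vector within a distance threshold. The names, vectors, and threshold below are invented for illustration; real systems derive these vectors from deep neural networks.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.5):
    """Return the name of the closest enrolled face, or None.

    `gallery` maps names to stored feature vectors (the "memory");
    `probe` is the vector extracted from a new camera frame.
    A threshold prevents matching faces that are merely "least unlike."
    """
    best_name, best_dist = None, float("inf")
    for name, vec in gallery.items():
        d = distance(probe, vec)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy "enrolled" faces; the vectors are made up for this sketch.
gallery = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
print(identify([0.85, 0.15, 0.3], gallery))  # near alice's vector
print(identify([0.0, 0.0, 0.0], gallery))    # no enrolled face is close enough
```

Note how the sketch also exposes the flaws described above: a blurry image yields a distorted probe vector, and a gallery built from a narrow population makes mismatches more likely for everyone outside it.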

However, facial recognition algorithms continue to become more efficient, and they are an increasingly commonplace means of identification, even making it easier to order fast food. Coupled with the ubiquity of cameras, increasingly granular satellite imagery, and the ease with which algorithms are developed and deployed, unchecked facial recognition technology is poised to become pervasive, even with masks. This has raised privacy and trust concerns over the years. The presence of plentiful datasets (e.g., via social media and search engines) that can be mined easily for various (and not always initially disclosed) purposes, especially malevolent ones, has magnified worries. The availability and the use of such data by law enforcement are two different considerations, especially in countries that have laws to protect privacy. The lack of adequate and legally usable data sets for training facial recognition algorithms has led companies like Axon to ban the use of facial recognition in their body cameras, for reasons explained in this article by the Ombudsman of their AI Ethics Board. However, this hasn't stopped the use of facial recognition technology by law enforcement, and that has raised unease.

This amplification of worries is evident in recent news items, which have heightened the profile of these issues and drawn attention from a broad audience. For example, flaws in facial recognition algorithms have led to wrongful criminal accusations, despite an easily proven alibi in one prominent case (Ars Technica, NYT's The Daily). As previously alluded to, research shows the risk of misidentification is higher for non-white, non-male individuals, since facial recognition technology is predominantly 'trained' with pictures of Caucasian men. Privacy advocates worry, alongside some Black Lives Matter protestors, that footage of civil disobedience will be analyzed using facial recognition software and used for individual tracking and surveillance. In at least one case, in New York City, this very situation occurred. While notable on their own, today's historical context, especially the momentum for police reform in light of recent incidents that have garnered national and international attention, makes these developments even more significant. Even the pandemic has spurred a flurry of dialog about whether and to what extent facial recognition might be used to enforce quarantine and social distancing measures, and perhaps enable contact tracing. Legalities around data collection, storage, retention, the ability to opt out, and beyond remain unclear, which has only exacerbated these and related concerns.

Concurrently with, and in many cases in response to, this controversy, both major corporations and a few municipalities are distancing themselves from facial recognition technology. IBM was the first major company to pull its product from the market. Amazon and Microsoft soon followed, announcing that their solutions cannot be used for police purposes for one year and indefinitely, respectively. All called for federal regulation of the controversial technology. This is in stark contrast to the approach of companies such as Clearview.ai, whose seemingly indiscriminate and secretive approach to growing their client base raises alarms for many industry insiders. While on-screen stories have woven dystopian tales such as Eagle Eye and The Circle, they have also illuminated the possible threats when deepfakes are coupled with facial recognition, as in shows such as The Capture.

From the government perspective, more than 13 U.S. cities have banned the use of facial recognition technology entirely. Many did so preemptively, before any of their agencies had access to or use of facial recognition resources. These cities span the continuum from big to small, east coast to west: Boston, San Francisco, and Portland, ME are just a few examples. New cities are continually joining these ranks; Pittsburgh has just announced it is considering doing the same. In Portland, OR, what is widely considered the strictest ban on facial recognition, encompassing not only government but also private businesses, was passed in early September.

On the state scale and abroad, rigorous data privacy standards that span facial recognition and far beyond are being considered and adopted. The California Consumer Privacy Act (CCPA) is one example here in the U.S., while the General Data Protection Regulation (GDPR) is the European standard. Earlier this year, police in London were making plans to expand the use of facial recognition for law enforcement purposes, despite (now outdated) reports that the technology hadn't reduced crime in that city. More recently, police use of facial recognition technology in the U.K. was ruled unlawful. The U.S. Congress has already seen efforts such as The Facial Recognition and Biometric Technology Moratorium Act of 2020, which proposes a ban on facial recognition tech by federal agencies and creates incentives for local and state prohibitions. Singapore has taken a different tack, employing facial verification as part of its government-issued identification system.

Here in Colorado, there are no specific laws regarding facial recognition as far as we know. Until recently, available (though outdated) data suggested the state had little to no facial recognition use. A recent story from the Denver Post reported otherwise, revealing the use of facial recognition by law enforcement in several jurisdictions across the state: since 2011 in cooperation with the Department of Motor Vehicles (as allowed by state statute) using driver's license photos, and starting in 2016 via a commercial software tool using jail booking photos (since disabled by the provider, citing a lack of policy). While the recent police reform legislation ('Enhance Law Enforcement Integrity') passed out of the statehouse is silent on facial recognition, it could give context and clues for how lawmakers may approach the topic. Furthermore, it mandated that all law enforcement agencies in the state use body cameras, which could potentially lead to an increase, rather than a decrease, in the use of facial recognition on the gathered footage. Denver specifically has voluntarily opted out of facial recognition for law enforcement purposes, even though a local group failed to gather enough signatures for a ballot initiative that would have banned its use.

Of course, not all applications of facial recognition technology are as consequential and divisive as those in the law enforcement realm. In some scenarios, such as phone unlocking and food ordering, the tool can seem helpful. As decision-makers struggle to find pathways toward a 'return to normal' (or some semblance of it), facial recognition technology has been identified as a potential identity verification tool to facilitate and control entry to and movement within work and other environments, enhancing safety. It can also analyze facial or skin features to identify health concerns such as arrhythmia. If corroborated by other evidence and appropriately verified, facial recognition may play a useful role even in the more controversial policing use case, especially if the algorithms are better and more extensively 'trained' with a larger and more diverse array of facial images.

There may also be simple ways to mitigate concerns via policy. For instance, to restore privacy, lawmakers could decide that facial recognition technology cannot be applied to video footage except with a legal warrant. Or they could require facial recognition systems to purge recognition data when people are cleared, a form of digital face blindness (à la prosopagnosia). There are similar considerations for providers of facial recognition products. For those interested in learning more, we recommend the work by the Future of Privacy Forum around categorization and uses, and its Privacy Principles for facial recognition technology in commercial applications.

Amidst all of the complexity and controversy surrounding the topic of facial recognition technology, particularly when used for law enforcement, one thing is clear: this is far from the last we'll hear of it. As the technology continues to advance and responses to it grow, it will be increasingly important for us and our representatives to have a good understanding and awareness of this topic and its various implications (e.g., privacy). While facial recognition has immense potential in applications such as identity verification and health monitoring, it also faces a litany of pitfalls, including bias and vulnerabilities that might be exploited by deepfakes, taking a page from thus far fictional (as far as we know) scripts.

Thus, technologists developing facial recognition technology should pay attention to the consequences of the narrow datasets upon which algorithms are trained, and pursue ways to ensure that technological shortcomings don't inadvertently and negatively impact our societies. Like all technology, facial recognition is a tool that may be helpful when used properly and harmful when used improperly. Our hope is that civic leaders will get ahead of what is becoming a messy regulatory patchwork and craft appropriate macro-level (e.g., international, national) policy frameworks while allowing more micro levels of government the ability to institute locally appropriate rules. This piece only scratches the surface, and offers a foundation for further research, developments, and decisions.

Acknowledgements: The authors would like to recognize and thank the following individuals for their insightful reviews and feedback, which have been considered or incorporated: Anatola Araba Pabst, Tyler Svitak, and Julia Richman (in order of reviews received).

About the Authors: Johanna Jamison is the Program Director at the Colorado Smart Cities Alliance (CSCA), the first state-wide coalition of public, private, federal research, academic, and business sector leaders. Sumanth Channabasappa (LinkedIn) is a co-founder of the CSCA.
