Biometrics: Ethical Implications of Future Authentication Systems

Var City UW
Dec 27, 2018

This case study was written by Kari Bergstedt, Theresa Tran, and Kay Waller, sophomore students in Human-Centered Design and Engineering (HCDE) at the University of Washington.

Introduction

This report addresses the use of biometric technology with the objective of informing the public about the growing number of ethical concerns associated with the use, storage, and accessibility of biometrics. The study of biometrics is the analysis of biodata, the biological and physiological data gained from a human subject [1]. Biometric technology can take the form of wearable monitoring devices, authentication systems, or verification systems, but this report specifically explores the ethics surrounding authentication and verification technologies. Biometric authentication and verification technology is the "technique of automatically recognizing individuals based on their biological or behavioral characteristics," and does not include devices such as heart or sleep monitors [2].

Although biometrics has broadened in definition and grown in popularity, biometric technology is not a new concept. The first scientific paper discussing biometric technology was published in 1963 by Mitchell Trauring. The paper examined how computer algorithms could identify someone based on the unique ridges and valleys on their fingertips [3]. Since then, biometric technology has rapidly risen in popularity, giving rise to devices that can use fingertips, irises, and even a subject's walking gait to identify them [1]. However, local and national governments have been unable to keep up with this growth, and few regulations exist on biometric technology. Consequently, biometric sensors and scanners are being implemented with little consideration for unintended negative consequences.

This report examines some of the gray areas surrounding the privacy and legality of biometric technology and aims to educate the public on accessibility and on the discriminatory algorithms that certain biometric technologies use. It first addresses the background of biometrics (how biometric technology works, the advantages and disadvantages of biometrics, and applications of this technology) in order to establish a baseline for discussing ethical concerns. Once the scope and definitions have been established, the report considers ethical concerns such as privacy and discrimination, including discussions of storage, third-party conflicts of interest, legality, social exclusion, biometric failures, and biometrics at national borders. The report ends by summarizing the main points and providing recommendations.

Background

Biometrics is a broad term, and the technology is implemented in everything from pacemakers to iris scanners. To narrow the scope of the report, biometrics will be defined as the measuring of unique physical traits that “enable the body to function as evidence [through] verification and identification practices” [4]. The definition excludes fitness and health trackers and other devices that monitor body or brain activity. This section will define biometrics and biodata, explain how the technology works, analyze the different types of biodata, and highlight applications of these systems.

Definition of Biometrics

For a trait or a physical feature to qualify as biodata, it must possess certain characteristics:

1. The trait or feature must be universal. Everyone must have at least one, it must remain constant over time, and it must not commonly be "lost to accident or disease" [1].
2. The attribute should have properties unique to each individual [1].
3. The feature or trait should be easy to measure without violating the privacy of the individual [1].
4. For data protection, the data should not be able to be imitated [1].

Consequently, the list of biodata currently in use for biometric technology includes: "fingerprint, facial features, hand geometry, voice, iris, retina, vein patterns, palm print, DNA, keystroke dynamics, ear shape, odor, [and] signature" [1]. The different types of biodata are organized into two categories, physiological and behavioral characteristics, as shown in Figure 1 below [5].

Figure 1: This diagram depicts all the different types of biodata, organized into two categories: Physiological, and Behavioral characteristics [5].

For company and information security, biometric authentication and identification are increasingly preferred over Personal Identification Numbers (PINs) and passwords because they improve Human-Computer Interaction (HCI). By eliminating the need to remember a PIN or a password, users can complete their tasks efficiently. Additionally, biometric authentication and identification systems are seen as more secure because the feature is unique to each user and is difficult to replicate. While passwords and PINs have a tendency to be written down on sticky notes and stuck in public places, biometric technology allows users to complete their tasks without compromising personal information. Despite these common conceptions about biometric data, many security concerns still surround the storage of biodata, which will be discussed later in this report.

How Biometrics Work

Although the computer algorithms behind biometric technology are complicated, the underlying process is simple. The first step of every biometric system is enrollment [1]. At this stage, an individual's unique biodata is collected and stored, possibly alongside other information such as a name or ID. Once the individual is in the system, each time they want to access the locked task they must scan the feature again [1]. In verification, the computer performs a one-to-one comparison of the current scan against the stored data for that individual. In identification, the system performs a one-to-many comparison across the database to find the stored record that best matches the input. The computer then decides either to unlock the task or to continue restricting access [1]. This basic protocol, which all biometric systems follow, is represented schematically in Figure 2 below [6].

Figure 2: This schematic details the protocol that every biometric system follows, beginning with an input of traits, followed by extraction of unique features, a comparison of those features against a database, and a final decision [6].
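
To make the distinction between verification (one-to-one) and identification (one-to-many) concrete, the following minimal Python sketch compares a freshly scanned feature vector against stored templates. The cosine-similarity measure, the 0.9 threshold, and the tiny example database are illustrative assumptions, not details of any real biometric product.

```python
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify(live_scan, enrolled_template, threshold=0.9):
    """Verification: a one-to-one comparison against a single enrolled template."""
    return similarity(live_scan, enrolled_template) >= threshold

def identify(live_scan, database, threshold=0.9):
    """Identification: a one-to-many comparison to find the best-matching identity, if any."""
    best_id, best_score = None, 0.0
    for user_id, template in database.items():
        score = similarity(live_scan, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None

# Enrollment stores a template; later scans are compared against it.
database = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.5]}
print(verify([0.88, 0.12, 0.41], database["alice"]))  # True
print(identify([0.21, 0.79, 0.52], database))         # "bob"
```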

Pro/Con Analysis of Different Types of Biodata

There are many types of biodata that qualify for use in biometric technology. The most common types are voice recognition, facial recognition, fingerprinting and iris scanning. There are advantages and disadvantages to each type, and the type of biodata chosen depends on the priorities of the company.

Fingerprinting, for example, is easy to install and offers reliable accuracy for a non-invasive feature [1]. Facial recognition provides greater ease of interaction, but can be less reliable and more subjective than other forms of biodata; its accuracy also drops significantly with the user's distance from the camera and with background lighting. Voice recognition likewise offers an easy user interaction, but is less accurate because of fluctuations due to illness or aging [1]. Iris scanning is perhaps the most futuristic form of biodata and is best suited for identification systems. In iris scanning, a camera uses visible and near-infrared light to photograph the iris, analyzes the patterns found in the image, and converts them into a code [1]. These systems are slightly uncomfortable to use, but the iris is resistant to change and best for longevity [1].

Applications of Biometric Technology

Regardless of the type of biodata, biometric technology systems are becoming increasingly popular. For example, many cell phones have fingerprint or facial recognition software which unlocks the phone if the user is verified [1]. Some voting centers have adopted biometrics as a means to prevent proxy voting, and many benefit payment centers have used biometric systems to protect against false claims. Banks are moving towards applications where users can pay with a glance, and many companies have identification systems for classified areas only open to cleared personnel [1].

All of these uses for biometrics seem relatively harmless, but biometric systems are also being used for more serious, large-scale purposes. Many borders are implementing facial recognition systems to expedite border crossings [1]; in these settings, a family's safety or future could depend on the accuracy of the biometric technology. In another example, during the 2001 Super Bowl, facial recognition technology was tested on everyone in the stadium in an attempt to catch criminals. However, the public was not notified, and the local government was forced to apologize and revert to its old security measures [7].

Discussion

The rising popularity of biometric systems and their application in mainstream society has led to many ethical concerns that need to be addressed by governments or individual companies. This section addresses privacy of biodata and the presence of discrimination in biometric systems.

Privacy

Storage

Data storage is difficult to navigate for any type of information, but this is especially true for biometrics. Because the information collected consists of biological features and behaviors unique to each individual, it is essential that the data be secure. However, "once information is collected, such information will be used" [8]. From the hacks of Chipotle to LinkedIn, regardless of the measures taken to protect and store users' information, data can always be breached. Since biometrics is a rapidly growing method of authentication, the way in which the data is stored is crucial to users' privacy.

Hackers and scammers evolve in parallel with technology; roughly 5.6 million people's fingerprints were jeopardized when the Office of Personnel Management was hacked in 2015 [9]. Although fingerprinting is more secure than PINs, researchers from a mobile security firm "were able to break into Apple's Touch ID system with a small piece of Play Doh" [9]. These incidents illustrate that with a simple hack, large amounts of data can be compromised and used without user consent. This situation is especially dangerous because with traditional authentication methods such as a PIN, the code can be changed if an unwanted login is detected. However, information such as fingerprints, faces, and voices cannot be changed. Extra verification steps can make stolen biometric data harder to use, but the core of the issue is the secure storage of the data. Companies have an ethical obligation to their users to protect their data, but many companies are setting this aside in favor of technological innovation.
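
One reason breaches of biodata are so serious is that, unlike a password, a biometric trait cannot be reset, and a fresh scan never matches its stored template exactly, so the system must keep a matchable representation and accept "close enough" inputs. The minimal sketch below illustrates that contrast; the hash function, feature format, tolerance value, and example data are assumptions made for illustration, not details of any particular product, and a real password system would additionally salt and use a slow key-derivation function.

```python
import hashlib

def check_password(entered, stored_hash):
    # Passwords can be compared as exact hashes; if a hash leaks, the user
    # simply chooses a new password and the old credential becomes useless.
    return hashlib.sha256(entered.encode()).hexdigest() == stored_hash

def check_fingerprint(live_features, stored_template, tolerance=0.05):
    # Biometric scans vary between captures, so the system keeps a matchable
    # template and accepts anything within a tolerance. If that template
    # leaks, the underlying trait cannot be reissued.
    distance = sum(abs(a - b) for a, b in zip(live_features, stored_template)) / len(stored_template)
    return distance <= tolerance

stored_hash = hashlib.sha256(b"correct horse").hexdigest()
print(check_password("correct horse", stored_hash))               # True
print(check_fingerprint([0.51, 0.32, 0.77], [0.50, 0.30, 0.79]))  # True (within tolerance)
```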

Third Parties

The storage of data in large databases makes biodata more susceptible to being accessed, rightfully or not, by third parties who wish to use large amounts of personal data to influence future business decisions. For example, India's Aadhaar is the largest biometric database [8]. Originally, participation in the database was voluntary. However, the database became a prerequisite for basic services such as receiving school meals or opening bank accounts. Participation therefore became mandatory in order to function in society, and citizens had no choice but to provide their biometric data. Numerous non-governmental organizations were able to access the database for various purposes, all without user consent. Citizens were obliged to add their data to the database but had no voice in who received it. The database did provide an easy and consistent method of authentication and identification, but because it was so widely used, citizens' privacy was compromised.

Beyond unauthorized access, a user's biometric data can also be used and manipulated for private motives. Targeted advertising is a common example. Because characteristics such as age and gender can be automatically detected by face or voice recognition software, products and services can be advertised to select individuals [2]. Algorithmically targeted content can give individuals a skewed picture of the world around them, reinforcing various forms of bias. The ability of private organizations to use biometric data to manipulate individuals is ethically contentious.

Legality & Regulation

Biometric technology is growing rapidly, and governments are struggling to keep up, turning a blind eye to ethical considerations. There is a lack of regulation regarding the privacy and use of biometric data. To begin with, biometric identification methods are legal in public in 48 states, and are legal in all states for law enforcement purposes [9]. These minimal regulations govern how biometric data can be recorded, but there is little regulation of how that data is stored and protected, and little regulation addressing situations in which biometric data is breached.

It is difficult to pinpoint what falls under the umbrella of biometrics and whether all types of biometric data should be treated the same. An analysis of European Union (EU) regulations found that it is important to have a clear distinction between the various types of biometric data [8]. The same analysis found that EU regulations fail to provide clear rules and protections for the fundamental right to privacy when it comes to biometric data. Legally, it is generally agreed that different types of biodata should be treated differently, but governments have been unable to enact laws that respect those distinctions.

Since governments have poorly regulated the use of biometrics, the technology industry has had the freedom to introduce the technology to the masses without concern for the user. For example, technology companies bury clauses in their Terms and Conditions describing the contentious use and storage of users' biometric data [9]. Because most users fail to read the Terms and Conditions, they consent to the use of their data without being aware of how it is used or where it is sent. Without clear regulation, companies and organizations are not transparent about the storage and use of biometric data. This method of obtaining consent is ethically contentious because biodata can be used to access bank accounts, emails, and other sensitive materials.

Accessibility and Discrimination

Non-inclusive Biometrics

Ethical concerns surrounding biometric systems extend beyond user privacy and data storage to questions about the inclusivity of the technologies as they grow. Even the initial step of enrolling biometric data can prove difficult for certain groups of people, producing a failure to enroll (FTE) problem. FTE rates for fingerprints are typically higher for elderly users, who sometimes have poor circulation, and for some construction workers and artisans, whose fingerprint ridges may be worn down by heavy work with the hands [4]. Other biometric technologies, including facial scanners and iris scanners, suffer from a preprogrammed favoring of "prototypical whiteness," in which people with typical Caucasian features are privileged because the technology sets those features as the norm [4]. For example, FTE rates for facial recognition tend to be higher for very dark-skinned people because video cameras are optimized for lighter skin. The same pattern appeared in early iris scan technology, which was based on grayscale capture with 256 shades of gray; this accounted for a range of light-colored eyes but left dark irises clustered at one end of the spectrum [4]. Non-inclusive technology and design lead to inconvenience and frustration for minority groups, and these problems will amplify as biometrics becomes more prevalent.

Issues regarding accessibility are rarely considered because biometric systems seem more intuitive than typing in PINs and passwords. However, different biometric authentication systems can present multiple problems to people with different disabilities. A study involving both a control group and a disabled group found that speaker recognition was difficult for many people with cognitive learning disabilities and for people with hand and arm disabilities, because users had to press a button while speaking. Table 1 shows this data: while none of the people in the control group produced low-quality voice samples, higher percentages of people in the disabled group recorded low-quality voice samples during the evaluation [11].

Table 1: The percentages of participants in each category that recorded low quality voice samples during the evaluation. CRMF represents the group of participants with disabilities [11].

Face verification is also difficult for certain groups that may have trouble taking a selfie [11]. The UK Passport Service ran a large-scale trial and found that 0.62% of the disabled group it sampled completely failed to enroll any biometric data. While this percentage may seem low, across a large population it translates to a much larger number of people, raising the ethical concern that many people may have difficulty using certain kinds of biometric authentication [4].
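
To make the scale concrete, a quick back-of-the-envelope calculation shows how a seemingly small FTE rate becomes a large absolute number. The population figure below is a hypothetical assumption chosen for illustration, not a number from the trial.

```python
# Back-of-the-envelope: a 0.62% failure-to-enroll rate applied to a
# hypothetical population of 10 million users (the population size is an
# illustrative assumption, not a figure from the UK Passport Service trial).
fte_rate = 0.0062
population = 10_000_000
print(round(fte_rate * population))  # 62000 people who could not enroll at all
```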

Many groups, including people with physical or learning disabilities, people suffering from mental illness, the elderly, and people of minority races or religions, may become increasingly socially excluded as biometrics grows and expands [10]. They can fall behind for various reasons: aversion to new technology, distrust of technology, or difficulty enrolling and verifying with biometric data. If biometric authentication becomes a requirement for receiving government assistance, social services, and benefits, many of these groups will become even further disadvantaged [10]. As biometric technology becomes more popular, this will lead to an unethical lack of equity, because certain populations will be excluded from benefits and services that are available to others.

When Biometrics Fail

Biometric authentication is probabilistic: the system estimates how likely it is that the user is who they claim to be [10]. This authentication can fail in one of two ways: it can produce a false positive (measured by the false match rate) or a false negative (measured by the false non-match rate). Each of these failures carries its own set of issues. If a false positive occurs, someone who should not have access to the restricted task or information wrongfully gains access to it, which could lead to impersonation or a catastrophic breach of data. On the other hand, a false negative prevents someone who should rightfully have access to a system or service from accessing it [10]. Occurrences of false negatives and false positives have an inverse relationship, and the designer of each biometric system decides in which direction, and to what degree, the system leans, as illustrated in the sketch below. Most designers prioritize fewer false positives, which results in more false negatives that can negatively impact the lives of regular users. Because of non-inclusive design, certain minority populations are disproportionately affected by false non-matches and thus suffer more from the prevalence of false negative failures [10]. This is discriminatory because certain populations face the problem disproportionately more often than other groups.
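
The sketch below illustrates the threshold trade-off with toy match scores; the score values and thresholds are invented for illustration and are not drawn from any real system.

```python
# Toy illustration of how the decision threshold trades false matches
# (false positives) against false non-matches (false negatives).

genuine_scores  = [0.91, 0.87, 0.95, 0.72, 0.89]   # same-person comparisons
impostor_scores = [0.42, 0.66, 0.80, 0.35, 0.58]   # different-person comparisons

def rates(threshold):
    # False match rate: impostors wrongly accepted at this threshold.
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # False non-match rate: genuine users wrongly rejected.
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr

for t in (0.6, 0.75, 0.9):
    fmr, fnmr = rates(t)
    print(f"threshold={t}: FMR={fmr:.0%}, FNMR={fnmr:.0%}")
# Raising the threshold lowers the false match rate but raises the false
# non-match rate: the inverse relationship designers must balance.
```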

Biometrics at the Border

Biometrics are also becoming more widespread at the international level. Many countries are considering implementing a national identity card with biometrics built in. Such cards would be useful at national borders for minimizing terrorism and illegal immigration, but they also raise important ethical questions about identity. Some officials argue that relying on biometric identifiers at the border could diminish racial profiling, yet the technology is flawed: the algorithms and search parameters were created by humans [4]. Because of this, facial recognition technology may only find matches reliably for specific racial and gender groups, and, as mentioned earlier, many other groups will have a high failure to enroll rate, disadvantaging them as biometric technology becomes more ubiquitous [4]. In a high-security, regulated setting such as a national border, a false negative can have severe consequences: passengers could have their identity questioned and be wrongfully detained, questioned, investigated, and even deported. Compounding this issue, the discriminatory algorithms of biometrics would cause minorities to suffer at a much higher rate, undeservedly putting them through additional stressful circumstances.

Conclusion and Recommendations

To resolve the ethical concerns discussed above, biometric technology needs to be improved. Biometrics is a growing field, and biometric systems are increasingly being used in a variety of settings. Biometric technology has the potential to be a more efficient and secure method of confirming identity and authenticating users by removing the problems associated with forgotten passwords and by eliminating the common security breaches that occur with stolen PINs and passwords. However, the ethical complications of biometric technology must also be considered. Biometric traits are unique to each individual and thus closely tied to a person's identity, so privacy becomes a concern. Once biometric data is enrolled and stored, third parties often gain access to the information without user knowledge, and consistent standards have not yet been developed to regulate this data. Additionally, the algorithms and technology used to design biometric systems are not inclusive. These systems typically have a higher rate of failure for certain populations, including people of color and people with disabilities. As biometric authentication moves toward becoming a societal necessity, these groups run the risk of being increasingly excluded and profiled.

Although the possibility of a data breach always exists, measures should be taken to increase the security and privacy of biometric data. The method by which the data is stored and protected should be thoroughly examined. The creators of a database must consider what type of biometric data is being collected and stored, why the data needs to be collected and stored, how the data is protected and secured, and who has access to it. With these considerations in mind, the creators can improve the security of the biometric data. Once the storage is sufficiently secure, companies and organizations should be cautious in granting access to the data. Most importantly, those companies and organizations should be transparent about how the data is used. To hold companies and organizations to this standard, regulations must be put into place. Governments must hold companies and organizations accountable by providing clear and thorough rules for the collection, storage, and use of biometric data, along with a clear and thorough system to penalize those that violate the regulations.

Additionally, if biometric technologies are implemented, they should be made as accessible as possible. For example, voice recognition technologies should not require a button to be held while a person is speaking, because that complicates the interaction for users with limited hand or arm mobility; a single tap or click would be more effective. Designers need to test with a wide variety of users throughout their process so they can avoid such accessibility problems. Furthermore, future biometric systems should not be designed from the perspective of white prototypicality. Matching and identification algorithms must be improved to account for a diverse array of ethnic features. A biometric system should not be implemented anywhere unless it has roughly even failure to enroll (FTE) and false negative rates across different races and groups of people; a simple per-group audit of the kind sketched below can check this. Because occasional FTE events and false negatives are unavoidable, consistent procedures for how to proceed when one occurs must be defined. For example, if biometric authentication fails at an airport, a simple process to re-confirm identity should be followed instead of allowing airport personnel to step in and potentially take wrongful actions based on biases. As biometric systems become more ubiquitous in society, they have great potential to create hassle-free experiences, but care must be taken to ensure that biometric technology can be appreciated equally by all users.
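
As a sketch of what such a pre-deployment check might look like, the following compares failure rates across groups and flags large disparities. The group labels, counts, and the 1.5x flagging ratio are all hypothetical assumptions, not an established auditing standard.

```python
# A minimal per-group audit: compare failure-to-enroll (or false non-match)
# rates across demographic groups and flag large gaps. Group names and
# counts are hypothetical.

trial_results = {
    # group: (failures, attempts)
    "group_a": (12, 5000),
    "group_b": (90, 5000),
}

def audit(results, max_ratio=1.5):
    rates = {g: failures / attempts for g, (failures, attempts) in results.items()}
    baseline = min(rates.values())
    flagged = {g: r for g, r in rates.items() if baseline and r / baseline > max_ratio}
    return rates, flagged

rates, flagged = audit(trial_results)
print(rates)    # {'group_a': 0.0024, 'group_b': 0.018}
print(flagged)  # group_b's rate is several times the baseline and is flagged
```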

References

[1] P. Sareen, “Biometrics — Introduction, Characteristics, Basic Technique, Its Types and Various Performance Measures,” International Journal of Emerging Research in Management & Technology, vol. 3, no. 4, pp. 109–119, Apr 2014. [Online]. Available: https://www.ermt.net/docs/papers/Volume_3/4_April2014/V3N4-120.pdf. [Accessed: May 20, 2018].

[2] Y. Sun, “Demographic Analysis From Biometric Data: Achievements, Challenges, and New Frontiers,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 2, pp. 332–351, Feb 2018. [Online]. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7855777. [Accessed: May 20, 2018].

[3] M. Trauring, “Automatic Comparison of Finger-ridge Patterns,” Nature, vol. 197, pp. 938–940, Mar 1963. [Online]. Available: https://www.nature.com/articles/197938a0. [Accessed: May 20, 2018].

[4] S. Browne, “Digital Epidermalization: Race, Identity and Biometrics,” Critical Sociology, vol. 36, no. 1, p. 131–150, Feb 2010. [Online]. Available: http://journals.sagepub.com/doi/pdf/10.1177/0896920509347144. [Accessed: May 20, 2018].

[5] G. Naveed, “Biometric Authentication in Cloud Computing,” Journal of Biometrics & Biostatistics, vol. 6, p. 258, Oct 2015. [Online]. Available: https://www.omicsonline.org/open-access/biometric-authentication-in-cloud-computing-2155-6180-1000258.php?aid=65608. [Accessed: May 20, 2018].

[6] D. Thakkar, “Unimodal Biometrics vs. Multimodal Biometrics,” Bayometric. [Online].
