Biometric authentication in a nutshell

Gida Fábián · Cursor Insight · Jan 29, 2020

A little bit of history and perspective

If you hear the word “biometric”, you probably first think of unlocking your phone with a fingerprint or your face. The tech-savvy would add that there is also retina scanning, palm scanning, identification based on hand geometry, and so on. The idea of telling who you are by the tip of your finger, the shape of your hand or the tone of your voice is not at all new. People started talking about biometric technologies when they became part of their daily lives: the first fingerprint-sensor phone they had, the first Face ID phone they owned and so on. This progress was a response to the threat that passwords posed in the eyes of cybersecurity specialists. With the spread of the internet, identity theft and stolen credentials surged, and they demanded an answer. That answer was (or rather, is) two- or multi-factor authentication. It doesn’t necessarily need to be biometric: it could be a password plus a phone number, a PIN plus an app confirmation, and so on. But biometric solutions add an extra layer of security because they are harder to steal or fake.

Now, we might ask ourselves: if they are so much more secure, why aren’t biometric solutions replacing binary credentials (passwords and PINs) entirely? My belief is that it is exactly because they are not binary. Let’s say you have a four-character password: DV18. Whenever you log in with this password, the system checks whether it is the password assigned to that specific account. Every time, it is a yes-or-no question (binary). A biometric method, on the other hand, never gives an exact yes or no: it is the result of a probability comparison. The scale runs from 0 to 100%, expressing how likely it is that a given sample (fingerprint, picture of a face, retina pattern) belongs to the specific account the login tries to access.
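To make the contrast concrete, here is a minimal sketch in Python: the password check is an exact, binary comparison, while the biometric check turns a matcher’s similarity score into a decision with a tunable threshold. The similarity value and the 0.92 threshold are illustrative assumptions, not parameters of any real system.

```python
import hmac

def password_check(stored: str, submitted: str) -> bool:
    # Binary: the answer is exactly yes or no.
    # compare_digest does the exact comparison in constant time.
    return hmac.compare_digest(stored, submitted)

def biometric_check(similarity: float, threshold: float = 0.92) -> bool:
    # Probabilistic: a matcher returns a score between 0 and 1, and the
    # system decides by comparing that score to a tunable threshold.
    return similarity >= threshold

print(password_check("DV18", "DV18"))  # True, nothing in between
print(biometric_check(0.87))           # False: probably the right person, but below the bar
```

Raising the threshold makes it harder for an impostor to get through, but also rejects more genuine users; that trade-off sits at the heart of every biometric deployment.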

The point I am trying to make is that these days the fuss around biometric identification and authentication is not about the viability of the technology; we know that it works. The question is which method is the most reliable, giving the highest possible probability score, while being the most difficult to fake.

Let’s compare biometric mechanisms along the following aspects:
1. whether they are physiological or behavioural
2. whether they provide a one-time check or continuous authentication

Physiological biometrics

Physiological (or static) biometrics are related to specific measurements, dimensions and characteristics of your body. We call them static because your vein patterns, fingerprints and retina are all nearly inalterable. Once they are registered in software and hardware systems, those systems will recognise them every subsequent time a user tries to access a protected device.

Fingerprint

Without any need for introduction, the fingerprint is perhaps the most common solution. It has been around the second-longest (the first being handwriting, more precisely signatures). The method of identifying criminals by their fingerprints was introduced in the 1860s by Sir William James Herschel in India, and their potential use in forensic work was first proposed in 1880. The first step towards today’s fast, seamless adoption came in 1969, when the Federal Bureau of Investigation (FBI) began its push to develop a system to automate its fingerprint identification process, which was quickly becoming overwhelming and required many man-hours. Jumping ahead to the ’00s: Toshiba came out with the Toshiba G500 and G900 in 2007. They were the first two phones with built-in fingerprint scanners, but after their announcement they never actually made it to the market. The first publicly available device with a fingerprint sensor was the Motorola Atrix, which launched in 2011 and used an optical sensor. The first device with a capacitive fingerprint sensor was the iPhone 5s in 2013.

Face recognition

The unique shape and colour of your face give a sufficient basis for identification and authentication. The first semi-automatic face recognition system was developed by Woodrow W. Bledsoe under contract to the US Government in the 1960s. It required an administrator to locate features such as the eyes, ears, nose and mouth on the photographs, so it relied entirely on the ability to extract usable feature points. The system then calculated distances and ratios to a common reference point, which were compared to the reference data.

Later, in 1988, the first semi-automated facial recognition system was deployed when the Lakewood Division of the Los Angeles County Sheriff’s Department began using composite drawings (or video images) of suspects to conduct database searches of digitized mugshots.

Face detection was pioneered by 1991, making real-time face recognition possible. Turk and Pentland discovered that, when using the eigenfaces technique, the residual error could be used to detect faces in images, which meant that reliable real-time automated face recognition was possible. They found that it was somewhat constrained by environmental factors, but the discovery sparked a large wave of interest in face recognition development.
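
As a rough illustration of the eigenfaces idea, the sketch below learns a small “face space” from a gallery of images and measures the residual error of a new image against it: a small residual suggests the image contains a face, and comparing the projection weights against stored templates gives recognition. It is a toy sketch assuming flattened, equal-sized grayscale images, not a production recognizer.

```python
import numpy as np

def train_eigenfaces(gallery: np.ndarray, n_components: int = 10):
    """gallery: (n_images, n_pixels) array of flattened grayscale face images."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # Principal components of the training faces ("eigenfaces") via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]        # (n_pixels,), (n_components, n_pixels)

def residual_error(image: np.ndarray, mean_face: np.ndarray, eigenfaces: np.ndarray) -> float:
    """Distance between an image and its projection onto the face space."""
    centered = image - mean_face
    weights = eigenfaces @ centered            # coordinates in face space
    reconstruction = eigenfaces.T @ weights    # back-projection from those coordinates
    return float(np.linalg.norm(centered - reconstruction))
```
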
Today’s Face ID is a different technology, using an infrared 3D scanner similar to Microsoft’s Kinect technology. Identification is performed on the 3D shape of the face, not on an image. The first widespread application was built into Apple’s phones (available on the market in November 2017).

Heartbeat

Everybody’s heart beats in a different rhythm. Like the iris or the fingerprint, our unique cardiac signature can be used for identification. Shockingly, it can even be done from a distance. The application is currently limited mainly to military use; however, just like the majority of other methods, heartbeat recognition will surely break into commercial use.

Hand geometry

The first systematic capture of hand images for identification purposes was recorded in 1858, when Sir William Herschel, working for the Civil Service of India, recorded a handprint on the back of each worker’s contract to distinguish employees from others who might claim to be employees when payday arrived.
The first commercial hand geometry recognition systems became available in the early 1970s, making hand geometry arguably the second commercially available biometric after the early deployments of fingerprinting in the late 1960s. These systems were implemented for three main purposes: physical access control, time and attendance, and personal identification. The US Army began testing hand geometry for use in banking around 1984, and in the same year as the hand vein scanner patent (1985), a patent for hand identification was awarded as well.

Retina scanner

A retinal scan is a biometric technique that uses the unique patterns of the blood vessels on a person’s retina. It is performed by casting an imperceptible beam of low-energy infrared light into a person’s eye as they look through the scanner’s eyepiece. This beam traces a standardized path on the retina. Because retinal blood vessels absorb light more readily than the surrounding tissue, the amount of reflection varies during the scan, and the pattern of variations is digitized and stored in a database. The idea of retinal identification was first conceived by Dr Carleton Simon and Dr Isadore Goldstein and was published in the New York State Journal of Medicine in 1935. Commercialization took place in 1981. Nowadays retina scanners are mainly used by government agencies, but commercial usage is growing as the technology becomes more and more accessible.

Iris recognition

Ophthalmologist Frank Burch proposed the concept of using iris patterns to recognize an individual in 1936. In 1986, Dr Leonard Flom and Dr Aran Safir were awarded a patent stating that the iris can be used for identification. Dr Flom then approached Dr John Daugman to develop an algorithm to automate the identification of the human iris.

Palm/Hand vein scanner

The technology uses the subcutaneous blood vessel pattern to achieve recognition. The first patent for vascular pattern recognition was awarded to Joseph Rice in 1985. As with irises and fingerprints, a person’s veins are unique. Twins don’t have identical veins, and a person’s veins differ between their left and right sides.

In conclusion, fingerprint and face recognition are the two mainstream physiological biometrics. The rest (mainly because of their heavy hardware requirements) have less presence in our everyday lives. However, the vulnerability lies at the front line: stolen databases could bring the downfall of fingerprint and face recognition. In the linked breach, metadata was involved as well. Imitating a fingerprint or creating a fake 3D map of a face is still a demanding task; nevertheless, once a hacker knows the value of the account at risk, it might be worth the effort.

Behavioural biometrics, a.k.a. behaviometrics

Behavioural biometrics are the habits we have and how we usually do things: how we walk, how we speak, how we articulate and so on. Our digital fingerprint belongs here as well. Let’s say you log in to your online bank to initiate a transfer. Do you copy-paste the account number to be credited, do you type in the numbers on the numpad, or do you use the number row above the letters on your keyboard? That specific habit is a metric that can be observed and tied to your digital fingerprint.

Although relatively secure if we put them next to passwords, static biometrics may fall behind in their ability to ensure proper security in comparison to dynamic solutions, which behavioural biometrics represent.

Let’s see a couple of examples.

Handwriting

Forensic experts are able to tell with high confidence whether a signature or a handwriting sample belongs to a certain individual. However, while forensic experts only inspect and verify authenticity on paper, contracting and other signature-demanding documentation is shifting towards digitisation. With the development of signature-recording devices, data that previously could not be examined became available, such as time, pressure and velocity. Be careful with device data though; we discussed the raw device data topic here. Thoughtfully implemented, dynamic handwriting and signature recording can lead to great products. We at Cursor Insight already have a product capable of e-signature verification.
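
To give a feel for what “dynamic” means here, the sketch below derives a few simple features (duration, average speed, average pressure) from time-stamped pen samples. The sample format is an assumption for illustration; real recording devices, and Cursor Insight’s actual feature set, differ.

```python
from dataclasses import dataclass
import math

@dataclass
class PenSample:
    t: float         # timestamp in seconds
    x: float         # pen position in device units
    y: float
    pressure: float  # normalised 0..1

def dynamic_features(samples: list[PenSample]) -> dict:
    speeds = []
    for a, b in zip(samples, samples[1:]):
        dt = b.t - a.t
        if dt > 0:
            speeds.append(math.hypot(b.x - a.x, b.y - a.y) / dt)
    return {
        "duration": samples[-1].t - samples[0].t,
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "mean_pressure": sum(s.pressure for s in samples) / len(samples),
    }
```

None of this information exists in a static, paper-based sample; that extra dimension is exactly what makes dynamic signature verification stronger.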

The first patent for the acquisition of dynamic signature information was awarded in 1977, when Veripen, Inc. patented a “Personal identification apparatus” able to acquire dynamic pressure information. The device allowed the digital capture of the dynamic characteristics of an individual’s signature. The development of this technology led to the testing of automatic handwriting verification (performed by The MITRE Corporation) for the Electronic Systems Division of the United States Air Force.

Handwriting, however, has limited room for application: processes where a signature or a handwriting sample is required. Because of the entry barriers (dedicated hardware is needed), the technology is spreading modestly.

Voice

A Swedish professor, Gunnar Fant, published a model describing the physiological components of acoustic speech production in 1960. His findings were based on the analysis of X-rays of individuals making specified phonic sounds. These findings were used to better understand the biological components of speech, a concept crucial to speaker recognition.
Fant’s original model was later expanded upon by Dr Joseph Perkell, who used motion X-rays and included the tongue and jaw. The expanded model provided a more detailed understanding of the complex behavioural and biological components of speech.

The problem with voice recognition is that with modern technology it can be faked. Earlier this year, cybercrime experts reported the first known fraud case in which criminals used AI to mislead a voice recognition system.

Mouse and keystroke dynamics

Fine motor movements captured while moving the cursor, tapping a phone or typing on a keyboard also provide a level of uniqueness that cannot be exactly repeated. The first record is J. Garcia’s identification-apparatus patent from 1986. Unlike the static identity verification systems in use today, a verifier based on dynamic keystroke characteristics allows continuous identity verification in real time throughout the work session. This leads to my next point.
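
Before that, here is a minimal sketch of the two classic keystroke-dynamics measurements: dwell time (how long a key is held down) and flight time (the gap between releasing one key and pressing the next). The event format, a list of (key, press_time, release_time) tuples with timings in seconds, is an assumption for illustration.

```python
def keystroke_features(events: list[tuple[str, float, float]]) -> dict:
    # Dwell time: how long each key is held down.
    dwell = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "mean_dwell": sum(dwell) / len(dwell),
        "mean_flight": sum(flight) / len(flight) if flight else 0.0,
    }

sample = [("d", 0.00, 0.09), ("v", 0.21, 0.28), ("1", 0.45, 0.52), ("8", 0.61, 0.70)]
print(keystroke_features(sample))  # per-user timing averages for this short burst of typing
```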

One-time vs. continuous

One-time authentication methods, usually the physiological ones, are excellent, but they only provide security at given moments. Adding second, third and further factors at login makes it more secure, and additional authentication is usually required for certain high-risk activities: initiating a transfer, paying with a card, changing a password and so on. The main problem is that everything we ask a user to do that doesn’t push them closer to their goal causes friction. Additionally, one-time authentication methods are easier to imitate, and once a fraudulent user gains access, there is nothing to stop them.

Continuous authentication is something that constantly runs in the background without additional input from the user. It simply observes the user’s behaviour and activity while identifying them (based on complex evaluation methods) at a high frequency, meaning that authentication evolves into a constant process: whatever the user does is a form of authentication. It could be an integrated system of mouse dynamics, keystroke dynamics and the user’s digital fingerprint (i.e. IP address, browser version, etc.).
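
A minimal sketch of that idea follows, assuming hypothetical per-signal scorers (mouse, keystroke, device fingerprint) that each emit a probability that the current actor is the account owner; the weights and the 0.6 threshold are made-up illustrations, not values from any real deployment.

```python
from typing import Iterable, Iterator, Tuple

def combined_score(mouse_p: float, keystroke_p: float, device_p: float,
                   weights: Tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    # Simple weighted fusion of per-signal probabilities; production systems
    # use far more elaborate models, this only illustrates the idea.
    return weights[0] * mouse_p + weights[1] * keystroke_p + weights[2] * device_p

def monitor(stream: Iterable[Tuple[float, float, float]],
            threshold: float = 0.6) -> Iterator[str]:
    """stream yields (mouse_p, keystroke_p, device_p) tuples at high frequency."""
    for mouse_p, keystroke_p, device_p in stream:
        if combined_score(mouse_p, keystroke_p, device_p) < threshold:
            yield "challenge"   # trigger step-up authentication or block the session
        else:
            yield "ok"          # the session keeps running without any friction
```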

Most behavioural biometrics are based on machine learning models, which enables them to improve authentication accuracy continuously. The longer they can record data (monitoring user behaviour), the more accurate the extracted features and unique user characteristics become, and ultimately the more accurate the authentication will be.

The Next Level: Mouse Movement Authentication

Out of all the behavioural biometric solutions, we at Cursor Insight think that mouse dynamics provide the most reliable and most accurate method. In the world of authentication, one must take into consideration the already mentioned customer friction: a falsely denied legitimate transaction could lead to customer churn. Avoiding a fraudulent transfer should be the top priority, but false positives (i.e. flagging a legitimate transfer as fraudulent) pose a serious issue too. Biometrics that do not provide accurate enough authentication could lead to somebody trying to buy a microwave online and getting their card payment declined. VISA has a study on this: 62% of cardholders will abandon a card or move it to the back of the wallet after a decline, causing lost business for the card issuer. All because of an anti-fraud system monitoring habits. Why? Let’s hop back to the money transfer. In 98 cases out of 100 you would type in the recipient’s account number on the numpad, but then you break your right arm and use the number row above the letters instead. The transfer gets declined and you can start the process all over again.

That’s why we use mouse dynamics.

Mouse dynamics are somewhat hard to record and even harder to analyse, and mimicking a user’s mouse dynamics is extremely difficult. At the same time, they can be recognised accurately and checked continuously (unlike your fingerprints, for example).
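
For illustration, here is a minimal sketch of the kind of features that can be derived from raw cursor samples of (timestamp, x, y); the feature set is a simplified assumption and not Cursor Insight’s actual method.

```python
import math

def mouse_features(samples: list[tuple[float, float, float]]) -> dict:
    """samples: (t, x, y) cursor positions; needs at least three points."""
    speeds, turns = [], []
    for (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) in zip(samples, samples[1:], samples[2:]):
        if t1 > t0:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
        # Absolute change of movement direction between consecutive segments.
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turns.append(abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))))
    return {
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "mean_turn_angle": sum(turns) / len(turns) if turns else 0.0,
    }
```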

I am not suggesting you should completely throw away all your existing authentication methods. Biometrics should work alongside them in a combined system: while physiological biometrics provide a quick check at the moment of login (even if they are relatively easy to steal), continuous authentication should be used during the user session. With each additional layer, the security increases. The more information a hacker needs to steal to hack an account, the safer your customer data is.

This article was heavily based on this listicle from biometricupdate.com; make sure you check out their summary as well.

Gida Fábián · Cursor Insight · Product Manager, with a contemplative curiosity towards the world.