Advancements in Computer-Based Facial Recognition Systems

From the RAND tablet to differentiating identical twins!

Dhairya Parikh
Published in Coinmonks
7 min read · Jun 30, 2018


Yes, you guessed it right: we are going to talk about computer-based facial recognition systems today. It is one of the most hyped technologies that mobile phone companies are currently trying to improve. Facial recognition is coming into play in the industrial sector too, but the main setback all these systems face is accuracy: they are still not very accurate with the features used for differentiation. So let’s start with what facial recognition actually is.

Let’s have a look at the Wikipedia definition first:

“A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source. There are multiple methods by which facial recognition systems work, but in general, they work by comparing selected facial features from a given image with faces within a database.”

Link to the Wikipedia page: https://en.wikipedia.org/wiki/Facial_recognition_system

As the definition suggests, it is simply a system that recognizes a person’s face. In this blog, we will trace, step by step, how this system advanced over time.

This is where it all started!

RAND Tablet by Bledsoe (1960s)

It was the first semi-automated Facial recognition system!

The RAND Tablet

Many would say that the father of facial recognition was Woodrow Wilson Bledsoe. Working in the 1960s, Bledsoe developed a system that could classify photos of faces by hand using what’s known as a RAND tablet, a device that people could use to input horizontal and vertical coordinates on a grid using a stylus that emitted electromagnetic pulses. The system could be used to manually record the coordinate locations of various facial features including the eyes, nose, hairline and mouth.

These metrics could then be inserted into a database. Then, when the system was given a new photograph of an individual, it was able to retrieve the image from the database that most closely resembled that individual. At the time, face recognition was unfortunately limited severely by the technology of the era and by computer processing power. However, it was an important first step in proving that face recognition was a viable biometric.
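Bledsoe’s matching step can be sketched as a nearest-neighbour search over the stored coordinates. The names and landmark values below are invented purely for illustration; his actual system, of course, long predates Python:

```python
import math

# Hypothetical database: person -> manually recorded (x, y) landmark
# coordinates (eyes, nose, mouth, hairline), as entered on a RAND tablet grid.
database = {
    "alice": [(30, 40), (50, 40), (40, 55), (40, 70), (40, 20)],
    "bob":   [(28, 42), (52, 42), (40, 60), (40, 75), (40, 18)],
}

def distance(a, b):
    """Sum of Euclidean distances between corresponding landmarks."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def closest_match(query):
    """Return the stored person whose landmarks best match the query photo."""
    return min(database, key=lambda name: distance(database[name], query))

query = [(29, 41), (51, 41), (40, 56), (40, 71), (40, 19)]
print(closest_match(query))  # prints "alice"
```

The key limitation is visible right in the sketch: someone had to measure and type in every coordinate by hand before any matching could happen.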

21 Facial Markers (1970s)

Using 21 subjective facial markers for automated face recognition

Facial Markers Example

Goldstein, Harmon, and Lesk used 21 specific subjective markers, such as hair color and lip thickness, to automate recognition. The measurements and locations needed to be computed manually, making the program very labor-intensive. But it offered better accuracy than the RAND tablet technology.

Eigenfaces (Late 1980s-Early 1990s)

Using Linear Algebra for Facial recognition!

In 1988, Sirovich and Kirby began applying linear algebra to the problem of facial recognition. What became known as the Eigenface approach started as a search for a low-dimensional representation of facial images. Sirovich and Kirby were able to show that feature analysis on a collection of facial images could form a set of basic features. They were also able to show that fewer than one hundred values were required to accurately code a normalized face image.
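The core idea, finding a low-dimensional basis for face images, can be sketched in a few lines of linear algebra. The data below is random noise standing in for real normalized face photos, and keeping 20 components is an arbitrary choice for the sketch; Sirovich and Kirby’s actual result was that fewer than one hundred such values suffice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face dataset: 50 "images" of 16x16 = 256 pixels each,
# flattened into rows. Real eigenface work used normalized face photographs.
faces = rng.random((50, 256))

# Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The principal directions of the centered data (computed here via SVD)
# are the "eigenfaces": an orthonormal basis for face space.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                      # keep the 20 leading components

# Any face can now be coded as 20 numbers instead of 256 pixels.
weights = centered @ eigenfaces.T         # shape (50, 20)
reconstruction = weights @ eigenfaces + mean_face
print(weights.shape)                      # prints (50, 20)
```

The compression is the point: matching can then be done on the short weight vectors instead of raw pixels.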

In 1991, Turk and Pentland expanded upon the Eigenface approach by discovering how to detect faces within images. This led to the first instances of automatic face recognition. Their approach was constrained by technological and environmental factors, but it was a significant breakthrough in proving the feasibility of automatic facial recognition.

FERET Program (1993–2000s)

Building up a commercial facial recognition market through data!

FERET

The Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology rolled out the Face Recognition Technology (FERET) program beginning in the 1990s in order to encourage the commercial face recognition market. The project involved creating a database of facial images. The database was updated in 2003 to include high-resolution 24-bit color versions of the images. Included in the test set were 2,413 still facial images representing 856 people. The hope was that a large database of test images for facial recognition would inspire innovation that might result in more powerful facial recognition technology.

Face Recognition Vendor Tests (2000s)

Evaluation of commercially available Facial recognition systems.

The National Institute of Standards and Technology (NIST) began Face Recognition Vendor Tests (FRVT) in the early 2000s. Building on FERET, FRVTs were designed to provide independent government evaluations of facial recognition systems that were commercially available, as well as prototype technologies. These evaluations were designed to provide law enforcement agencies and the U.S. government with information necessary to determine the best ways to deploy facial recognition technology.

FRGC (2006)

A showcase of some of the most advanced facial recognition algorithms of that era, all in one place!

The Face Recognition Grand Challenge (FRGC) evaluated the latest face recognition algorithms available. High-resolution face images, 3D face scans, and iris images were used in the tests. The results indicated that the new algorithms were 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins. But even these advanced systems had many limitations, and their results came at a high cost.

Social Media (2010–present)

The famous photo tagging feature introduced by Facebook!

photo tagging

The year 2010 marked a great change in social media platforms all over the world, when Facebook introduced the photo tagging feature for its users. When a person uploads a photo, the system automatically recognizes the face of everyone present in that photo, but only if that person is part of the Facebook family.

But this feature was not at all accurate, and had a rather hit-and-miss success rate. It had difficulty recognizing side profiles, blurred images, and so on. So this is when Facebook got serious about developing this technology, along with other tech giants like Google and IBM.

DeepFace (2014–15)

A deep learning facial recognition system

How it works!

After the failure of its photo tagging feature, Facebook got serious about this technology, started research in Facebook’s AI lab, and named the project DeepFace.

This technology is based on a sub-branch of machine learning called deep learning. It identifies human faces in digital images. It employs a nine-layer neural network with over 120 million connection weights, and was trained on four million images uploaded by Facebook users.

The interesting thing is that the four million images used to train this system were taken from the profiles of just 4,030 active Facebook users! The accuracy of this system is 97.35%, which is still 0.28% less than that of a human.
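While DeepFace’s actual network and training data are proprietary, the verification idea common to such systems, mapping each face to a feature vector and comparing the two vectors, can be sketched as follows. The random projection here is only a stand-in for the trained nine-layer network, and the 0.8 threshold is an arbitrary illustrative value, not DeepFace’s:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the trained network: a fixed random projection mapping a
# flattened 64x64 = 4096-pixel face to a 128-dimensional feature vector.
PROJECTION = rng.standard_normal((4096, 128))

def embed(face_pixels):
    """Map a face image to a unit-length embedding vector."""
    v = face_pixels @ PROJECTION
    return v / np.linalg.norm(v)

def same_person(face_a, face_b, threshold=0.8):
    """Verify identity by cosine similarity of the two embeddings."""
    return float(embed(face_a) @ embed(face_b)) > threshold

face = rng.standard_normal(4096)
noisy = face + rng.normal(scale=0.05, size=4096)  # same face, slight noise
other = rng.standard_normal(4096)                 # a different face

print(same_person(face, noisy))   # True
print(same_person(face, other))   # False
```

The appeal of this design is that the expensive part (the network) runs once per image; verification afterwards is just a dot product, which is why it scales to billions of photos.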

Yes! The humans win. :)

FaceIt ARGUS

Facial recognition using your skin!

The technology used by FaceIt shows how limited the facial recognition technologies we have been using really are.

It works on the concept of “surface texture analysis”, creating a skin print of a user rather than the face print that current systems rely on. This is done by analyzing the texture of the skin, which is unique to each human, just like a fingerprint.

This technology is so advanced that it can even differentiate between identical twins, which face-print systems really struggle to do.

This shows how facial recognition systems have advanced along with technology: computers are starting to learn like humans, and the technology keeps improving day by day.

For instance, MasterCard is currently working on authenticating payments just by taking a selfie!

In the near future, we will be using our face for everything: home security systems, payment authentication, and more.

It will be the password to everything that we use.

This is the end of my blog. If you like my content and want more of it, please support me by following me, and please leave some claps if you liked this blog!
