Did you know Facial Recognition existed in the 1960s?

Why is it gaining popularity now?

Sabina Pokhrel
Analytics Vidhya
3 min read · Nov 5, 2020


Face Recognition (Original image from Unsplash)

Originally published at www.xailient.com/blog.

The concept of face recognition is not new, nor is its implementation. Using computers to recognize faces dates back to the 1960s.

Yes, that’s correct, I said 1960s.

“From 1964 to 1966 Woodrow W. Bledsoe, along with Helen Chan and Charles Bisson of Panoramic Research, Palo Alto, California, researched programming computers to recognize human faces (Bledsoe 1966a, 1966b; Bledsoe and Chan 1965).” [4]

Since then, face recognition has gone through many evolutions. In the early 1990s, holistic approaches dominated the face recognition community. During this period, low-dimensional features of facial images were derived using the Eigenfaces approach.

Image shows top 36 Eigenfaces [5]
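To make the Eigenfaces idea concrete, here is a minimal sketch using scikit-learn's PCA. The `faces` array, the 64x64 image size, and the choice of 36 components are placeholder assumptions for illustration, not part of the original 1990s work.

```python
# A minimal Eigenfaces sketch, assuming a set of aligned, same-sized
# grayscale face images is already loaded into `faces`
# (shape: n_images x height x width).
import numpy as np
from sklearn.decomposition import PCA

faces = np.random.rand(100, 64, 64)       # placeholder for real face images
X = faces.reshape(len(faces), -1)         # flatten each image into one vector

pca = PCA(n_components=36, whiten=True)   # keep the top 36 components
X_low = pca.fit_transform(X)              # low-dimensional face features

# Each principal component, reshaped back to image size, is an "eigenface".
eigenfaces = pca.components_.reshape(-1, 64, 64)

# Recognition then reduces to comparing the low-dimensional vectors,
# e.g. nearest neighbour in this 36-dimensional space.
```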

In the early 2000s, local-feature-based face recognition was introduced, in which discriminative features were extracted using handcrafted filters such as Gabor and LBP (Local Binary Patterns).

Convolution results of a face image with two Gabor filters [3]
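Here is a small sketch of how such handcrafted local features can be computed with scikit-image. The placeholder image, the filter parameters, and the naive way the features are combined are illustrative assumptions only, not taken from the cited papers.

```python
# A minimal sketch of handcrafted local features (LBP and Gabor),
# assuming `face` is a 2-D grayscale face image.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

face = np.random.rand(64, 64)                         # placeholder face image

# LBP: encode each pixel by comparing it with its 8 neighbours,
# then describe the face by the histogram of those codes.
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

# Gabor: convolve the face with an oriented band-pass filter; the
# real and imaginary responses capture local texture and edges.
gabor_real, gabor_imag = gabor(face, frequency=0.3, theta=np.pi / 4)

# Naive concatenation just for illustration; real pipelines pool
# filter responses over local regions before building a descriptor.
feature_vector = np.concatenate([lbp_hist, gabor_real.ravel()])
```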

In the early 2010s, learning-based local descriptors were introduced, in which the local filters and encoders were learned from data rather than handcrafted.

Face recognition evolution timeline [1]

The year 2014 marked an important turning point in the history of face recognition, as it reshaped the research landscape of this technology. It was the year when the accuracy of Facebook's DeepFace model (97.35%) on the LFW benchmark dataset approached human performance (97.53%) for the first time. In just three years after this breakthrough, the accuracy of face recognition reached 99.80%.

So, what changed in all these years?

All approaches up until 2014 used one- or two-layer representations, such as filter responses, histograms of feature codes, or distributions of dictionary atoms, to recognize a human face. Deep learning-based models, however, use a cascade of multiple layers for feature extraction and transformation. The lower layers learn low-level features similar to Gabor and SIFT responses, whereas the higher layers learn higher-level abstractions. That means what different face recognition approaches could only do individually back then can now be done within a single deep learning-based approach.

Feature vectors that represent a face at different layers of a deep learning network [1]
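To illustrate the "cascade of layers" idea, here is a tiny, hypothetical CNN in PyTorch that maps a face image to an embedding vector. This is not DeepFace or any published architecture; the layer sizes, the 64x64 input, and the embedding dimension are arbitrary choices for illustration.

```python
# A minimal sketch: lower conv layers act like learned local filters,
# deeper layers form higher-level abstractions, and the final layer
# produces one embedding vector per face.
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level edges/textures
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid-level parts
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # higher-level abstractions
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, embedding_dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.embed(x)                  # one vector per face

model = TinyFaceNet()
face_batch = torch.randn(4, 1, 64, 64)        # placeholder grayscale faces
embeddings = model(face_batch)                # compare embeddings (e.g. cosine similarity) to recognize
```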

Is Facial Recognition at the peak of this evolution? Or do you believe that this is just the tip of the iceberg? Leave your thoughts as comments below.

Want to use the world’s smallest face recognition? Click here.

Or are you more interested in the smallest and fastest face detector? Then click here.

About the author

Sabina Pokhrel works at Xailient, a computer-vision start-up that has built the world’s fastest Edge-optimized object detector.

References

[1] Wang, Mei, and Weihong Deng. “Deep face recognition: A survey.” arXiv preprint arXiv:1804.06655 (2018).

[2] Zulfiqar, Maheen, et al. “Deep Face Recognition for Biometric Authentication.” 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE). IEEE, 2019.

[3] Gao, Yong, et al. “Face recognition using most discriminative local and global features.” 18th International Conference on Pattern Recognition (ICPR’06). Vol. 1. IEEE, 2006.

[4] https://www.historyofinformation.com/detail.php?id=2126

[5] https://mikedusenberry.com/on-eigenfaces
