Facial recognition & Facebook
As social animals, one of the most important human skills is the ability to recognize faces: your family, your friends, a celebrity, and so on. Our life in society depends strongly on this, and it is really impressive how well we do it. In fact, this is the result of millions of years of evolution, so that a large part of our brain is "hardwired" to identify faces. Ever wondered why we sometimes see faces in clouds, rocks, hurricanes, etc.? Well, things start to make sense once you learn facts like this about our brain.
In the context of computer vision, even differentiating between absolutely distinct objects (e.g. a banana and an apple) is not a trivial task, requiring multiple color and spatial analyses. And then Facebook shows up and starts asking "hey, isn't that you in this photo?". More than that, it is almost always right. What the heck, how does it do that?!
Yes, machine learning once again appears for the win. Traditionally, approaches based on edge detection, filters, transforms (Fourier, wavelets, etc.), areas, distances, and angles were (and still are) used for general object recognition. In the case of face identification, the "classic" approaches consisted of locating eyes, nose, chin, and ears in frontal images in order to say "here, there is a face in this image". But this solves only part of the problem, and such solutions are not very robust to changes in angle, position, illumination, and so on. As quoted in []:
“the variations between the images of the same face due to illumination and viewing direction are almost always larger than image variations due to change in face identity.”
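As an illustration of those classic, hand-engineered techniques, here is a minimal sketch of edge detection using the Sobel operator, one of the standard filters mentioned above (the example image and threshold-free output are purely illustrative, not part of any production pipeline):

```python
import numpy as np

def sobel_edges(image):
    """Apply the Sobel operator to a grayscale image (2-D float array)
    and return the gradient magnitude at each interior pixel."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                 # vertical gradient kernel
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.sqrt(gx ** 2 + gy ** 2)

# A tiny synthetic image: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
# The vertical boundary between the halves shows up as strong gradient
# magnitudes, while the flat regions produce zeros.
```

Stacking many such filters, plus geometric measurements (distances and angles between detected eyes, nose, and chin), is roughly what the pre-deep-learning pipelines did, which is exactly why they were brittle under changes in pose and lighting.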
After detecting a face, in order to compare it with others it is first necessary to align and represent it properly. For these steps Facebook's solution makes use of neural networks. Composed of 9 layers and more than 120 million parameters, this network is trained on a dataset of 4 million facial images belonging to more than 4,000 identities [].
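Once each face has been mapped to a numeric representation (an embedding vector), verifying whether two photos show the same person reduces to comparing vectors. The following is a minimal sketch of that comparison step, assuming cosine similarity and a hand-picked threshold; the toy 4-D vectors and the `0.8` cutoff are illustrative assumptions, not values from Facebook's actual system (a real network outputs hundreds of dimensions and the threshold is tuned on a validation set):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical
    direction, 0.0 = orthogonal)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(emb_a, emb_b, threshold=0.8):
    """Decide whether two face embeddings belong to the same person.
    The threshold here is an illustrative assumption."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy 4-D embeddings standing in for a network's output.
alice_photo_1 = np.array([0.90, 0.10, 0.30, 0.20])
alice_photo_2 = np.array([0.85, 0.15, 0.25, 0.30])
bob_photo     = np.array([0.10, 0.90, 0.20, 0.80])

# Two photos of the same person land close together in embedding
# space; photos of different people land far apart.
```

The whole point of training the deep network is to make this simple geometric test reliable: embeddings of the same face end up near each other regardless of pose or lighting, so the hard variation problem quoted above is absorbed by the representation.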

On the Labeled Faces in the Wild (LFW) dataset, composed of 13,000 photos of celebrities with different clothes, hairstyles, etc., it yields an accuracy above 97%, pretty similar to human performance.
Besides obvious applications such as surveillance and fun (the Facebook social network itself), this can also be useful for privacy. For example, with alerts such as "hey, a photo of you was just uploaded at this link. Are you OK with that?". As future work, they also intend to move toward understanding empathy by identifying emotions, a type of information that could be helpful for many applications, such as the ones previously discussed here for improving the quality of life of seniors, for example.
REFERENCES
http://fortune.com/2015/06/15/facebook-ai-moments/
Taigman, Y., Yang, M., Ranzato, M., Wolf, L. "DeepFace: Closing the Gap to Human-Level Performance in Face Verification", CVPR 2014.