# Eigenvectors and Eigenvalues + Face Recognition = Eigen Faces

In this article, I will try to explain things as simply as possible, simply enough for you to get the intuition of it all. To give a smooth read, I will use as few mathematical equations as possible. If you are interested in the mathematics, links to the articles I referenced are provided at the end.

So, what are Eigenvectors and Eigenvalues?

To explain this, I will use the eigenvector and eigenvalue equation stated below (I know I promised little math, but don't worry, this is all the math we need).

A * v = v * λ … (1)

Here, A is a matrix, v is a vector, and λ(lambda) is a scalar quantity. So what does this equation say exactly?

First, let's try to understand why the two sides are equal. Take the left-hand side of the equation, "A * v". This is simply the multiplication of a matrix A and a vector v; when we multiply them, we get a new vector. Let's name this new vector "b". So we have,

A * v = b … (2)

Now, for the right-hand side of (1): if it is possible to obtain vector b by multiplying the vector v by a scalar quantity λ, we have

b = v * λ … (3)

Substituting (3) into (2), we arrive back at (1). What this shows is that, for this particular vector, multiplying by the matrix has the same effect as multiplying by a scalar: the matrix only stretches or shrinks the vector without rotating it. This special vector v (special because not all vectors can do this) is known as an eigenvector, and the scalar quantity is known as its eigenvalue. Loosely speaking, the eigenvalue tells how much of the matrix's information its eigenvector captures, and in here lies the usefulness of eigenvalues and eigenvectors. For example, if one eigenvalue accounts for 0.85 of the sum of all the eigenvalues, this simply means that its eigenvector was able to retain roughly 85 percent of the information (more precisely, the variance) in the data.
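The relationship in equation (1) is easy to verify numerically. Below is a minimal sketch using NumPy; the matrix is a small, arbitrary symmetric example chosen purely for illustration:

```python
import numpy as np

# A small symmetric matrix whose eigenpairs we can inspect directly.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eig(A)

# For each pair, A @ v equals lambda * v, which is exactly equation (1).
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
    print(f"lambda = {lam:.1f}: A @ v == lambda * v holds")
```

Multiplying by A here only scales each eigenvector; no matrix multiplication is needed once we know v and λ.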

This equation basically says that we can replace a large matrix with just a few vectors and scalar values; put another way, this means dimensionality reduction. I would like to note that there are other applications of eigenvectors and eigenvalues, but for this article I will focus on dimensionality reduction.
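As a rough sketch of what this kind of dimensionality reduction looks like in practice (using randomly generated stand-in data, not real measurements), we can keep only the eigenvectors of the data's covariance matrix that have the largest eigenvalues and project onto them:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 100 samples in 5 dimensions.
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)  # center the data

# Eigen-decompose the covariance matrix (eigh is for symmetric matrices).
cov = (X.T @ X) / (len(X) - 1)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Keep the 2 eigenvectors with the largest eigenvalues and project onto them.
top = eigenvectors[:, np.argsort(eigenvalues)[::-1][:2]]
X_reduced = X @ top

print(X.shape, "->", X_reduced.shape)  # (100, 5) -> (100, 2)
```

Each sample is now described by 2 numbers instead of 5, yet the directions kept are the ones carrying the most variance.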

What is Face Recognition?

Face recognition, in my own words, is basically computers answering the question "who is this person?" based on their facial features.

Now, the question is: what do face recognition and eigenvalues and eigenvectors have in common?

Well, for starters, we all know computers do not see objects the way we do; for a computer to see anything, it has to be in numbers. To your computer, a picture is basically a matrix of integers. To let your computer know that a given picture is just the face of a friend or a family member, you have to train it on lots of face images of that friend or family member using various deep learning algorithms. Recall that I said images are just matrices; training a computer to understand the information (i.e. who this person is) carried in these matrices would be computationally expensive. So in order to make things easier and somewhat faster, we need to reduce the dimension of these images while still retaining as much information as possible.

This is where eigenvectors and eigenvalues come to the rescue; from this point on, things become easier. When the dimensions of the matrices (face images) have been reduced, each resulting eigenvector is known as an Eigen face, because when displayed it produces a ghostly face (images are provided in the research paper in the link section). Another advantage of this method is that it extracts the important features of the face, which may not be directly linked to the facial features we intuitively think of, such as eyes, noses, and lips. It is also worth noting that the number of eigenvectors equals the number of matrices (face images) present, so we pick the eigenvectors whose eigenvalues are highest.
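To make this concrete, here is a sketch of how the Eigen faces could be computed. The random arrays stand in for real flattened grayscale face photos, and the image count and size are hypothetical; it uses the classic shortcut of eigen-decomposing a small M x M matrix instead of the huge pixel-by-pixel covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for M flattened 32x32 grayscale face images.
M, h, w = 20, 32, 32
faces = rng.random((M, h * w))

# Subtract the average face so we capture variation *between* faces.
mean_face = faces.mean(axis=0)
diffs = faces - mean_face

# Shortcut: eigen-decompose the small M x M matrix diffs @ diffs.T
# instead of the (h*w) x (h*w) covariance matrix.
eigenvalues, small_vecs = np.linalg.eigh(diffs @ diffs.T)

# Map back to pixel space: each column is one Eigen face, and there are
# at most M of them, matching the number of face images.
eigenfaces = diffs.T @ small_vecs
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # normalize each Eigen face

# Keep only the Eigen faces with the highest eigenvalues.
order = np.argsort(eigenvalues)[::-1]
top_eigenfaces = eigenfaces[:, order[:5]]
print(top_eigenfaces.shape)  # (1024, 5): 5 Eigen faces of 1024 pixels each
```

With real photos, reshaping any column of `top_eigenfaces` back to 32 x 32 and displaying it would produce the ghostly face described above.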

Now, these N' eigenvectors span an N'-dimensional subspace called the "face space", which represents all possible Eigen faces. To understand the concept of a face space, let's imagine we have a smart magical board that groups objects together based on their color or some specified feature. Say we have 3 sacks of balls, where each sack is filled with balls of the same color: blue, red, and green.

When we throw these balls on the board, the board clusters all the balls of the same color together. Now let's say some balls, irrespective of color, represent the letter "A" while others do not. Here, the dynamics change: when the board clusters, we get a region on the board holding a combination of all the colors that represent the letter A, while the balls that do not represent anything are scattered about the space. Now, let's say the balls that represent A are the eigenvectors; in other words, if A represents faces, we say they are Eigen faces, and the region on the board that holds that combination of balls together is called the face space. This is somewhat expected: since faces have a similar structure, they would most likely not appear randomly in a vector space.

The next thing would be to classify a new image, that is, to answer "is this a face or not?". What we do is reduce the dimension of this image, obtain the new vector, and project it onto the face space; i.e., get a new ball, make it smaller, and throw it on the board (since the board is magical, it knows where to place the new ball). Mathematically speaking, we find the Euclidean distance between the new vector and the face space. If the minimum distance is below a specified threshold, we can say this is a face; the farther the vector is from the face space, the more confident we are that it is not a face.

Now assume the balls represent face images of different people: blue for faces of Dave, red for faces of Kene ;), and green for faces of Sam. On the magical board we would have a face space containing three different regions of Eigen faces. When we get a new ball (a new face image), we make it smaller (get the eigenvector, i.e. the Eigen face) and then project it onto the face space. As before, the board finds the Euclidean distance; if the ball lands closest to, or in, the region of the green Eigen faces, we say the new image is most likely a face image of Sam.
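Putting the last two steps together, here is a hedged sketch of the classification logic. The Eigen faces, mean face, stored per-person weights, and threshold value are all made-up placeholders for illustration, not numbers from the original paper:

```python
import numpy as np

rng = np.random.default_rng(7)
d, k = 1024, 5  # hypothetical pixel count and number of Eigen faces

# Hypothetical orthonormal Eigen faces (QR gives orthonormal columns).
eigenfaces, _ = np.linalg.qr(rng.normal(size=(d, k)))
mean_face = rng.random(d)

# Known people: each name maps to the average face-space weights of
# that person's training photos (random placeholders here).
known = {
    "Dave": rng.normal(size=k),
    "Kene": rng.normal(size=k),
    "Sam":  rng.normal(size=k),
}

def classify(image, threshold=10.0):
    """Project a new image onto the face space and find the nearest person."""
    # Projection onto the face space: the image's coordinates ("weights").
    weights = eigenfaces.T @ (image - mean_face)
    # Distance from the image to the face space decides face vs. not a face.
    reconstruction = mean_face + eigenfaces @ weights
    if np.linalg.norm(image - reconstruction) > threshold:
        return "not a face"
    # Otherwise pick the person whose stored weights are closest (Euclidean).
    return min(known, key=lambda name: np.linalg.norm(weights - known[name]))
```

An image lying near Sam's region of the face space would come back labeled "Sam", while a vector far from the face space altogether would be rejected as "not a face".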

That's it: basically, we are just applying eigenvectors and eigenvalues in the field of face recognition.

Links (References):

Research Paper:

## The Startup


Written by

## Kenechi Ojukwu

A software developer interested in artificial intelligence, web development and other related fields.
