Face Recognition With NodeJS! #2

Hilton W Silva
HiltonWS.com
5 min read · Jul 30, 2021
Photo by Erik Mclean on Unsplash

This is the continuation of a series on how to build a face API for a profile identifier that can recognize who is using the computer and open the corresponding Chrome profile.

To start, please read part 1 here: https://blog.hiltonws.com/face-detector-with-nodejs-profile-identifier-1-70889ac0bf3c

What we will do here

1. What are face descriptors?
2. Detect all faces in an image, getting their descriptors
3. Compare the face descriptors with pre-captured faces
4. Label each face with a specific label, and show whether person X is identified

What are face descriptors?

We will use the 68-point landmark model that comes with face-api.js. It produces a face descriptor that we can use to measure how close each face is to another; with it, we can label faces that have a matching or nearby descriptor and predict who the person is.
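Under the hood, a descriptor is a 128-value Float32Array, and two faces are compared by the Euclidean distance between their descriptors. Here is a rough sketch; these descriptor values are made up, and real ones come from the detection step below:

// Hypothetical descriptors; real ones come from face-api detections
const descriptorA = new Float32Array(128).fill(0.10);
const descriptorB = new Float32Array(128).fill(0.12);
// A smaller distance means more similar faces; 0 would be an identical descriptor
const distance = faceapi.euclideanDistance(descriptorA, descriptorB);
console.log(distance < 0.6 ? 'Probably the same person' : 'Probably someone else');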

First, we will use the following imports:

const faceapi = require('@vladmandic/face-api');
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const jpeg = require('jpeg-js');

What do these imports do?

First, face-api detects and recognizes faces; second, TensorFlow creates the tensors the models consume; with fs we read the image files from disk; and jpeg-js decodes a buffered file into a JPEG object with height, width, and pixel data.
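For example, here is a minimal sketch of what jpeg-js returns (the image path is hypothetical):

const fs = require('fs');
const jpeg = require('jpeg-js');

// Hypothetical sample image
const buffer = fs.readFileSync('./images/samples/example.jpg');
// true => decode pixel data into a typed array
const imgJpeg = jpeg.decode(buffer, true);
console.log(imgJpeg.width, imgJpeg.height, imgJpeg.data.length);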

Let’s set up those models and a matching threshold

// A distance of 0 is a total match; anything above maxDescriptorDistance is treated as unknown
const maxDescriptorDistance = 0.6;
// References to the face-api networks (loaded from disk later)
const faceDetectionLandmark68 = faceapi.nets.faceLandmark68Net;
const faceRecognition = faceapi.nets.faceRecognitionNet;
const ssdMobilenetv1 = faceapi.nets.ssdMobilenetv1;
const MODELS = './models';

These are the networks we will use; in the next steps we will use these constants to parameterize the detection and matching code.

Detect all faces of an image, getting the descriptors

We will create a function that detects all faces in an image and returns them with their descriptors:

async function detectFaces(img) {
  // Decode the image buffer to JPEG pixel data
  let imgJpeg = jpeg.decode(img, true);
  // Create a TensorFlow tensor from the pixels
  let tFrame = tf.browser.fromPixels(imgJpeg);
  // Detect all faces, with landmarks and descriptors
  let fullFaceDescriptions = await faceapi.detectAllFaces(tFrame).withFaceLandmarks().withFaceDescriptors();
  // Scale the results back to the original image dimensions
  fullFaceDescriptions = faceapi.resizeResults(fullFaceDescriptions, imgJpeg);
  return fullFaceDescriptions;
}

These lines create a TensorFlow tensor from the image. face-api.js then returns the detected faces along with their descriptors; those descriptors will be used to measure how close each detected face is to the sample faces we captured in part 1.
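As a quick sketch of how detectFaces could be called once the models are loaded (the image path is hypothetical):

// Hypothetical sample image; the models must be loaded first (see the process function below)
fs.readFile('./images/samples/example.jpg', async (_err, img) => {
  const fullFaceDescriptions = await detectFaces(img);
  console.log('Detected ' + fullFaceDescriptions.length + ' face(s)');
});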

Compare the face descriptors with pre-captured faces

Now we will search a faces folder that holds the pre-captured faces. The samples are separated so the folder contains only my face; with this, the face API can recognize whether the face in an image is me or an unknown person.
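Assuming the layout from part 1, the folder structure looks roughly like this:

images/
├── samples/   <- images in which we want to identify people
└── Hilton/    <- pre-captured faces of Hilton, used for comparison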

First, we will create the function signature

async function detectFace(fullFaceDescriptions, label, fromPath) {

It has three parameters: fullFaceDescriptions, the face descriptors we will compare against; label, e.g. Hilton, the person we are comparing; and fromPath, the path of the sample image.

  // If a match is found, stop looping over the remaining files
  let match;
  // Folder with pre-captured faces for this label
  let compare = './images/' + label + '/';
  // Read the compare folder
  fs.readdir(compare, async (_err, files) => {
    // For each file, detect the face
    files.forEach(async (file) => {
      let imgPath = compare + file;
      fs.readFile(imgPath, async (_err, img) => {
        // If a match was already found, skip the remaining files
        if (match) {
          return;
        }
        // Decode the image buffer to JPEG pixel data
        let imgJpeg = jpeg.decode(img, true);
        // Create a TensorFlow tensor from the pixels
        let tFrame = tf.browser.fromPixels(imgJpeg);
        // Detect a single face with its descriptor
        const fullFaceDescription = await faceapi.detectSingleFace(tFrame).withFaceLandmarks().withFaceDescriptor();
        if (!fullFaceDescription) {
          return;
        }
        // Generate face descriptors and labels; you can add more folders or image templates to improve matching
        const faceDescriptors = [fullFaceDescription.descriptor];
        let labeledFaceDescriptors = new faceapi.LabeledFaceDescriptors(label, faceDescriptors);

In this piece of code, we create a match variable so we can stop searching through the remaining face images once a match is found, and we read the faces folder (created in the previous article). To know who is in an image we create a labeledFaceDescriptors object, which attaches a label to each face descriptor; in this case the label always comes from our parameter.
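Note that LabeledFaceDescriptors accepts an array of descriptors, so one label can be backed by several samples. A minimal sketch, assuming descriptorA and descriptorB came from two photos of the same person:

const labeledFaceDescriptors = new faceapi.LabeledFaceDescriptors(
  'Hilton',                   // the label for this person
  [descriptorA, descriptorB]  // hypothetical descriptors from two sample photos
);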

Label each face with a specific label, and show whether person X is identified

To identify the person, we find the best match between the descriptors detected in the sample images (which may contain more than one face) and our labeled descriptors. Once a descriptor is matched, we attach the label to it, and with that we can label each face.

Let’s finish the function with some more code:

        const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, maxDescriptorDistance);
        // Find the best match from the generated labels
        const results = fullFaceDescriptions.map(element => {
          return faceMatcher.findBestMatch(element.descriptor);
        });
        results.forEach((bestMatch) => {
          // Ignore unknown faces
          if (bestMatch.label != 'unknown') {
            // Face found
            console.log("Found " + bestMatch.label + " on " + imgPath + " from " + fromPath, bestMatch.distance);
            // Stop looping over the remaining files
            match = true;
          }
        });
      });
    });
  });
}
}

Here we find the best match within a maximum distance and show the predicted label. The distance is calculated between the pre-captured descriptors and the detected face descriptors; the closer the distance is to 0, the closer the match.
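findBestMatch returns a FaceMatch object with a label and a distance; when no labeled descriptor is within maxDescriptorDistance, the label falls back to 'unknown'. A sketch of what a result looks like:

const bestMatch = faceMatcher.findBestMatch(fullFaceDescriptions[0].descriptor);
console.log(bestMatch.label);    // e.g. 'Hilton', or 'unknown' if no match is close enough
console.log(bestMatch.distance); // e.g. 0.42; closer to 0 means a better match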

console.log("Found " + bestMatch.label + " on " + imgPath + " from " + fromPath, bestMatch.distance);

And here we log what we found; you can build whatever you want on top of these results.
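For instance, tying back to the goal of this series, a match could open a specific Chrome profile. This is only a sketch; the Chrome binary name and profile directory are assumptions that depend on your system:

const { spawn } = require('child_process');

function openChromeProfile(profileDirectory) {
  // 'google-chrome' works on most Linux systems; macOS and Windows use different binary names
  spawn('google-chrome', ['--profile-directory=' + profileDirectory], {
    detached: true,
    stdio: 'ignore',
  });
}

// Hypothetical mapping from a recognized label to a Chrome profile
if (bestMatch.label === 'Hilton') {
  openChromeProfile('Profile 1');
}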

Let’s see the whole function

async function detectFace(fullFaceDescriptions, label, fromPath) {
  // If a match is found, stop looping over the remaining files
  let match;
  // Folder with pre-captured faces for this label
  let compare = './images/' + label + '/';
  // Read the compare folder
  fs.readdir(compare, async (_err, files) => {
    // For each file, detect the face
    files.forEach(async (file) => {
      let imgPath = compare + file;
      fs.readFile(imgPath, async (_err, img) => {
        // If a match was already found, skip the remaining files
        if (match) {
          return;
        }
        // Decode the image buffer to JPEG pixel data
        let imgJpeg = jpeg.decode(img, true);
        // Create a TensorFlow tensor from the pixels
        let tFrame = tf.browser.fromPixels(imgJpeg);
        // Detect a single face with its descriptor
        const fullFaceDescription = await faceapi.detectSingleFace(tFrame).withFaceLandmarks().withFaceDescriptor();
        if (!fullFaceDescription) {
          return;
        }
        // Generate face descriptors and labels; you can add more folders or image templates to improve matching
        const faceDescriptors = [fullFaceDescription.descriptor];
        let labeledFaceDescriptors = new faceapi.LabeledFaceDescriptors(label, faceDescriptors);
        const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, maxDescriptorDistance);
        // Find the best match from the generated labels
        const results = fullFaceDescriptions.map(element => {
          return faceMatcher.findBestMatch(element.descriptor);
        });
        results.forEach((bestMatch) => {
          // Ignore unknown faces
          if (bestMatch.label != 'unknown') {
            // Face found
            console.log("Found " + bestMatch.label + " on " + imgPath + " from " + fromPath, bestMatch.distance);
            // Stop looping over the remaining files
            match = true;
          }
        });
      });
    });
  });
}

Now let’s create a function that wires everything together

async function process() {
  // Load the models from disk
  await faceDetectionLandmark68.loadFromDisk(MODELS);
  await faceRecognition.loadFromDisk(MODELS);
  await ssdMobilenetv1.loadFromDisk(MODELS);
  // Samples folder
  let faces = './images/samples/';
  // Read the samples folder
  fs.readdir(faces, async (_err, files) => {
    // For each file, detect the faces
    files.forEach(async (file) => {
      let imgPath = faces + file;
      fs.readFile(imgPath, async (_err, img) => {
        // Get all detected faces with their descriptors
        let fullFaceDescriptions = await detectFaces(img);
        // Find single-label faces, comparing against the compare folder, which in this case has only Hilton faces
        detectFace(fullFaceDescriptions, "Hilton", imgPath);
      });
    });
  });
}
// Call process after 100 ms
setTimeout(process, 100);

Here we load all the models from the models folder and read the images/samples folder; for each image we find the face descriptors and send them to the detectFace function, which finds the Hilton faces and labels each face it matches.
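As a side note, the nested callbacks could be flattened with fs.promises; here is a minimal sketch of the same flow, assuming the functions defined above:

const fsp = require('fs').promises;

async function processAsync() {
  // Load the models from disk
  await faceDetectionLandmark68.loadFromDisk(MODELS);
  await faceRecognition.loadFromDisk(MODELS);
  await ssdMobilenetv1.loadFromDisk(MODELS);
  // For each sample image, detect the faces and compare them with the Hilton folder
  const faces = './images/samples/';
  for (const file of await fsp.readdir(faces)) {
    const imgPath = faces + file;
    const img = await fsp.readFile(imgPath);
    const fullFaceDescriptions = await detectFaces(img);
    detectFace(fullFaceDescriptions, 'Hilton', imgPath);
  }
}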

In the end, we just add something to call the function; a direct call would work too.

You can find the code for this part of the article at https://github.com/HiltonWS/profileIdentifier/

Thank you for reading! In the next part we will see how to identify a face in real time (maybe?). See you then.


Hi! I’m Hilton W. Silva, a software developer who is passionate about open source and technologies that can help people. Please check out hiltonws.com