Face Detection With Face-API.js and JavaScript!

Anderson Quinones
5 min read · Sep 10, 2020


source: https://www.ariadnext.com/facial-recognition-new-era-for-online-identification/

While using Facebook, have you ever wondered how the system is able to recognize and suggest people to tag in the pictures you upload, without you giving it any information?

I remember the good old days when Facebook made you click on the image and tag everyone in the picture yourself. Nowadays, Facebook already does the job for you: all you need to do is confirm the name of each person and the picture can be uploaded.

source: https://www.npr.org/sections/thetwo-way/2017/12/19/571954455/facebook-expands-use-of-facial-recognition-to-id-users-in-photos

Facial detection is a way of identifying a human face in digital images or videos. Numerous companies have used this technology in security, social media, law enforcement, entertainment, and other fields we interact with on a daily basis. There are many ways of doing facial detection; we’re going to use the face-api.js library, built on TensorFlow.js, to set up our face detection.

Prerequisites

The very first thing we need to do is include the models that will be used for recognition. Create a models folder and download the models from https://github.com/WebDevSimplified/Face-Detection-JavaScript/tree/master/models

To get started, we need to create an HTML file with a video tag, which we can style however we want; for this purpose, I’ll use a width of 720 and a height of 650. Make sure the width and height are set on the tag, because the face won’t be scanned properly if we don’t assign those values. We also don’t need audio for facial detection, so we’ll make sure the video is muted. Finally, we give it the ID video so it’s available in the script.
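A minimal sketch of that markup (the face-api.min.js and script.js file names are my assumptions; use whatever paths match your project):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- defer so the scripts run after the video element exists -->
    <script defer src="face-api.min.js"></script>
    <script defer src="script.js"></script>
  </head>
  <body>
    <!-- width/height are required so the face can be scanned properly;
         muted because we don't need audio for facial detection -->
    <video id="video" width="720" height="650" autoplay muted></video>
  </body>
</html>
```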

The next thing we want to do is add some styling to make sure our video is centered on the page.
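One way to do that centering (it also absolutely positions the canvas we’ll add later, so it sits on top of the video):

```css
body {
  margin: 0;
  padding: 0;
  width: 100vw;
  height: 100vh;
  display: flex;
  justify-content: center; /* center the video horizontally */
  align-items: center;     /* and vertically */
}

canvas {
  position: absolute; /* lets the detection canvas overlay the video */
}
```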

In order to have the camera working, we first need to grab the video element in our script file. Then we create a function called startVideo, which connects the video from the webcam to our program. To get the webcam running, we use navigator.getUserMedia(), which takes an object as its first parameter, { video: {} }; its second parameter is a callback that receives the source of the video, (stream) => (video.srcObject = stream); and we also include an error callback in case we receive an error, (err) => console.error(err). Make sure to invoke startVideo() and to include your script file in the HTML file. Once those steps are done, you can open the file in your browser (in VS Code there is a Go Live feature). Once you open your localhost, you should be able to see yourself on camera.
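A sketch of that function, assuming the video element uses the id video from the markup above. Note that the callback-style navigator.getUserMedia is deprecated; in current browsers, navigator.mediaDevices.getUserMedia({ video: {} }) returns a promise instead:

```js
const video = document.getElementById('video')

function startVideo() {
  navigator.getUserMedia(
    { video: {} },                          // request the webcam, no audio
    (stream) => (video.srcObject = stream), // pipe the stream into the <video>
    (err) => console.error(err)             // log permission/device errors
  )
}
```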

For this next step, we go to our script file and load all the face recognition models. Since the models load asynchronously, we use Promise.all, which runs all these asynchronous calls in parallel and makes them faster to execute. We pass in an array with a promise for each model in the models folder, then call the startVideo function.
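With the downloaded models sitting in a /models folder (the path is an assumption; adjust it to your setup), the loading step could look like this:

```js
// Load the pretrained networks in parallel, then turn on the webcam.
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),   // fast face detector
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),  // 68-point landmarks
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'), // face descriptors
  faceapi.nets.faceExpressionNet.loadFromUri('/models')   // emotion classifier
]).then(startVideo)
```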

Now we’re going to set an event listener for when the video plays; this event takes the function that makes the video recognize your face. Inside the event listener, we first set an interval so we can run the code multiple times: every 100 milliseconds. We make it an asynchronous function because face-api.js is an asynchronous library. To get the detections, we await the face API call, passing the video element and the type of detector we’re using to find faces; then we chain .withFaceLandmarks(), which marks the different regions of the face with dots. If we want to see our emotions, we can also chain .withFaceExpressions(), which returns whatever emotion you are displaying while the video is playing on your computer. With this, it will detect all the faces in the webcam (it works for more than 3 people at once). We also need a canvas on top of the video from the camera, so we create a canvas from the video we are receiving with faceapi.createCanvasFromMedia(video), and to add that canvas to the screen all we need to do is append it using document.body.append(canvas).
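The skeleton of that listener might look like this; resizing, clearing, and drawing get filled in over the next two steps:

```js
video.addEventListener('play', () => {
  // Canvas created from the video so we can draw detections on top of it
  const canvas = faceapi.createCanvasFromMedia(video)
  document.body.append(canvas)

  setInterval(async () => {
    // Detect every face in the current frame, plus landmarks and expressions
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
    // ...resize, clear, and draw (next steps)
  }, 100) // run every 100 milliseconds
})
```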

We want our canvas to be sized exactly like our video, so we create a variable, displaySize, that contains an object with two keys: width: video.width and height: video.height. Inside the setInterval, we create a resizedDetections variable and set it equal to faceapi.resizeResults(detections, displaySize), which adjusts the boxes to the size of our faces and to the camera size as well.
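In code, those two pieces are just:

```js
// Once, inside the 'play' listener:
const displaySize = { width: video.width, height: video.height }

// Each tick, inside setInterval:
const resizedDetections = faceapi.resizeResults(detections, displaySize)
```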

In order to have a clear video and not have stale drawings sitting in front of it the entire time, we call canvas.getContext(“2d”).clearRect(0, 0, canvas.width, canvas.height), which clears everything for us. An important detail is that we always want to match the canvas to its display size, and we can do that with faceapi.matchDimensions(canvas, displaySize). Lastly, all we have to do is draw the face detections, the landmarks, and the facial expressions.
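Putting the last three steps together, the whole ‘play’ listener comes out roughly like this (the drawing helpers live under faceapi.draw):

```js
video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video)
  document.body.append(canvas)

  // Keep the canvas matched to the video's display size
  const displaySize = { width: video.width, height: video.height }
  faceapi.matchDimensions(canvas, displaySize)

  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()

    const resizedDetections = faceapi.resizeResults(detections, displaySize)

    // Wipe the previous frame's drawings before drawing the new ones
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)

    faceapi.draw.drawDetections(canvas, resizedDetections)      // bounding boxes
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)   // 68-point dots
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections) // emotion labels
  }, 100)
})
```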

If you would like to get the project as is, visit https://github.com/AndersonQuinones/What-s-my-Emotion- for the second part of the code!
