Easily build a Web App with Haar Cascades and OpenCV.js

Nabeel Khan · Published in Artificialis · Jun 22, 2023
Fig — 01 Image By Author Inspired By (rawpixel)

I’ve been building computer-vision-based applications for hands-on practice and out of curiosity, but they were mostly desktop-native. Recently, however, I came across a client who needed an application built with OpenCV.js. It was a fairly complex problem that took days to figure out, given my average JavaScript experience.

I pulled through, but it was a challenging ordeal, so I decided to work on my web-development skills before taking on another order. I’ve found that the most effective way to improve is to take a project-based approach and help fellow learners along the way.

Here I am, starting the journey by building a face detection web application using none other than Haar cascades and our very own OpenCV.js.

So without wasting any time, let’s dive into the coding part!

Setting up the VS Code editor

Before commencing our coding expedition, we need to set up our editor. Here’s what we’ll do:

  1. Install the Live Server extension.
  2. Install the Auto Rename Tag extension.
  3. Install the Prettier extension.

There’s nothing to code for this step. If you’re new to the editor, just look at the activity bar on the left: just below Run and Debug you’ll see the “Extensions” icon. Search for each extension there and install it.

Downloading Requisites

After setting up the code editor, we need to download the opencv.js library and the Haar cascade XML files. To get opencv.js, copy the entire script from here, save it in a text editor as opencv.js, and move the file to your project directory. Next, download the trained Haar cascade XMLs for face and eye detection from this git repo and move them to the project directory as well.
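Later on, the downloaded opencv.js will be pulled into the page with a script tag whose onload attribute kicks off our detection code. As a quick preview (the openCvReady callback is defined in haar.js, which we'll write below, and the path depends on where you place the file):

<script async src="opencv.js" onload="openCvReady();"></script>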

JavaScripting

Utils.js

Now we’re all set to start coding. We’ll split the JavaScript into two files. The first file, “utils.js”, handles library loading, file fetching, canvas display and camera initialization. The second file will focus solely on the Haar cascade logic and related operations.

function Utils(errorOutputId) {
  let self = this;
  this.errorOutput = document.getElementById(errorOutputId);

Here we initialize the Utils (utility) constructor and store a reference to the HTML element that will display error messages.

  const OPENCV_URL = 'opencv.js';
  this.loadOpenCv = function(onloadCallback) {
    let script = document.createElement('script');
    script.setAttribute('async', '');
    script.setAttribute('type', 'text/javascript');
    script.addEventListener('load', () => {
      // Checking the loading of the buildInformation
      if (cv.getBuildInformation) {
        console.log(cv.getBuildInformation());
        onloadCallback();
      } else {
        // Web Assembly check
        cv['onRuntimeInitialized'] = () => {
          console.log(cv.getBuildInformation());
          onloadCallback();
        };
      }
    });
    script.addEventListener('error', () => {
      self.printError('Failed to load ' + OPENCV_URL);
    });
    script.src = OPENCV_URL;
    let node = document.getElementsByTagName('script')[0];
    node.parentNode.insertBefore(script, node);
  };

This dynamically loads the OpenCV.js library in the browser. It creates a script element, sets the required attributes, and adds event listeners to handle both loading and errors. The if-else statement checks whether “cv.getBuildInformation” exists. If it does, the onloadCallback() function is invoked; otherwise the library is a WebAssembly (Wasm) build that is still initializing, so an “onRuntimeInitialized” callback is registered to be called once the runtime is ready. In short, the snippet ensures the opencv.js library is loaded properly before anything else runs.
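To make the flow concrete, here is a minimal, hypothetical usage sketch of this method; in our app the actual wiring happens in haar.js and the HTML further below, and 'errorMessage' is the id of the error-output element:

let utils = new Utils('errorMessage'); // error messages will be written into #errorMessage
utils.loadOpenCv(() => {
  // OpenCV.js is loaded and initialized at this point
  console.log('OpenCV.js is ready');
});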

  this.createFileFromUrl = function(path, url, callback) {
    let request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    request.onload = function(ev) {
      if (request.readyState === 4) {
        if (request.status === 200) {
          let data = new Uint8Array(request.response);
          cv.FS_createDataFile('/', path, data, true, false, false);
          callback();
        } else {
          self.printError('Failed to load ' + url + ' status: ' + request.status);
        }
      }
    };
    request.send();
  };

The createFileFromUrl method fetches a file from a given URL and writes it into OpenCV.js’s virtual (in-memory) file system; we’ll use it later to load the Haar cascade XML files, while the live webcam supplies the frames in real time. It uses an XMLHttpRequest to retrieve the file as an array buffer. On a successful response it converts the result to a Uint8Array, creates the virtual file with “cv.FS_createDataFile”, and then invokes the callback once the file has been created.
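Later, in haar.js, we'll use this method to pull the cascade XML files into that virtual file system, roughly like this (faceClassifier is the cv.CascadeClassifier we'll create there):

utils.createFileFromUrl('haarcascade_frontalface_default.xml', 'haarcascade_frontalface_default.xml', () => {
  faceClassifier.load('haarcascade_frontalface_default.xml'); // load the cascade once the virtual file exists
});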

  this.loadImageToCanvas = function(url, canvasId) {
    let canvas = document.getElementById(canvasId);
    let ctx = canvas.getContext('2d');
    let img = new Image();
    img.crossOrigin = 'anonymous';
    img.onload = function() {
      canvas.width = img.width;
      canvas.height = img.height;
      ctx.drawImage(img, 0, 0, img.width, img.height);
    };
    img.src = url;
  };

The above snippet is responsible for drawing an image onto a canvas. The “loadImageToCanvas” function takes two parameters, a URL and a canvas ID. The canvas variable fetches the canvas element from the HTML document using the canvasId parameter, and .getContext('2d') gives us a 2D rendering context to draw with. We then create an Image object to hold the image that will be loaded, and set its crossOrigin property to 'anonymous' so that images can be requested from other domains. The onload handler resizes the canvas to match the image’s width and height and calls drawImage to paint the image onto the canvas. Finally, img.src triggers the actual download from the URL.
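We won't need this helper for the live-webcam demo, but as a hypothetical illustration (sample.jpg is a placeholder file name; canvas_output is the canvas id we'll define in the HTML):

utils.loadImageToCanvas('sample.jpg', 'canvas_output'); // draw sample.jpg onto the canvas with id 'canvas_output'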

  this.executeCode = function(textAreaId) {
    try {
      this.clearError();
      let code = document.getElementById(textAreaId).value;
      eval(code);
    } catch (err) {
      this.printError(err);
    }
  };
  this.clearError = function() {
    this.errorOutput.innerHTML = '';
  };
  this.printError = function(err) {
    if (typeof err === 'undefined') {
      err = '';
    } else if (typeof err === 'number') {
      if (!isNaN(err)) {
        if (typeof cv !== 'undefined') {
          err = 'Exception: ' + cv.exceptionFromPtr(err).msg;
        }
      }
    } else if (typeof err === 'string') {
      let ptr = Number(err.split(' ')[0]);
      if (!isNaN(ptr)) {
        if (typeof cv !== 'undefined') {
          err = 'Exception: ' + cv.exceptionFromPtr(ptr).msg;
        }
      }
    } else if (err instanceof Error) {
      err = err.stack.replace(/\n/g, '<br>');
    }
    this.errorOutput.innerHTML = err;
  };

This part of the code handles exceptions. The executeCode function takes a textAreaId parameter and evaluates the code in that text area, while the catch block captures any error raised during execution. The clearError function resets the errorOutput element to an empty string. The printError function formats the error message based on the type of error encountered (undefined, a numeric pointer, a string, or an Error object) before displaying it in the error-output element.

  this.loadCode = function(scriptId, textAreaId) {
    let scriptNode = document.getElementById(scriptId);
    let textArea = document.getElementById(textAreaId);
    if (scriptNode.type !== 'text/code-snippet') {
      throw Error('Unknown code snippet type');
    }
    textArea.value = scriptNode.text.replace(/^\n/, '');
  };

The above code snippet defines a function that loads code from a script element and populates a textarea with it. The loadCode function takes two parameters, scriptId and textAreaId, which are fetched from the HTML with .getElementById and stored in scriptNode and textArea. It then checks that scriptNode.type is 'text/code-snippet' and throws an error otherwise. After that confirmation, the text content of the script element (minus any leading newline) is assigned to the value property of the textarea.
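This helper isn't used in the final app either, but a hypothetical piece of markup it expects would look like this (the ids are placeholders):

<script type="text/code-snippet" id="codeSnippet">
console.log('hello from the snippet');
</script>
<textarea id="codeEditor"></textarea>

Calling utils.loadCode('codeSnippet', 'codeEditor') would then copy the snippet text into the textarea.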

  function onVideoCanPlay() {
    if (self.onCameraStartedCallback) {
      self.onCameraStartedCallback(self.stream, self.video);
    }
  }

This snippet defines onVideoCanPlay(), which is used as an event handler. When the video is ready to play, it checks whether the onCameraStartedCallback function is defined and, if so, calls it with self.stream and self.video as arguments. Its purpose is to trigger a callback once the video can play, so that further actions or processing can begin.

  this.startCamera = function(resolution, callback, videoId) {
    const constraints = {
      'qvga': {width: {exact: 320}, height: {exact: 240}},
      'vga': {width: {exact: 640}, height: {exact: 480}}
    };
    let video = document.getElementById(videoId);
    if (!video) {
      video = document.createElement('video');
    }
    let videoConstraint = constraints[resolution];
    if (!videoConstraint) {
      videoConstraint = true;
    }
    navigator.mediaDevices.getUserMedia({video: videoConstraint, audio: false})
      .then(function(stream) {
        video.srcObject = stream;
        video.play();
        self.video = video;
        self.stream = stream;
        self.onCameraStartedCallback = callback;
        video.addEventListener('canplay', onVideoCanPlay, false);
      })
      .catch(function(err) {
        self.printError('Camera Error: ' + err.name + ' ' + err.message);
      });
  };
  this.stopCamera = function() {
    if (this.video) {
      this.video.pause();
      this.video.srcObject = null;
      this.video.removeEventListener('canplay', onVideoCanPlay);
    }
    if (this.stream) {
      this.stream.getVideoTracks()[0].stop();
    }
  };
};

The code defines two functions, “startCamera” and “stopCamera”, which are responsible for controlling the camera stream.

The “startCamera” function takes three parameters: “resolution”, “callback”, and “videoId”. It sets up the desired resolution options and creates a video element for displaying the camera stream. It then uses the “navigator.mediaDevices.getUserMedia” method to request access to the camera’s video stream. If the request is successful, the stream is assigned to the video element’s “srcObject” property, and the stream starts playing. The function also assigns the video element, stream, and callback to corresponding properties for later use. Additionally, an event listener is added to the video element for the “canplay” event, which triggers the onVideoCanPlay function.

On the other hand, the “stopCamera” function doesn’t take any parameters. It checks if the video element exists and, if so, pauses the video playback, clears the srcObject property to stop the camera stream, and removes the canplay event listener. Furthermore, if the stream exists, it stops the video tracks associated with the stream.
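As an illustrative sketch (our app requests the camera directly in haar.js instead, so this exact call isn't part of the final code), the camera helpers would be used roughly like this:

let utils = new Utils('errorMessage');
utils.startCamera('qvga', (stream, video) => {
  // the webcam is now streaming into the video element
  console.log('camera running at', video.videoWidth, 'x', video.videoHeight);
}, 'cam_input');
// ...later, when we're done processing:
utils.stopCamera();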

That’s the end of our utils.js file. Next we’ll move on to the haar.js file to finalize our Haar detector, but before that, here’s utils.js in its entirety.

function Utils(errorOutputId) {
  let self = this;
  this.errorOutput = document.getElementById(errorOutputId);
  const OPENCV_URL = 'opencv.js';
  this.loadOpenCv = function(onloadCallback) {
    let script = document.createElement('script');
    script.setAttribute('async', '');
    script.setAttribute('type', 'text/javascript');
    script.addEventListener('load', () => {
      if (cv.getBuildInformation) {
        console.log(cv.getBuildInformation());
        onloadCallback();
      } else {
        cv['onRuntimeInitialized'] = () => {
          console.log(cv.getBuildInformation());
          onloadCallback();
        };
      }
    });
    script.addEventListener('error', () => {
      self.printError('Failed to load ' + OPENCV_URL);
    });
    script.src = OPENCV_URL;
    let node = document.getElementsByTagName('script')[0];
    node.parentNode.insertBefore(script, node);
  };
  this.createFileFromUrl = function(path, url, callback) {
    let request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    request.onload = function(ev) {
      if (request.readyState === 4) {
        if (request.status === 200) {
          let data = new Uint8Array(request.response);
          cv.FS_createDataFile('/', path, data, true, false, false);
          callback();
        } else {
          self.printError('Failed to load ' + url + ' status: ' + request.status);
        }
      }
    };
    request.send();
  };
  this.loadImageToCanvas = function(url, canvasId) {
    let canvas = document.getElementById(canvasId);
    let ctx = canvas.getContext('2d');
    let img = new Image();
    img.crossOrigin = 'anonymous';
    img.onload = function() {
      canvas.width = img.width;
      canvas.height = img.height;
      ctx.drawImage(img, 0, 0, img.width, img.height);
    };
    img.src = url;
  };
  this.executeCode = function(textAreaId) {
    try {
      this.clearError();
      let code = document.getElementById(textAreaId).value;
      eval(code);
    } catch (err) {
      this.printError(err);
    }
  };
  this.clearError = function() {
    this.errorOutput.innerHTML = '';
  };
  this.printError = function(err) {
    if (typeof err === 'undefined') {
      err = '';
    } else if (typeof err === 'number') {
      if (!isNaN(err)) {
        if (typeof cv !== 'undefined') {
          err = 'Exception: ' + cv.exceptionFromPtr(err).msg;
        }
      }
    } else if (typeof err === 'string') {
      let ptr = Number(err.split(' ')[0]);
      if (!isNaN(ptr)) {
        if (typeof cv !== 'undefined') {
          err = 'Exception: ' + cv.exceptionFromPtr(ptr).msg;
        }
      }
    } else if (err instanceof Error) {
      err = err.stack.replace(/\n/g, '<br>');
    }
    this.errorOutput.innerHTML = err;
  };
  this.loadCode = function(scriptId, textAreaId) {
    let scriptNode = document.getElementById(scriptId);
    let textArea = document.getElementById(textAreaId);
    if (scriptNode.type !== 'text/code-snippet') {
      throw Error('Unknown code snippet type');
    }
    textArea.value = scriptNode.text.replace(/^\n/, '');
  };
  this.addFileInputHandler = function(fileInputId, canvasId) {
    let inputElement = document.getElementById(fileInputId);
    inputElement.addEventListener('change', (e) => {
      let files = e.target.files;
      if (files.length > 0) {
        let imgUrl = URL.createObjectURL(files[0]);
        self.loadImageToCanvas(imgUrl, canvasId);
      }
    }, false);
  };
  function onVideoCanPlay() {
    if (self.onCameraStartedCallback) {
      self.onCameraStartedCallback(self.stream, self.video);
    }
  }
  this.startCamera = function(resolution, callback, videoId) {
    const constraints = {
      'qvga': {width: {exact: 320}, height: {exact: 240}},
      'vga': {width: {exact: 640}, height: {exact: 480}}
    };
    let video = document.getElementById(videoId);
    if (!video) {
      video = document.createElement('video');
    }
    let videoConstraint = constraints[resolution];
    if (!videoConstraint) {
      videoConstraint = true;
    }
    navigator.mediaDevices.getUserMedia({video: videoConstraint, audio: false})
      .then(function(stream) {
        video.srcObject = stream;
        video.play();
        self.video = video;
        self.stream = stream;
        self.onCameraStartedCallback = callback;
        video.addEventListener('canplay', onVideoCanPlay, false);
      })
      .catch(function(err) {
        self.printError('Camera Error: ' + err.name + ' ' + err.message);
      });
  };
  this.stopCamera = function() {
    if (this.video) {
      this.video.pause();
      this.video.srcObject = null;
      this.video.removeEventListener('canplay', onVideoCanPlay);
    }
    if (this.stream) {
      this.stream.getVideoTracks()[0].stop();
    }
  };
};

Haar.js

Now that the utility setup is done, we’ll write the detection code in a new file named haar.js. Let’s get straight to it!

let isFaceDetection = true; // Flag: true = face-only detection, false = face + eyes detection
function switchDetection() {
  isFaceDetection = !isFaceDetection; // Toggle the flag
}

The code starts with a flag, isFaceDetection, which defaults to face-only detection, followed by a toggle function, switchDetection(). This switch function powers a toggle in the application, letting the user flip between face-only detection and face-plus-eyes detection by clicking a switch button. The button itself is wired up in the code below.

function addNavigationButtons() {
  // Create a button for switching detection
  let switchButton = document.createElement('button');
  switchButton.textContent = 'Switch Detection';
  switchButton.addEventListener('click', switchDetection);
  // Get the button container element
  let buttonContainer = document.getElementById('buttonContainer');
  // Append the button to the button container
  buttonContainer.appendChild(switchButton);
}

This code is quite simple to follow. addNavigationButtons() first creates a button element in switchButton and sets its text content. The next line wires up event handling by passing the event name 'click' and the switchDetection callback we defined above. Next we fetch the button container by the ID defined in the HTML and store it in buttonContainer (don’t worry, we’ll see the HTML in the final stage of this project). Finally, we append switchButton as a child of the fetched buttonContainer element, which lets us position and style the button through the container defined in the HTML.

function openCvReady() {
  cv['onRuntimeInitialized'] = () => {
    let video = document.getElementById("cam_input"); // cam_input is the id of the video tag
    navigator.mediaDevices.getUserMedia({ video: true, audio: false })
      .then(function(stream) {
        video.srcObject = stream;
        video.play();
      })
      .catch(function(err) {
        console.log("An error occurred! " + err);
      });
    let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
    let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    let gray = new cv.Mat();
    let cap = new cv.VideoCapture(video); // capture frames from the video element
    let faces = new cv.RectVector();
    let eyes = new cv.RectVector();
    let faceClassifier = new cv.CascadeClassifier();
    let eyeClassifier = new cv.CascadeClassifier();
    let utils = new Utils('errorMessage');
    let faceCascadeFile = 'haarcascade_frontalface_default.xml'; // path to face cascade xml
    let eyeCascadeFile = 'haarcascade_eye.xml'; // path to eye cascade xml
    utils.createFileFromUrl(faceCascadeFile, faceCascadeFile, () => {
      faceClassifier.load(faceCascadeFile); // in the callback, load the face cascade from file
    });
    utils.createFileFromUrl(eyeCascadeFile, eyeCascadeFile, () => {
      eyeClassifier.load(eyeCascadeFile); // in the callback, load the eye cascade from file
    });
    const FPS = 24;

Here openCvReady() is the callback invoked once the OpenCV.js script has loaded. onRuntimeInitialized is an event fired when the library’s runtime is initialized, and we assign an arrow-function callback to it. Inside the callback, the video element is retrieved using the ID “cam_input”. Then the promise-based navigator.mediaDevices.getUserMedia is used to request access to the user’s camera; on success, then() assigns the stream to the video element’s srcObject property and starts playback, while catch() logs any errors.

Next, several variables and objects are initialized. src, dst and gray are OpenCV matrices used for image processing, and cap is a VideoCapture object that reads frames from the video element. faces and eyes are RectVector objects that will store the detected faces and eyes respectively, while faceClassifier and eyeClassifier are CascadeClassifier instances used to load and apply the trained cascades for face and eye detection. An instance of the Utils class is created; its constructor takes “errorMessage”, the ID of the HTML element used to display error messages. utils.createFileFromUrl() is then called to asynchronously fetch each cascade XML file; it takes the virtual file path, the URL and a callback, and inside each callback the classifier is loaded with faceClassifier.load() and eyeClassifier.load(). Finally, FPS is set to 24, the target frame rate for video processing.

    function processVideo() {
      let begin = Date.now();
      cap.read(src);
      src.copyTo(dst);
      cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
      if (isFaceDetection) {
        try {
          faceClassifier.detectMultiScale(gray, faces, 1.1, 3, 0);
          console.log(faces.size());
        } catch (err) {
          console.log(err);
        }
        for (let i = 0; i < faces.size(); ++i) {
          let face = faces.get(i);
          let point1 = new cv.Point(face.x, face.y);
          let point2 = new cv.Point(face.x + face.width, face.y + face.height);
          cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
        }
      } else {
        try {
          faceClassifier.detectMultiScale(gray, faces, 1.1, 3, 0);
          eyeClassifier.detectMultiScale(gray, eyes, 1.1, 3, 0);
          console.log(eyes.size());
        } catch (err) {
          console.log(err);
        }
        for (let i = 0; i < faces.size(); ++i) {
          let face = faces.get(i);
          let point1 = new cv.Point(face.x, face.y);
          let point2 = new cv.Point(face.x + face.width, face.y + face.height);
          cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
        }
        for (let i = 0; i < eyes.size(); ++i) {
          let eye = eyes.get(i);
          let point1 = new cv.Point(eye.x, eye.y);
          let point2 = new cv.Point(eye.x + eye.width, eye.y + eye.height);
          cv.rectangle(dst, point1, point2, [0, 255, 0, 255]);
        }
      }
      cv.imshow("canvas_output", dst);
      // schedule next one.
      let delay = 1000 / FPS - (Date.now() - begin);
      setTimeout(processVideo, delay);
    }
    // Add navigation buttons
    addNavigationButtons();
    // schedule first one.
    setTimeout(processVideo, 0);
  };

The “processVideo()” function is responsible for processing video frames in real time. It starts by capturing a frame from the video source and converting it to grayscale. It then decides, based on the “isFaceDetection” flag, whether to run face detection only or face and eye detection together. In face-only mode, the face classifier detects faces in the frame and rectangles are drawn around them. In the combined mode, the eye classifier is also run on the grayscale frame and rectangles are drawn around the detected eyes as well. The processed frame with the drawn rectangles is displayed on the canvas. The function then calculates the delay required to hit the desired frame rate and schedules the next frame with setTimeout(). Additionally, addNavigationButtons() is called to add the button for toggling between the two detection modes, and the first frame is scheduled. Overall, this gives us real-time video processing with face and eye detection.

That wraps up the JavaScript part. Before moving on to the HTML, let’s take a look at the entire haar.js file.

let isFaceDetection = true; // Flag: true = face-only detection, false = face + eyes detection
function switchDetection() {
  isFaceDetection = !isFaceDetection; // Toggle the flag
}
function addNavigationButtons() {
  // Create a button for switching detection
  let switchButton = document.createElement('button');
  switchButton.textContent = 'Switch Detection';
  switchButton.addEventListener('click', switchDetection);
  // Get the button container element
  let buttonContainer = document.getElementById('buttonContainer');
  // Append the button to the button container
  buttonContainer.appendChild(switchButton);
}
function openCvReady() {
  cv['onRuntimeInitialized'] = () => {
    let video = document.getElementById("cam_input"); // cam_input is the id of the video tag
    navigator.mediaDevices.getUserMedia({ video: true, audio: false })
      .then(function(stream) {
        video.srcObject = stream;
        video.play();
      })
      .catch(function(err) {
        console.log("An error occurred! " + err);
      });
    let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
    let dst = new cv.Mat(video.height, video.width, cv.CV_8UC1);
    let gray = new cv.Mat();
    let cap = new cv.VideoCapture(video); // capture frames from the video element
    let faces = new cv.RectVector();
    let eyes = new cv.RectVector();
    let faceClassifier = new cv.CascadeClassifier();
    let eyeClassifier = new cv.CascadeClassifier();
    let utils = new Utils('errorMessage');
    let faceCascadeFile = 'haarcascade_frontalface_default.xml'; // path to face cascade xml
    let eyeCascadeFile = 'haarcascade_eye.xml'; // path to eye cascade xml
    utils.createFileFromUrl(faceCascadeFile, faceCascadeFile, () => {
      faceClassifier.load(faceCascadeFile); // in the callback, load the face cascade from file
    });
    utils.createFileFromUrl(eyeCascadeFile, eyeCascadeFile, () => {
      eyeClassifier.load(eyeCascadeFile); // in the callback, load the eye cascade from file
    });
    const FPS = 24;
    function processVideo() {
      let begin = Date.now();
      cap.read(src);
      src.copyTo(dst);
      cv.cvtColor(dst, gray, cv.COLOR_RGBA2GRAY, 0);
      if (isFaceDetection) {
        try {
          faceClassifier.detectMultiScale(gray, faces, 1.1, 3, 0);
          console.log(faces.size());
        } catch (err) {
          console.log(err);
        }
        for (let i = 0; i < faces.size(); ++i) {
          let face = faces.get(i);
          let point1 = new cv.Point(face.x, face.y);
          let point2 = new cv.Point(face.x + face.width, face.y + face.height);
          cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
        }
      } else {
        try {
          faceClassifier.detectMultiScale(gray, faces, 1.1, 3, 0);
          eyeClassifier.detectMultiScale(gray, eyes, 1.1, 3, 0);
          console.log(eyes.size());
        } catch (err) {
          console.log(err);
        }
        for (let i = 0; i < faces.size(); ++i) {
          let face = faces.get(i);
          let point1 = new cv.Point(face.x, face.y);
          let point2 = new cv.Point(face.x + face.width, face.y + face.height);
          cv.rectangle(dst, point1, point2, [255, 0, 0, 255]);
        }
        for (let i = 0; i < eyes.size(); ++i) {
          let eye = eyes.get(i);
          let point1 = new cv.Point(eye.x, eye.y);
          let point2 = new cv.Point(eye.x + eye.width, eye.y + eye.height);
          cv.rectangle(dst, point1, point2, [0, 255, 0, 255]);
        }
      }
      cv.imshow("canvas_output", dst);
      // schedule next one.
      let delay = 1000 / FPS - (Date.now() - begin);
      setTimeout(processVideo, delay);
    }
    // Add navigation buttons
    addNavigationButtons();
    // schedule first one.
    setTimeout(processVideo, 0);
  };
}

HTML

We’ll now write the HTML for the basic interface that ties everything together.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Opencv JS</title>
  <style>
    body {
      font-family: Arial, sans-serif;
      background-color: #f4f4f4;
      margin: 0;
      padding: 0;
      display: flex;
      justify-content: center;
      align-items: center;
      height: 100vh;
      flex-direction: column;
    }
    h2, h3 {
      text-align: center;
      color: #333;
      margin: 0;
      padding: 10px;
    }
    video, canvas {
      display: block;
      margin: 0 auto;
      background-color: #000;
      box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
    }
    #buttonContainer {
      padding: 7px 15px;
      font-size: 10px;
      background-color: #4CAF50;
      color: #fff;
      border: none;
      border-radius: 2px;
      cursor: pointer;
      transition: background-color 0.3s ease;
    }
    #buttonContainer:hover {
      background-color: #45a049;
    }
  </style>
  <script async src="js/opencv.js" onload="openCvReady();"></script>
  <script src="js/haar.js"></script>
  <script src="js/utils.js"></script>
</head>
<body>
  <video id="cam_input" height="320" width="480"></video>
  <canvas id="canvas_output"></canvas>
  <h2>Haar Cascade Based Face and Eyes Detection</h2>
  <h3>Click the button to toggle between Simple Face Detection and Face + Eyes Detection.</h3>
  <div id="buttonContainer">
  </div>
</body>
</html>

The HTML is basic and doesn’t need much explanation. However, it’s important to get the path to the opencv.js library we downloaded earlier right. As you can see, the opencv.js, haar.js and utils.js files all live in the same “js” folder, while the HTML file for the above code sits alongside the “js” directory. You can adjust the paths to suit your own layout.
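For reference, the folder layout assumed by the script tags above looks like this (index.html stands for whatever you name your HTML file; the cascade XMLs sit next to it so the relative URLs in haar.js resolve):

project/
  index.html
  haarcascade_frontalface_default.xml
  haarcascade_eye.xml
  js/
    opencv.js
    utils.js
    haar.js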

Below is the output for both face detection and face-and-eyes detection. Pardon the background!

Fig — 02 Image By author. (Haar Cascade Face Detection)
Fig — 03 Image By author. (Haar Cascade Face and Eyes Detection)

Download the entire code and auxiliary files from my GitHub repo.

Conclusion

In this tutorial we implemented Haar cascade face and eye detection with OpenCV.js in a web application, and we walked through the code to understand the purpose of each method, learning a bit of both computer vision and web development along the way. I hope this tutorial adds some value to your learning journey; if I missed something or you have suggestions, do let me know. If you want the code and related files, visit my GitHub repo for this project.

Till next time….. Stay Blessed!
