Face Detection in Unity using OpenCV

Nirmal Mendis · Published in Geek Culture · 5 min read · Sep 19, 2021

(Image: Face Detection, by Tumisu from Pixabay)

Face detection is the process of using algorithms to identify faces in images. There are quite a few such algorithms and the ‘Haar cascades’ algorithm provided in OpenCV is a widely used technique.
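To make the idea concrete before we start: using a Haar cascade boils down to loading a trained cascade file and asking it for face bounding boxes in a grayscale image. Here is a condensed sketch using the OpenCV for Unity API that the rest of this article builds on (the helper class and method are only illustrative; the full script is built step by step below):

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ObjdetectModule;

//illustrative helper showing the core Haar-cascade calls
public static class HaarFaceDetector
{
    //returns bounding boxes of the faces found in a grayscale image
    public static Rect[] DetectFaces(Mat grayImage, CascadeClassifier cascade)
    {
        MatOfRect faces = new MatOfRect();
        //run the trained cascade over the image
        cascade.detectMultiScale(grayImage, faces);
        return faces.toArray();
    }
}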

In this article, I will be developing a Unity application for face detection which can be run on Android as well.

Let’s break down the process into two main parts:

  1. Setting Up
  2. Coding

1. Setting Up

Before jumping into implementing the code, we need to set up a few things.

  1. First, let’s create a Unity project and add a RawImage to the UI Canvas. I have changed the aspect ratio from Free Aspect to 4:3 in the Game window and resized the RawImage as well. The setup should look as follows:

(Image: Unity OpenCV face detection setup)

  2. Next, we need to import OpenCV into our project.

This asset can be found here:

OpenCV for Unity | Integration | Unity Asset Store.

  3. Next, we need to include ‘haarcascade_frontalface_default.xml’ in our project. To do this, first download the file from OpenCV’s GitHub repository, then create a folder named ‘StreamingAssets’ in the Assets folder of the Unity project and place the xml file in this folder (a short sketch of the resulting layout follows this list).

Tip: The OpenCV GitHub repository provides several other pre-trained cascade classifiers for face detection (including alternative frontal-face models). You can try them as well.

  4. Next, let’s create a C# script, name it ‘FaceDetection.cs’, and open it in Visual Studio or your preferred IDE.
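To make step 3 concrete, the project should now contain the cascade file under StreamingAssets, roughly like this (the Scenes folder and script location are just examples; only the StreamingAssets path matters):

Assets/
    StreamingAssets/
        haarcascade_frontalface_default.xml
    Scenes/
    FaceDetection.cs

Later in the code we will load this file with the asset’s ‘Utils.getFilePath()’ helper which, as used in the asset’s own examples, resolves a file placed in StreamingAssets to a path that OpenCV can read on both desktop and Android.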

Now, we have completed setting up the required configuration to start coding. Let’s move on to the next section.

2. Coding

First, we need to import the following packages:

using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.ObjdetectModule;
using OpenCVForUnity.UnityUtils;
using UnityEngine;
using UnityEngine.UI;

Next, we need a few member variables: a WebCamTexture for the camera feed, a string for the xml file name, a ‘CascadeClassifier’ to load the xml data, a ‘MatOfRect’ to store the detected faces, a ‘Texture2D’ to render the camera view with the detected faces onto the RawImage, and two Mats to hold the RGBA and grayscale frames. Let’s declare them as follows:

private WebCamTexture webcamTexture;
private string filename;
private CascadeClassifier cascade;
private MatOfRect faces;
private Texture2D texture;
private Mat rgbaMat;
private Mat grayMat;

Now, let’s create a function to initialize the above variables. Note that we will be attaching this script to the RawImage in the UI, so we can access the RawImage’s components directly from the code. First, we obtain the RectTransform component of the RawImage and rotate it according to the rotation of the WebCamTexture; this is needed because on Android devices the camera feed is rotated. Then we load the xml file data into the cascade classifier and initialize the remaining variables as follows:

void initializeData()
{
    //rotate the RawImage according to the rotation of the webcam texture (needed on Android)
    this.GetComponent<RectTransform>().Rotate(new Vector3(0, 0, 360 - webcamTexture.videoRotationAngle));
    //store the name of the xml file
    filename = "haarcascade_frontalface_default.xml";
    //initialize the cascade classifier
    cascade = new CascadeClassifier();
    //load the xml file data
    cascade.load(Utils.getFilePath(filename));
    //initialize the faces MatOfRect
    faces = new MatOfRect();
    //initialize the rgba (4-channel) and grayscale (1-channel) Mats
    rgbaMat = new Mat(webcamTexture.height, webcamTexture.width, CvType.CV_8UC4);
    grayMat = new Mat(webcamTexture.height, webcamTexture.width, CvType.CV_8UC1);

    //initialize the Texture2D that will be rendered to the RawImage
    texture = new Texture2D(rgbaMat.cols(), rgbaMat.rows(), TextureFormat.RGBA32, false);
}
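One small addition worth making (it is not in the original snippet): check that the cascade file actually loaded, since a wrong path will only surface later when detectMultiScale is called. A minimal sketch, assuming the wrapper exposes OpenCV’s standard CascadeClassifier.empty():

//optional: verify the cascade actually loaded (empty() is true if loading failed)
if (cascade.empty())
{
    Debug.LogError("Failed to load " + filename + " - check the StreamingAssets folder");
}

You can place this right after the cascade.load() call inside initializeData().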

Next, we need to obtain the camera devices available, initialize the WebCamTexture and call the above ‘initializeData()’ function. Let’s do this in the ‘Start()’ method.

void Start()
{
    //obtain the cameras available
    WebCamDevice[] cam_devices = WebCamTexture.devices;
    //create the camera texture from the first device (480x640 @ 30 fps)
    webcamTexture = new WebCamTexture(cam_devices[0].name, 480, 640, 30);
    //start the camera
    webcamTexture.Play();
    //initialize members
    initializeData();
}
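A caveat that the tutorial code glosses over: on some devices WebCamTexture reports a placeholder size (often 16x16) until the first real frame arrives, so the Mats created in initializeData() can end up with the wrong dimensions. One hedged workaround, not part of the original tutorial, is to turn Start() into a coroutine and wait for a real frame (this needs ‘using System.Collections;’ for IEnumerator):

//alternative Start(): wait until the camera delivers a real frame before allocating the Mats
IEnumerator Start()
{
    //obtain the cameras available
    WebCamDevice[] cam_devices = WebCamTexture.devices;
    //create and start the camera texture
    webcamTexture = new WebCamTexture(cam_devices[0].name, 480, 640, 30);
    webcamTexture.Play();

    //WebCamTexture may report a 16x16 placeholder until the first frame is available
    while (webcamTexture.width <= 16)
    {
        yield return null;
    }

    //initialize members once the real resolution is known
    initializeData();
}

If you use this variant, also guard Update() with a null check on rgbaMat, since a few frames may pass before initialization completes.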

Next, we need to perform face detection on each frame of the camera feed. To do this, we can make use of the ‘Update()’ method, which is executed once per frame while the application is running.

First, we need to convert the webcamTexture into a Mat. To do this we use the ‘Utils.webCamTextureToMat()’ method provided by OpenCV for Unity. Next, we convert this Mat to grayscale using the ‘Imgproc.cvtColor()’ method. After that, we extract the detected faces using the ‘cascade.detectMultiScale()’ method and draw rectangles around them using the ‘Imgproc.rectangle()’ method. Finally, we convert the Mat back to a texture using the ‘Utils.fastMatToTexture2D()’ method and render it to the RawImage. Let’s see how we can do this.

void Update()
{
    //convert the webcam texture to an rgba Mat
    Utils.webCamTextureToMat(webcamTexture, rgbaMat);
    //convert the rgba Mat to grayscale
    Imgproc.cvtColor(rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
    //extract faces
    cascade.detectMultiScale(grayMat, faces, 1.1, 4);
    //store faces in an array
    OpenCVForUnity.CoreModule.Rect[] rects = faces.toArray();
    //draw a rectangle over every face
    for (int i = 0; i < rects.Length; i++)
    {
        Debug.Log("detect faces " + rects[i]);
        Imgproc.rectangle(rgbaMat, new Point(rects[i].x, rects[i].y), new Point(rects[i].x + rects[i].width, rects[i].y + rects[i].height), new Scalar(255, 0, 0, 255), 2);
    }
    //convert the rgba Mat back to a texture
    Utils.fastMatToTexture2D(rgbaMat, texture);

    //set the RawImage texture
    this.GetComponent<RawImage>().texture = texture;
}
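Not part of the original tutorial, but good practice: the webcam and the Mats hold unmanaged resources, so release them when the script is destroyed. A minimal sketch, assuming the asset’s Mat types expose Dispose() as they do in its bundled examples:

//optional cleanup when the script (or scene) is destroyed
void OnDestroy()
{
    //stop the camera
    if (webcamTexture != null)
        webcamTexture.Stop();
    //release native Mat memory
    if (rgbaMat != null)
        rgbaMat.Dispose();
    if (grayMat != null)
        grayMat.Dispose();
    if (faces != null)
        faces.Dispose();
}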

Now we have completed the coding section as well. All that’s left is to attach this script to the RawImage as a component and run the application.

Here’s my result:

(Image: Unity OpenCV multiple face detection. Image used: Friend Student Graduate, free photo on Pixabay)

(I used my phone to show the webcam an image from Pixabay so that the detection of multiple faces can be seen.)

Finally, to create an Android app out of this project, switch the platform to Android in the Build Settings, resize the RawImage as you wish, and build and run the application.
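One Android-specific note that the article does not cover: on Android 6.0 and newer the app needs the camera permission at runtime. Unity normally adds the permission to the manifest when WebCamTexture is used, but depending on your Unity version you may still need to request it explicitly before starting the camera. A hedged sketch using Unity’s UnityEngine.Android.Permission API (add ‘using UnityEngine.Android;’ at the top of the script):

//ask the user for camera access before starting the webcam (Android only)
void EnsureCameraPermission()
{
#if UNITY_ANDROID
    if (!Permission.HasUserAuthorizedPermission(Permission.Camera))
    {
        Permission.RequestUserPermission(Permission.Camera);
    }
#endif
}

You would call this at the beginning of Start() and, strictly speaking, wait for the user’s response before calling webcamTexture.Play().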

Here’s my result:

(Image: Unity OpenCV Android face detection. Image used: Woman Fashion Beauty, free photo on Pixabay)

That’s it for this project. Please let me know your feedback. Thank you! Cheers!😀

References

FaceDetectionWebCamTextureExample.cs, EnoxSoftware/OpenCVForUnity, GitHub

Face Detection in 2 Minutes using OpenCV & Python, by Adarsh Menon, Towards Data Science
