Face Detection in iOS

Being sick for the past two days kept me from going to work, so I decided to play with OpenCV on iOS, and I ended up with this project, which I want to share with you guys :)

The idea is to use Haar feature-based cascade classifiers, which are implemented in OpenCV and commonly used for face detection. Nothing new, I have just put the pieces together and reached the results below.

In this article, you can find all the steps to get this working. I also provide a link to the source I used for each step.


The main steps for this project:

  1. Import and configure the OpenCV library in an Xcode project.
  2. Build a camera frame extractor to get a frame every time one is available.
  3. Apply detectMultiScale to detect faces and show the results.

OpenCV with iOS project:

  1. Download the latest version of OpenCV. You can find it here, then decompress it.
  2. Create a new Xcode project.
  3. Drag & drop the opencv2.framework file into your project (you can create a libs folder and drop the file inside it).

4. Go to Linked Frameworks and Libraries and make sure that OpenCV is linked. If not, press (+) -> Add Other… -> add it. You also need to add the libraries shown below.

5. Go to Build Settings -> Framework Search Paths and make sure that the path of opencv2.framework is correct, e.g. $(PROJECT_DIR)/face_detection/libs.

6. Create a new group (folder) and name it classes.

7. Inside the classes folder, do: File -> New -> File… -> Cocoa Touch Class
-> Name: OpencvWrapper, Subclass of: NSObject, Language: Objective-C
-> Next.

8. Rename OpencvWrapper.m to OpencvWrapper.mm (the .mm extension lets the file mix Objective-C and the C++ that OpenCV uses).

9. Press “Create Bridging Header” and Opencvproject-Bridging-Header.h will be created automatically.

10. Add the wrapper's import to Opencvproject-Bridging-Header.h.
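The gist with the bridging header's contents isn't reproduced above. It only needs to expose the wrapper to Swift, so a minimal version would be:

```objectivec
// Opencvproject-Bridging-Header.h
// Expose the Objective-C wrapper class to Swift code.
#import "OpencvWrapper.h"
```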

11. Inside the classes folder, do: File -> New -> File… -> PCH File.

12. Add the following to the new PCH File:
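The embedded gist isn't reproduced here. Based on the standard OpenCV-on-iOS setup, the PCH needs to import the OpenCV headers before any Apple headers; a sketch of what it might contain:

```objectivec
// PrefixHeader.pch
#ifndef PrefixHeader_pch
#define PrefixHeader_pch

#ifdef __cplusplus
// Import OpenCV before any Apple headers to avoid macro
// clashes (e.g. with MIN/MAX). Guarded so it only applies
// to Objective-C++ (.mm) files.
#import <opencv2/opencv.hpp>
#endif

#endif /* PrefixHeader_pch */
```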

13. Go to Build Settings -> Prefix Header and add the path of the PrefixHeader.pch file, e.g. $(SRCROOT)/face_detection/classes/PrefixHeader.pch.

Now we are done with the first step, and you should be able to build the project successfully! For more information and screenshots regarding this step, check this detailed article, which is the one I followed.


Camera frame extraction:

For this step, simply drag & drop FrameExtractor.swift into the classes folder.
Want more details? Go to this amazing article (from which I got FrameExtractor.swift) :) .

P.S. Do not forget to add Privacy - Camera Usage Description to Info.plist.
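FrameExtractor.swift itself isn't reproduced here; in essence, it wraps an AVCaptureSession and hands every captured frame to a delegate as a UIImage. A condensed sketch of that idea (the real file in the linked article is more complete, and the names here mirror it but are assumptions):

```swift
import AVFoundation
import CoreImage
import UIKit

protocol FrameExtractorDelegate: AnyObject {
    func captured(image: UIImage)
}

class FrameExtractor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    weak var delegate: FrameExtractorDelegate?
    private let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "frame.extractor.queue")
    private let context = CIContext()

    override init() {
        super.init()
        configureSession()
        session.startRunning()
    }

    private func configureSession() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    // Called by AVFoundation for every captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: buffer)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        let image = UIImage(cgImage: cgImage)
        // Hand the frame to the delegate on the main thread for UI work.
        DispatchQueue.main.async { self.delegate?.captured(image: image) }
    }
}
```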


Face detection:

Now, we are going to use Haar feature-based cascade classifiers in OpenCV to detect faces. (How exciting!)

  1. Create a new group (folder) and call it res.
  2. Drag & drop the following files into res:
     a. haarcascade_eye.xml
     b. haarcascade_frontalface_default.xml

Now the project structure should be similar to this:

Let’s add the missing parts and run the project:

  1. OpencvWrapper.h:
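The header's gist isn't reproduced above; it only needs to declare the detection method described below. A plausible minimal version (the exact method signature is an assumption):

```objectivec
// OpencvWrapper.h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface OpencvWrapper : NSObject

// Detects faces and eyes in the given image and returns a copy
// with circles drawn around them.
+ (UIImage *)detect:(UIImage *)source;

@end
```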

2. PrefixHeader.pch:

3. OpencvWrapper.mm:
 The main code is in the detect function. detect takes a UIImage as a parameter and returns a UIImage with circles drawn around the detected faces and eyes.
detect consists of three parts:
 1. Convert the UIImage to cv::Mat.
 2. Apply the classifiers and draw circles.
 3. Convert the result back to a UIImage.
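The gist with the wrapper's implementation isn't reproduced above. A sketch of what detect might look like, following the three parts just listed and the standard OpenCV cascade-classifier recipe (the repo's actual implementation may differ):

```objectivec
// OpencvWrapper.mm
#import "OpencvWrapper.h"
#import <opencv2/imgcodecs/ios.h>  // UIImageToMat / MatToUIImage

using namespace cv;

static CascadeClassifier faceCascade;
static CascadeClassifier eyeCascade;

@implementation OpencvWrapper

+ (void)initialize {
    // Load the Haar cascades from the app bundle (the res group).
    NSString *facePath = [[NSBundle mainBundle]
        pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
    NSString *eyePath = [[NSBundle mainBundle]
        pathForResource:@"haarcascade_eye" ofType:@"xml"];
    faceCascade.load(facePath.UTF8String);
    eyeCascade.load(eyePath.UTF8String);
}

+ (UIImage *)detect:(UIImage *)source {
    // 1. UIImage -> cv::Mat, plus a grayscale copy for the classifiers.
    Mat mat;
    UIImageToMat(source, mat);
    Mat gray;
    cvtColor(mat, gray, COLOR_RGBA2GRAY);
    equalizeHist(gray, gray);

    // 2. Run the classifiers and draw circles on the color image.
    std::vector<Rect> faces;
    faceCascade.detectMultiScale(gray, faces);
    for (const Rect &face : faces) {
        Point center(face.x + face.width / 2, face.y + face.height / 2);
        circle(mat, center, face.width / 2, Scalar(255, 0, 0, 255), 4);

        // Search for eyes only inside the detected face region.
        std::vector<Rect> eyes;
        eyeCascade.detectMultiScale(gray(face), eyes);
        for (const Rect &eye : eyes) {
            Point eyeCenter(face.x + eye.x + eye.width / 2,
                            face.y + eye.y + eye.height / 2);
            circle(mat, eyeCenter, eye.width / 2, Scalar(0, 255, 0, 255), 2);
        }
    }

    // 3. cv::Mat -> UIImage
    return MatToUIImage(mat);
}

@end
```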

4. Add a UIImageView component to your storyboard or nib file and connect it to the view controller.

5. Add a UIButton component and set its title to "flip camera".

6. ViewController.swift:
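The view controller's gist isn't reproduced above. Conceptually, it owns a FrameExtractor, receives each frame via the delegate callback defined in FrameExtractor.swift, runs it through the wrapper, and shows the result. A sketch under those assumptions (outlet, action, and flipCamera names are guesses):

```swift
import UIKit

class ViewController: UIViewController, FrameExtractorDelegate {
    @IBOutlet weak var imageView: UIImageView!
    private var frameExtractor: FrameExtractor!

    override func viewDidLoad() {
        super.viewDidLoad()
        frameExtractor = FrameExtractor()
        frameExtractor.delegate = self
    }

    // Called for every frame delivered by FrameExtractor.
    func captured(image: UIImage) {
        // Run face/eye detection and display the annotated frame.
        imageView.image = OpencvWrapper.detect(image)
    }

    @IBAction func flipCamera(_ sender: UIButton) {
        frameExtractor.flipCamera()  // assumed helper from the linked article
    }
}
```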

The moment of truth :D. Build and run!


Bye Bye:

You can find the whole code in this repo. The purpose of this article is a quick implementation of the algorithm, not digging into the details.

Haar feature-based cascade classifiers are not a state-of-the-art algorithm, but they work :)
Anyway, stay tuned, because more advanced topics are on the way :) .