Build a Face Detection App with Huawei ML Kit

Oğuzhan Demirci · Published in Huawei Developers · 5 min read · Aug 13, 2020

Hi all,

In the era of powerful mobile devices, we store thousands of photos, make video calls, shop, and manage our bank accounts in the palm of our hands. The cameras integrated into our phones let us take photos and videos or join video calls. But it would be a waste to use these cameras, which we carry with us all day, only for raw photos and videos. They can do much more.

Face detection services are used by applications across many industries, mainly for security and entertainment purposes. For example, a taxi app can use it to identify a customer, a smart home app can recognize guests' faces and announce to the host who is ringing the doorbell, an entertainment app can draw a moustache on detected faces, and a driver-safety app can detect whether a driver's eyes are open and warn the driver if they are closing.

Considering how many different areas face detection serves and how important these tasks are, it is surprisingly easy to build a face detection app with the help of Huawei ML Kit. As it is an on-device capability that works on all Android devices with ARM architecture, it is completely free, and faster and more secure than cloud-based services. The face detection service can detect the shapes and features of your user's face, including their facial expression, age, gender, and what they are wearing.

With the face detection service you can detect up to 855 face contour points to locate face coordinates, including the face contour, eyebrows, eyes, nose, mouth, and ears, and identify the pitch, yaw, and roll angles of a face. You can detect seven facial features: the possibility that the left eye is open, the possibility that the right eye is open, the possibility of wearing glasses, the possibility of wearing a hat, the possibility of having a beard, gender, and age. In addition, you can detect facial expressions, namely smiling, neutral, anger, disgust, fear, sadness, and surprise.

Let’s start to build our demo application step by step from scratch!

1. Firstly, let's create our project in Android Studio. Select the Empty Activity option, then follow this guide to create and sign the project in AppGallery Connect.

2. Secondly, in HUAWEI AppGallery Connect, go to Develop > Manage APIs and make sure ML Kit is activated.

3. Now that we have integrated Huawei Mobile Services (HMS) into our project, let's follow the documentation on developer.huawei.com and find the packages to add. On the website click Developer > HMS Core > AI > ML Kit. There you will find introductory information about the services, references, and SDKs to download. Under the ML Kit tab follow Android > Getting Started > Integrating HMS Core SDK > Adding Build Dependencies > Integrating the Face Detection SDK. To explore the full capability later, I added all three model packages shown there; you can add only the base SDK or select packages according to your needs. After the integration your app-level build.gradle file will look like this.
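The original screenshot is not reproduced here, so the following is a sketch of an app-level build.gradle after the integration described above; the version numbers are assumptions based on the HMS ML Kit releases of that period, and the artifact names follow Huawei's published face detection packages.

```gradle
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
// AppGallery Connect plugin, required for the agconnect-services.json configuration
apply plugin: 'com.huawei.agconnect'

android {
    // ... your usual android { } configuration ...
}

dependencies {
    // Base face detection SDK
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    // Face contour and key point model package
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    // Facial expression model package
    implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
    // Facial feature model package
    implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
```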

And your project-level build.gradle file will look like this.
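Again as a sketch (the Gradle and plugin versions are assumptions), the key additions at the project level are Huawei's Maven repository and the AppGallery Connect classpath:

```gradle
buildscript {
    repositories {
        google()
        jcenter()
        // Huawei Maven repository hosting the HMS artifacts
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1'
        classpath 'org.jetbrains.kotlin:kotlin-gradle-plugin:1.3.72'
        // AppGallery Connect Gradle plugin
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
```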

Don’t forget to add the following meta-data tags in your AndroidManifest.xml. This is for automatic update of the machine learning model.

4. Now we can choose between detecting faces on a static image or on a camera stream. Let's go with the camera stream for this example. Firstly, let's create our analyzer. Its type is MLFaceAnalyzer, and it is responsible for analyzing the detected faces. Here is a sample implementation. We could also create an MLFaceAnalyzer with default settings to keep it simple.
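A minimal sketch of the analyzer setup, based on the MLFaceAnalyzerSetting builder from the HMS SDK (the exact option combination is an assumption; pick the types your app needs):

```kotlin
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.face.MLFaceAnalyzer
import com.huawei.hms.mlsdk.face.MLFaceAnalyzerSetting

// Configure the analyzer: report facial features, key points and contours,
// and track faces across consecutive camera frames.
private val analyzer: MLFaceAnalyzer by lazy {
    val setting = MLFaceAnalyzerSetting.Factory()
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
        .setKeyPointType(MLFaceAnalyzerSetting.TYPE_KEYPOINTS)
        .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
        .setTracingAllowed(true)
        .create()
    MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting)
}

// For default settings, MLAnalyzerFactory.getInstance().faceAnalyzer is enough.
```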

5. Create a simple layout with two SurfaceViews, one on top of the other: one for the camera frames and one for our overlay, on which we will later draw some shapes. Here is a sample layout.
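A sketch of such a layout (the view ids are assumptions carried through the later snippets):

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Camera preview frames are rendered here -->
    <SurfaceView
        android:id="@+id/surfaceViewCamera"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- Transparent overlay stacked on top, used for drawing face shapes -->
    <SurfaceView
        android:id="@+id/surfaceViewOverlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</FrameLayout>
```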

6. We should prepare our views. We need two SurfaceHolders in our application. We are going to make surfaceHolderOverlay transparent, because we want to see the camera frames underneath. Later we are going to add a callback to surfaceHolderCamera to know when it is created, changed, and destroyed. Let's create them.
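A sketch of this preparation inside the activity, assuming the view ids from the sample layout:

```kotlin
import android.graphics.PixelFormat
import android.view.SurfaceHolder
import android.view.SurfaceView

private lateinit var surfaceHolderCamera: SurfaceHolder
private lateinit var surfaceHolderOverlay: SurfaceHolder

private fun prepareViews() {
    surfaceHolderCamera = findViewById<SurfaceView>(R.id.surfaceViewCamera).holder
    surfaceHolderOverlay = findViewById<SurfaceView>(R.id.surfaceViewOverlay).holder
    // Transparent overlay lets the camera frames underneath show through
    surfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
    surfaceHolderCamera.addCallback(surfaceHolderCallback)
}

// The callback bodies are filled in in the next steps
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
    override fun surfaceCreated(holder: SurfaceHolder) {}
    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
    override fun surfaceDestroyed(holder: SurfaceHolder) {}
}
```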

7. Now we can create our LensEngine, a handy class that manages camera frames for us. You can configure the LensEngine with different settings; here is a simple way to create it. As you can see in the example, the width and height passed to the LensEngine are swapped according to the orientation. We create our LensEngine inside the surfaceChanged method of our SurfaceHolder callback and release it inside surfaceDestroyed. The LensEngine needs a SurfaceHolder or a SurfaceTexture to run on. Here is an example of creating and running it.
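A sketch using the LensEngine.Creator builder from the HMS SDK (the frame rate and lens choice are assumptions; `analyzer` is the MLFaceAnalyzer from step 4):

```kotlin
import android.content.res.Configuration
import android.view.SurfaceHolder
import com.huawei.hms.mlsdk.common.LensEngine

private var lensEngine: LensEngine? = null

private fun createLensEngine(width: Int, height: Int) {
    val portrait = resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT
    lensEngine = LensEngine.Creator(applicationContext, analyzer)
        .setLensType(LensEngine.FRONT_LENS)
        // The camera sensor is rotated in portrait, so width and height are swapped
        .applyDisplayDimension(
            if (portrait) height else width,
            if (portrait) width else height
        )
        .applyFps(20f)
        .enableAutomaticFocus(true)
        .create()
}

// Inside the SurfaceHolder.Callback from step 6:
// override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
//     createLensEngine(width, height)
//     lensEngine?.run(holder)    // start streaming frames into the analyzer
// }
// override fun surfaceDestroyed(holder: SurfaceHolder) {
//     lensEngine?.release()
// }
```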

8. We also need somewhere to receive the detected results and interact with them. For this purpose we create our FaceAnalyzerTransactor class (you can name it as you wish); it should implement the MLAnalyzer.MLTransactor<MLFace> interface. We are going to set an overlay of type SurfaceHolder, obtain a canvas from this overlay, and draw some shapes on that canvas. The data about each detected face arrives in the transactResult method. Here is a sample implementation of the complete FaceAnalyzerTransactor class.
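A minimal sketch of such a class; it only draws each face's bounding rectangle, while the original drew richer shapes (the paint settings are assumptions):

```kotlin
import android.graphics.Color
import android.graphics.Paint
import android.graphics.PorterDuff
import android.view.SurfaceHolder
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.face.MLFace

class FaceAnalyzerTransactor : MLAnalyzer.MLTransactor<MLFace> {

    private var overlay: SurfaceHolder? = null
    private val paint = Paint().apply {
        color = Color.GREEN
        style = Paint.Style.STROKE
        strokeWidth = 4f
    }

    fun setOverlay(surfaceHolder: SurfaceHolder) {
        overlay = surfaceHolder
    }

    // Called for every analyzed camera frame with the detected faces
    override fun transactResult(result: MLAnalyzer.Result<MLFace>) {
        val holder = overlay ?: return
        val canvas = holder.lockCanvas() ?: return
        // Clear the shapes drawn for the previous frame
        canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
        val faces = result.analyseList
        for (i in 0 until faces.size()) {
            val face = faces.valueAt(i)
            canvas.drawRect(face.border, paint)
        }
        holder.unlockCanvasAndPost(canvas)
    }

    override fun destroy() {}
}
```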

9. Create a FaceAnalyzerTransactor instance in MainActivity and use it as shown below.
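A sketch of wiring the transactor into the activity, assuming the `analyzer` and `prepareViews()` helpers from the earlier steps:

```kotlin
class MainActivity : AppCompatActivity() {

    private val transactor = FaceAnalyzerTransactor()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        // Route every analyzed frame's results to our transactor
        analyzer.setTransactor(transactor)
        prepareViews()
    }
}
```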

Also, don’t forget to set the overlay of our transactor. We can do this inside surfaceChanged method like this.

10. We are almost done! Don't forget to ask our users for permissions. We need the CAMERA and WRITE_EXTERNAL_STORAGE permissions; the latter is used for automatically updating the machine learning model. Add these permissions to your AndroidManifest.xml and request them from the user at runtime. Here is a simple example.
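The manifest declarations, followed by a minimal runtime-request sketch inside the activity (the request code is an arbitrary assumption):

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat

private val requiredPermissions = arrayOf(
    Manifest.permission.CAMERA,
    Manifest.permission.WRITE_EXTERNAL_STORAGE
)

private fun hasPermissions(): Boolean = requiredPermissions.all {
    ActivityCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}

private fun requestPermissions() {
    if (!hasPermissions()) {
        // Results arrive in onRequestPermissionsResult with request code 1
        ActivityCompat.requestPermissions(this, requiredPermissions, 1)
    }
}
```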

11. Well done! We have finished all the steps and created our project. Now we can test it. Here are some examples.

12. We have created a simple FaceApp that detects faces, facial features, and emotions. You can build countless kinds of face detection apps; it is up to your imagination. ML Kit empowers your apps with the power of AI. If you have any questions, please ask through the link below. You can also find this project on GitHub.

Happy coding!
