Firebase ML Kit 101 : Face Detection

Hitanshu Dhawan
Published in AndroIDIOTS · Nov 12, 2018

Face Detection is the process of detecting faces in images.

Nowadays, all popular social apps such as Instagram, Snapchat, and Facebook use some sort of face detection technology. It helps them increase user engagement and improve the overall app experience.

ML Kit’s Face Detection API offers these features…

  • Detecting faces in an image.
  • Identifying key facial features like eyes, nose, and mouth, and expressions like happiness.
  • Getting the contours of detected faces and their facial features.

ML Kit’s Face Detection API is designed to work on the device itself, which makes it fast, accurate, and capable of detecting faces in real time.

Firebase ML Kit Series

In this series of articles, we will deep dive into the different APIs that ML Kit offers…

Let’s look into the ML Kit’s Face Detection API and how we can integrate it into our apps.

ML Kit’s Face Detection

The ML Kit’s Face Detection API provides the following key features.

  • Recognise and locate facial features
    Get the coordinates of the eyes, ears, cheeks, nose, and mouth of every face detected.
  • Get the contours of facial features
    Get the contours of detected faces and their eyes, eyebrows, lips, and nose.
image courtesy: https://firebase.google.com/docs/ml-kit/detect-faces
  • Recognise facial expressions
    Determine whether a person is smiling or has their eyes closed.
  • Process video frames in real time
    Face detection is performed on the device, and is fast enough to be used in real-time applications, such as video manipulation.

Note: Firebase ML Kit is in beta as of January ‘19.

Let’s Code!

Step 1 : Add Firebase to your app

Of course! You can add Firebase to your app by following the steps mentioned here.

Step 2 : Include the dependencies

You need to include the ML Kit dependencies in your app-level build.gradle file.

dependencies {
    // ...

    implementation 'com.google.firebase:firebase-ml-vision:19.0.2'
    implementation 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'
}

Step 2.5 : Specify the ML models (optional)

For on-device APIs, you can configure your app to automatically download the ML models after it is installed from the Play Store. Otherwise, the model will be downloaded the first time you run the on-device detector.

To enable this feature you need to specify your models in your app’s AndroidManifest.xml file.

<application ...>
    ...
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="face" />
    <!-- To use multiple models: android:value="face,model2" -->
</application>

Step 3 : Get! — the Image

ML Kit provides an easy way to detect faces from a variety of image types like Bitmap, media.Image, ByteBuffer, byte[], or a file on the device. You just need to create a FirebaseVisionImage object from any of these image types and pass it to the model.


In my sample app I’ve used byte[] and Bitmap to create FirebaseVisionImage objects.

byte[]

val metadata = FirebaseVisionImageMetadata.Builder()
    .setWidth(width)
    .setHeight(height)
    .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
    .setRotation(rotation)
    .build()

val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)

Bitmap

val image = FirebaseVisionImage.fromBitmap(bitmap)

To create FirebaseVisionImage object from other image types, please refer to the official documentation.
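For reference, here is a sketch of two other ways to create a FirebaseVisionImage, using factory methods that exist on the same class (the `mediaImage`, `rotation`, `context`, and `uri` values are assumed to come from your camera pipeline or file picker):

```kotlin
// From a media.Image (e.g. a frame from the camera2 API).
// `rotation` must be one of the FirebaseVisionImageMetadata.ROTATION_* constants.
val imageFromMedia = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)

// From a file on the device, identified by its Uri.
// This factory can throw IOException if the file can't be read.
val imageFromFile = FirebaseVisionImage.fromFilePath(context, uri)
```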

Step 4 : Set! — the Model

Now, it’s time to prepare our Face Detection model.

val detector = FirebaseVision.getInstance().visionFaceDetector

If you want to configure your face detection model according to your needs, you can do that with a FirebaseVisionFaceDetectorOptions object.

// face classification and landmark detection
val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
    .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
    .build()

// or, contour detection
val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
    .build()

val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)

Here’s a list of all the settings you can configure in your face detection model.

  • setPerformanceMode — FAST (default) or ACCURATE. Favour speed or accuracy when detecting faces.
  • setLandmarkMode — NO_LANDMARKS (default) or ALL_LANDMARKS. Whether to identify facial landmarks like eyes, ears, nose, cheeks, and mouth.
  • setContourMode — NO_CONTOURS (default) or ALL_CONTOURS. Whether to detect the contours of facial features.
  • setClassificationMode — NO_CLASSIFICATIONS (default) or ALL_CLASSIFICATIONS. Whether to classify faces into categories such as “smiling” and “eyes open”.
  • setMinFaceSize — The smallest desired face size, expressed as a fraction of the image width (default 0.1f).
  • enableTracking — Assigns an ID to each face, which can be used to track faces across video frames (disabled by default).
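A detector that combines several of these settings might look like this (a sketch — the specific values are illustrative, not required):

```kotlin
val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
    .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
    .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
    .setMinFaceSize(0.15f)  // ignore faces smaller than 15% of the image width
    .enableTracking()       // assign ids to track faces across video frames
    .build()

val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
```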

Step 5 : Gooo!

Finally, we can pass our image to the model for Face Detection.

detector.detectInImage(image)
    .addOnSuccessListener {
        // Task completed successfully
    }
    .addOnFailureListener {
        // Task failed with an exception
    }

Step 6 : Extract the information

Voilà! That’s it!
If the face detection was successful, the success listener will receive a list of FirebaseVisionFace objects. Each FirebaseVisionFace object represents a face that was detected and contains all the information related to it.

You can extract all this information, such as the face’s bounding box, landmark positions, and classification probabilities, from each FirebaseVisionFace object.
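As a sketch of what that extraction can look like (assuming landmark mode, classification mode, and tracking were enabled in Step 4):

```kotlin
detector.detectInImage(image)
    .addOnSuccessListener { faces ->
        for (face in faces) {
            // Bounding box of the detected face within the image
            val bounds = face.boundingBox

            // Head rotation (Euler angles, in degrees)
            val rotY = face.headEulerAngleY
            val rotZ = face.headEulerAngleZ

            // Landmarks are null if they weren't detected
            val leftEye = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EYE)?.position

            // Classification probabilities (require ALL_CLASSIFICATIONS)
            if (face.smilingProbability != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
                val smileProbability = face.smilingProbability
            }

            // Tracking id (requires enableTracking())
            if (face.trackingId != FirebaseVisionFace.INVALID_ID) {
                val trackingId = face.trackingId
            }
        }
    }
```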

Have a Look!

This is what you can achieve with ML Kit’s Face Detection API.

The full source code with the other ML Kit APIs can be found here!

Thanks for reading! Share this article if you found it useful.
Please do Clap 👏 to show some love :)

Let’s become friends on LinkedIn, GitHub, Facebook, Twitter.



Senior Software Engineer @ Urban Company | Google Certified Android Developer