Android: Face Detection from ML Kit
ML Kit is a mobile SDK that brings Google’s machine learning expertise to Android and iOS apps in a powerful yet easy-to-use package.
In this article we will learn how to use Face Detection from ML Kit on Android.
ML Kit has other capabilities as well; if you want to know the details, visit this link: https://firebase.google.com/docs/ml-kit/
Capabilities
- Recognize and locate facial features: Get the coordinates of the eyes, ears, cheeks, nose, and mouth of every face detected.
- Recognize facial expressions: Determine whether a person is smiling or has their eyes closed.
- Track faces across video frames: Get an identifier for each individual person’s face that is detected.
- Process video frames in real time: Face detection is performed on the device, and is fast enough to be used in real-time applications, such as video manipulation.
What will we learn?
We will learn how to use capabilities #1 and #2. Here we go….
Getting started
1. Make sure you have integrated Firebase into your project. If it is not done yet, please follow Firebase Setup.
2. Add the dependencies to your app-level build.gradle.
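The dependency block could look like the sketch below; `firebase-ml-vision` is the ML Kit vision artifact, and the version number here is an assumption, so check the Firebase release notes for the latest one.

```groovy
dependencies {
    // ML Kit vision APIs (includes face detection);
    // the version shown is an assumption — use the latest available
    implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
}
```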
3. Add the following declaration to AndroidManifest.xml
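A sketch of that declaration, placed inside the `<application>` element of your AndroidManifest.xml:

```xml
<!-- Ask Play Services to download the face model at install time -->
<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="face" />
```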
By configuring your app this way, you enable it to automatically download the ML model to the device as soon as your app is installed from the Play Store.
You can skip this, but then the model will be downloaded the first time you run the detector, and requests you make before the download has completed will produce no results.
Go to the code
1. Before detecting faces, you can change any of the face detector’s default settings; specify those settings with a FirebaseVisionFaceDetectorOptions object.
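For example, here is a minimal sketch of an options object; `ACCURATE`, `ALL_LANDMARKS`, and `ALL_CLASSIFICATIONS` are modes from the firebase-ml-vision API, and you should pick the values your app actually needs:

```kotlin
// Sketch: prefer accuracy over speed, detect all facial landmarks,
// and classify expressions (smiling, eyes open)
val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
    .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
    .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
    .build()
```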
2. Set the image: create a FirebaseVisionImage object from either a Bitmap, a media.Image, a ByteBuffer, a byte array, or a file on the device. Here’s how to create a FirebaseVisionImage from a Bitmap.
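A minimal sketch, assuming you already have a `bitmap` in memory (for example decoded from a resource or captured from the camera):

```kotlin
// Wrap an in-memory Bitmap so the detector can consume it
val image = FirebaseVisionImage.fromBitmap(bitmap)
```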
For other ways to create the image, you can go to this link.
3. Get an instance of FirebaseVisionFaceDetector and pass the image to the detector.
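A sketch of this step: obtain the detector (passing the options object from step 1, if you created one) and run detection asynchronously:

```kotlin
val detector = FirebaseVision.getInstance()
    .getVisionFaceDetector(options) // or getVisionFaceDetector() for defaults

detector.detectInImage(image)
    .addOnSuccessListener { faces ->
        // faces is a List<FirebaseVisionFace>, one entry per detected face
    }
    .addOnFailureListener { e ->
        // Detection failed, e.g. the model has not finished downloading yet
    }
```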
4. Process the result. On success, you can process the result to get the details of the image (the result will be a list, because the detector finds all faces in the image).
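Inside the success listener you could iterate over the detected faces like this (a sketch using a few of the properties exposed by FirebaseVisionFace):

```kotlin
for (face in faces) {
    val bounds = face.boundingBox        // bounding rectangle of the face
    val rotY = face.headEulerAngleY      // head rotated rotY degrees to the right
    val rotZ = face.headEulerAngleZ      // head tilted rotZ degrees sideways
    val leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR)
    val trackingId = face.trackingId     // only meaningful if tracking is enabled
}
```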
Look at all the vals inside the code; there is a lot of information we can get from the Face Detector. Those are only examples, and you can get more than that (smile probability, right/left eye opened probability, etc.). Here’s the example result.
5. Finish. Now we are done; you can use this information to complete your app. I have also created an example, so if you have time you can take a look at my GitHub.
Thanks for reading! Clap and share if you liked this….