How to Integrate HUAWEI ML Kit’s Hand Keypoint Detection Capability

Irene, Huawei Developers
Oct 19, 2020

Introduction

In the previous post, we looked at how to use HUAWEI ML Kit’s skeleton detection capability to detect points such as the head, neck, shoulders, knees, and ankles. In addition to skeleton detection, ML Kit also provides a hand keypoint detection capability, which can locate 21 hand keypoints, such as fingertips, joints, and the wrist.

Application Scenarios

Hand keypoint detection is useful in a huge range of situations. For example, short video apps can generate cute and funny special effects based on hand keypoints, making their videos more entertaining.

Or, if smart home devices are integrated with hand keypoint detection, users could control them remotely with customized gestures, doing things like activating a robot vacuum cleaner while they’re out.

Hand Keypoint Detection Development

Now, we’re going to see how to quickly integrate ML Kit’s hand keypoint detection feature. Let’s take video stream detection as an example.

1. Preparations

You can find detailed information about the preparations you need to make in the Development Process section on HUAWEI Developers.

Here, we’ll just look at the most important procedures.

1.1 Configure the Maven Repository Address in the Project-Level build.gradle File

buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

1.2 Add SDK Dependencies to the App-Level build.gradle File

dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
    // Import the hand keypoint detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}

1.3 Add Configurations to the File Header

apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'

1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update

<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="handkeypoint" />

1.5 Apply for Camera Permission and Local File Reading Permission

<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
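Both of these are dangerous permissions on Android 6.0 and later, so they also need to be requested at runtime before detection starts. The article does not include this step, but a minimal sketch inside your Activity might look like the following (the request code value is arbitrary, and the snippet assumes the androidx.core ActivityCompat/ContextCompat classes):

// Hypothetical runtime permission request; adjust to your own permission flow.
private static final int PERMISSION_REQUEST_CODE = 1;

private void requestRuntimePermissions() {
    String[] permissions = {
            Manifest.permission.CAMERA,
            Manifest.permission.READ_EXTERNAL_STORAGE
    };
    for (String permission : permissions) {
        if (ContextCompat.checkSelfPermission(this, permission)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, permissions, PERMISSION_REQUEST_CODE);
            return;
        }
    }
}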

2. Code Development

2.1 Create a Hand Keypoint Analyzer

MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
        // MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
        // MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        // Set the maximum number of hand regions that can be detected in an image. A maximum of 10 hand regions can be detected by default.
        .setMaxHandResults(1)
        .create();
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);

2.2 Create the HandKeypointTransactor Class for Processing Detection Results

This class implements the MLAnalyzer.MLTransactor<T> API, and its transactResult method obtains the detection results so you can implement your own service logic. In addition to the coordinates of each hand keypoint, the detection results include confidence values for the palm and for each keypoint. Palms and keypoints that are incorrectly detected can be filtered out based on these confidence values, and you can set the threshold according to how much misrecognition your application can tolerate.

public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
        SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
        // Determine detection result processing as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }

    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
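As an illustration of the confidence filtering mentioned above (not part of the original sample), a helper like the one below could be called from transactResult. It assumes the getHandKeypoints() and getScore() getters described in the SDK documentation, and the 0.5f threshold is only an example value to tune for your own misrecognition tolerance:

// Hypothetical helper: keep only keypoints whose confidence exceeds a threshold.
private static final float SCORE_THRESHOLD = 0.5f;

private List<MLHandKeypoint> filterKeypoints(MLHandKeypoints hand) {
    List<MLHandKeypoint> reliable = new ArrayList<>();
    for (MLHandKeypoint keypoint : hand.getHandKeypoints()) {
        if (keypoint.getScore() >= SCORE_THRESHOLD) {
            reliable.add(keypoint);
        }
    }
    return reliable;
}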

2.3 Set the Detection Result Processor to Bind the Analyzer to the Result Processor

analyzer.setTransactor(new HandKeypointTransactor());

2.4 Create an Instance of the LensEngine Class

The LensEngine class is provided by the HMS Core ML SDK to capture dynamic camera streams and pass them to the analyzer. The camera display size should be set to a value between 320 x 320 px and 1920 x 1920 px.

LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();

2.5 Call the run Method to Start the Camera and Read Camera Streams for Detection

// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
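Since the camera stream can only be bound once the surface actually exists, a common pattern (an assumption on my part, not something the article prescribes) is to start and release the engine from a SurfaceHolder.Callback:

// Sketch: start the LensEngine only after the surface is created.
mSurfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            lensEngine.run(holder);
        } catch (IOException e) {
            lensEngine.release();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // No-op for this sketch.
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        lensEngine.release();
    }
});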

2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete

if (analyzer != null) {
    analyzer.stop();
}
if (lensEngine != null) {
    lensEngine.release();
}
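In a typical Activity these calls fit naturally into onDestroy (or wherever you tear down the camera preview). A minimal sketch, assuming analyzer and lensEngine are fields of the Activity:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release detection resources together with the Activity.
    if (analyzer != null) {
        analyzer.stop();
    }
    if (lensEngine != null) {
        lensEngine.release();
    }
}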

Demo Effect

And that’s it! We can now see hand keypoints appear when making different gestures. Remember that you can expand this capability if you need to.
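To actually see the keypoints on screen, the detection results need to be drawn over the camera preview. The article leaves this to the demo project, but a minimal hypothetical overlay could look like the sketch below; it assumes the getPointX()/getPointY() getters from the SDK and omits the mapping from analyzed-frame coordinates to view coordinates:

// Hypothetical overlay view that draws each detected keypoint as a small circle.
public class HandOverlayView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private List<MLHandKeypoint> keypoints = new ArrayList<>();

    public HandOverlayView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(Color.GREEN);
        paint.setStyle(Paint.Style.FILL);
    }

    public void setKeypoints(List<MLHandKeypoint> keypoints) {
        this.keypoints = keypoints;
        postInvalidate(); // Trigger a redraw on the UI thread.
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        for (MLHandKeypoint point : keypoints) {
            canvas.drawCircle(point.getPointX(), point.getPointY(), 12f, paint);
        }
    }
}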

GitHub source code

For more details, you can go to:
- Official website: https://developer.huawei.com/consumer/en/hms
- Documentation page: https://developer.huawei.com/consumer/en/doc/development
- Reddit, to join our developer discussion: https://www.reddit.com/r/HMSCore/
- GitHub: https://github.com/HMS-Core
- Stack Overflow: https://stackoverflow.com/questions/tagged/huawei-mobile-services
