Enrich Your App's Content with HMS ML Kit Image Classification Service

Yunus Emre Fırat · Huawei Developers
Dec 25, 2020 · 4 min read

In this article, you will get an overview of the HMS ML Kit Image Classification feature and its implementation. I will also explain why I came up with the idea of using the Image Classification feature, and what problem it solves in my demo application.

First, let's take a look at the HMS ML Kit Image Classification service:

  • The image classification service classifies elements in images into intuitive categories.
  • It supports both device-based and cloud-based recognition modes.
  • Device-based recognition refers to the process of running the detection algorithm model on the device. It supports more than 400 common image categories.
  • Cloud-based recognition refers to the process of calling an API on the cloud where the detection algorithm model is running. It supports 12,000 image categories.
  • In addition, this service allows you to customize image classification models.

By using the image classification service you can enrich your application's scenarios and make your app smarter. Let's say you have a shopping application and you want to let your users find a product they are looking for, or that product's category, by taking a picture. The image classification service can help you query related products thanks to its precise category results for the image. Likewise, if you have a photo gallery application, integrating the image classification service adds a new layer of intelligence, as images can be automatically sorted by category.

Image Classification is just one of the services of HMS ML Kit. If you want to explore the many other features of ML Kit and get further information about it, you can refer to the documentation below.

OK, now let's talk about integrating the Image Classification SDK and, while doing that, see how I used the image classification service within my application scenario.

Before starting the implementation, we need to create an app in AppGallery Connect, because we will need to add the agconnect-services.json file under the "app" folder of our project in order to access the image classification service.

Then we need to add the Image Classification SDK dependency (implementation 'com.huawei.hms:ml-computer-vision-classification:2.0.1.300') to the app-level Gradle file, and also apply the AGConnect plugin (apply plugin: 'com.huawei.agconnect'), as shown below.
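A minimal sketch of the app-level build.gradle after these two additions (only the lines relevant to this article are shown):

```groovy
// app-level build.gradle
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect' // AGConnect plugin, processes agconnect-services.json

dependencies {
    // HMS ML Kit image classification SDK (version quoted in this article)
    implementation 'com.huawei.hms:ml-computer-vision-classification:2.0.1.300'
}
```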

As the last configuration step for integrating HMS Core and adding the image classification dependency, add the Huawei Maven repository URL and the AGConnect dependency to the project-level Gradle file, as shown below.
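A minimal sketch of the project-level build.gradle (the agcp plugin version shown here is an assumption, indicative of the article's time frame):

```groovy
// project-level build.gradle
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' } // Huawei Maven repository
    }
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.4.1.300' // AGConnect plugin (assumed version)
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
```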

We are good to go. Let’s do some serious business!

When I was developing the demo application, the main purpose was to create an entertainment/gaming app demo. Users create challenges made of clues for a competitor to solve. By creating a specific clue about a specific object, the user builds a trail that redirects the competitor from one object to another until the challenge is completed.

The ML Kit Image Classification feature is used for this purpose: once the object's image is recognized, its category labels are stored as reference data for the clue being created.

First Part: Providing the "create a clue" feature with the static image detection function of the SDK

  • Getting classification labels for the scanned object via ML Kit's image classification analyzer, in order to describe the object.
  • Saving these labels so that they can be used later to identify the object.

  1. In order to process the image, we need to create an image classification analyzer. We can create the analyzer using the MLLocalClassificationAnalyzerSetting (for on-device recognition) or MLRemoteClassificationAnalyzerSetting (for on-cloud recognition) classes.
  2. Then we need to create an MLFrame object from an android.graphics.Bitmap.
  3. Finally, we call the asyncAnalyseFrame method, which does the magic and returns the classification results to us as a list.

Implementation of static image detection for on-device recognition:
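A minimal sketch of these three steps, assuming `bitmap` holds the photo of the clue object and `saveClueLabels` is a hypothetical helper that persists the labels as the clue's reference data:

```kotlin
// Step 1: create an on-device analyzer with a confidence threshold.
val setting = MLLocalClassificationAnalyzerSetting.Factory()
    .setMinAcceptablePossibility(0.8f) // ignore low-confidence results
    .create()
val analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting)

// Step 2: wrap the bitmap in an MLFrame.
val frame = MLFrame.fromBitmap(bitmap)

// Step 3: analyze asynchronously; the result is a list of MLImageClassification.
analyzer.asyncAnalyseFrame(frame)
    .addOnSuccessListener { classifications ->
        // Each MLImageClassification carries a category name and a possibility score.
        saveClueLabels(classifications.map { it.name }) // hypothetical helper
        analyzer.stop() // release detection resources
    }
    .addOnFailureListener { e ->
        Log.e("CreateClue", "Classification failed", e)
        analyzer.stop()
    }
```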

Second Part: Providing the clue detection feature with the camera stream detection function of the SDK

Compared with on-device recognition, on-cloud recognition is more accurate but less efficient; moreover, it supports only static image recognition, not recognition from camera streams. The clue detection feature is designed around a live camera stream, which is why I needed to use on-device recognition in this part of the application.

  • A LensEngine and a camera surface view are used to try to match the predefined labels for the object by scanning it through the live camera.
  • If the detected labels match the predefined ones, the clue is accepted as resolved.

  1. In order to process the stream, we need to create an image classification analyzer, which in this mode can be created only on the device.
  2. Create the ObjectAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this API to obtain the detection results and implement specific services.
  3. Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer created before.
  4. Create a CameraSourcePreview ViewGroup class and call the run method of the LensEngine to start the camera and read camera streams for detection. After the detection is complete, stop the analyzer to release detection resources. The sketches below illustrate these steps.
The activity below is opened when detecting a clue with the predefined category labels via the camera stream:
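A minimal sketch of this activity, assuming CameraSourcePreview is the custom ViewGroup from step 4; the view ID, the sample labels, and the onClueResolved callback are assumptions of this sketch:

```kotlin
class LiveImageDetectionActivity : AppCompatActivity() {

    private lateinit var analyzer: MLImageClassificationAnalyzer
    private var lensEngine: LensEngine? = null
    private lateinit var preview: CameraSourcePreview // custom ViewGroup wrapping a SurfaceView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_live_image_detection)
        preview = findViewById(R.id.camera_preview)

        // On-device analyzer only: camera streams are not supported on-cloud.
        analyzer = MLAnalyzerFactory.getInstance().localImageClassificationAnalyzer
        analyzer.setTransactor(
            ObjectAnalyzerTransactor(clueLabels = setOf("Cup", "Plant")) {
                runOnUiThread { onClueResolved() }
            }
        )

        // The LensEngine captures the camera stream and feeds frames to the analyzer.
        lensEngine = LensEngine.Creator(applicationContext, analyzer)
            .setLensType(LensEngine.BACK_LENS)
            .applyDisplayDimension(1440, 1080)
            .applyFps(30f)
            .enableAutomaticFocus(true)
            .create()
    }

    override fun onResume() {
        super.onResume()
        // The preview calls lensEngine.run(surfaceHolder) once its surface is ready.
        lensEngine?.let { preview.start(it) }
    }

    override fun onPause() {
        super.onPause()
        preview.stop()
    }

    override fun onDestroy() {
        super.onDestroy()
        lensEngine?.release()
        analyzer.stop() // release detection resources
    }

    private fun onClueResolved() {
        // Hypothetical: mark the clue as solved and move on to the next one.
    }
}
```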
Layout XML file of the LiveImageDetectionActivity:
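A minimal sketch of the layout, which simply hosts the full-screen camera preview (the package path of CameraSourcePreview is an assumption):

```xml
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Custom ViewGroup that displays the LensEngine camera stream -->
    <com.example.huntgame.camera.CameraSourcePreview
        android:id="@+id/camera_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>
```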
Class for processing detection results
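A minimal sketch of the transactor; the constructor parameters (the clue's saved labels and a match callback) are assumptions of this sketch:

```kotlin
class ObjectAnalyzerTransactor(
    private val clueLabels: Set<String>, // labels saved when the clue was created
    private val onMatch: () -> Unit      // fired when a detected label matches
) : MLAnalyzer.MLTransactor<MLImageClassification> {

    override fun transactResult(results: MLAnalyzer.Result<MLImageClassification>) {
        val items = results.analyseList // SparseArray of detections for this frame
        for (i in 0 until items.size()) {
            if (items.valueAt(i).name in clueLabels) {
                onMatch() // the clue is accepted as resolved
                break
            }
        }
    }

    override fun destroy() {
        // Release any resources held for detection.
    }
}
```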

In summary, I wanted to develop a kind of simplified treasure hunting game. When I tried to create reference data for a specific object and later detect that object using this reference data, I came across the HMS ML Kit Image Classification service, and up to a point it met my needs for that purpose.

In order to get further information about Image Classification, you can refer to the link below.
