Object Detection & Tracking with HMS ML Kit

Oğuzhan Demirci · Huawei Developers · 4 min read · Jun 30, 2020

Before we start, to see where we are on our mind map and take a top-down approach to what we are going to analyze in this article, let's briefly recall a few definitions.

What is Artificial Intelligence?

Artificial Intelligence (AI) is the ability of a machine to perform tasks commonly associated with human intelligence. It is the study of making computers do tasks that, at present, humans do better.

What is Machine Learning?

Machine Learning (ML) is a branch of AI in which a computer learns on its own: an ML system isn't explicitly programmed to perform intelligent tasks. Put differently, AI is the broader goal of building systems that behave intelligently, while ML is the application of AI that gives a system the ability to learn and improve from experience.

HMS ML Kit and its capabilities

HMS ML Kit provides developers with diversified machine learning capabilities to use in their Android apps. The included services are shown in the table below.

HMS ML Kit Services

The object detection and tracking service can detect and track multiple objects in an image, so they can be located and classified in real time. A maximum of eight objects can be detected and tracked concurrently. The following object categories are supported: household products, fashion goods, food, places, plants, faces, and others.

Use Case of Object Detection

Today we are going to focus on Object Detection and Tracking and build a simple Android application in Android Studio that detects objects in static mode. Let's do it step by step.

1. If you haven’t registered as a Huawei Developer yet, sign up first. Here is the link.

2. Create a project in AppGallery Connect. You can follow the steps shown here.

3. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.

4. Integrate the ML Kit SDK into your project. Your app-level build.gradle will look like this:
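A minimal sketch of what the app-level build.gradle might contain, assuming the object detection artifacts from the mid-2020 ML Kit release; the artifact versions below are illustrative, so check the official ML Kit documentation for current ones:

```groovy
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'

android {
    // ... your usual compileSdkVersion / defaultConfig settings
}

dependencies {
    // ML Kit computer vision base SDK and on-device object detection model
    // (versions are illustrative)
    implementation 'com.huawei.hms:ml-computer-vision:1.0.3.300'
    implementation 'com.huawei.hms:ml-computer-vision-object-detection-model:1.0.3.300'
}
```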

and your project-level build.gradle like this:
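A sketch of the project-level build.gradle; the key additions are Huawei's Maven repository and the AGConnect classpath (plugin versions are illustrative):

```groovy
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }  // Huawei Maven repository
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.3'  // version illustrative
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'   // AGConnect plugin, version illustrative
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
```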

5. Create the layout first. There will be an ImageView on which we show our images and a floating action button to pick images from the device. Here is the sample:
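A minimal sketch of such a layout; the view IDs `imageView` and `fab` are assumptions used by the later steps:

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

    <com.google.android.material.floatingactionbutton.FloatingActionButton
        android:id="@+id/fab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_margin="16dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
```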

6. Now get images with an implicit intent, and receive the data as a bitmap in onActivityResult, because we are going to run our analysis on bitmaps.
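A sketch of this step in Java, assuming the `imageView` from the layout and an `analyze` method defined in the next steps; `pickImage` is a hypothetical helper wired to the floating action button:

```java
private static final int PICK_IMAGE_REQUEST = 1;  // arbitrary request code

// Hypothetical helper: fire an implicit intent to let the user pick an image
private void pickImage() {
    Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
    intent.setType("image/*");
    startActivityForResult(intent, PICK_IMAGE_REQUEST);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_IMAGE_REQUEST && resultCode == RESULT_OK && data != null) {
        try {
            // Decode the picked image into a Bitmap; the analysis works on bitmaps
            Bitmap bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
            imageView.setImageBitmap(bitmap);
            analyze(bitmap);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```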

7. At this point we have our bitmap. Before we start analyzing, we must first create our MLObjectAnalyzer. There are two analyzer types, TYPE_PICTURE and TYPE_VIDEO; here we use TYPE_PICTURE. We create our MLObjectAnalyzer as in the example below.
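A sketch of the analyzer setup, following the ML Kit factory pattern; `allowMultiResults` and `allowClassification` enable multi-object detection and classification:

```java
// Configure a static-picture analyzer that detects and classifies multiple objects
MLObjectAnalyzerSetting setting = new MLObjectAnalyzerSetting.Factory()
        .setAnalyzerType(MLObjectAnalyzerSetting.TYPE_PICTURE)  // static-image mode
        .allowMultiResults()      // detect more than one object per image
        .allowClassification()    // also classify each detected object
        .create();

MLObjectAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(setting);
```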

8. Here is our analyze function. It simply creates an MLFrame, which MLObjectAnalyzer analyzes asynchronously. If the task succeeds, we draw rectangles around the detected objects and write their types inside the drawItems method.
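A sketch of what this function might look like, assuming the `analyzer` created in the previous step:

```java
private void analyze(Bitmap bitmap) {
    // Wrap the bitmap in an MLFrame, the input type ML Kit analyzers expect
    MLFrame frame = MLFrame.fromBitmap(bitmap);

    // Analyze asynchronously; the result is a list of detected objects
    Task<List<MLObject>> task = analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(objects -> drawItems(objects, bitmap))
        .addOnFailureListener(e -> Log.e("ObjectDetection", "Analysis failed", e));
}
```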

9. Here is our drawItems method. At the beginning we copy our bitmap so that we have a mutable bitmap to draw on.
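A sketch of the drawing step; `typeToString` is a hypothetical helper that maps the `MLObject.TYPE_*` constants to readable labels:

```java
private void drawItems(List<MLObject> objects, Bitmap bitmap) {
    // Copy the bitmap so we have a mutable one to draw on
    Bitmap mutableBitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(mutableBitmap);

    Paint boxPaint = new Paint();
    boxPaint.setColor(Color.RED);
    boxPaint.setStyle(Paint.Style.STROKE);
    boxPaint.setStrokeWidth(4f);

    Paint textPaint = new Paint();
    textPaint.setColor(Color.RED);
    textPaint.setTextSize(36f);

    for (MLObject object : objects) {
        // Bounding rectangle of the detected object
        Rect border = object.getBorder();
        canvas.drawRect(border, boxPaint);
        // typeToString is a hypothetical label helper; keep the text inside the image
        canvas.drawText(typeToString(object.getTypeIdentity()),
                border.left, Math.max(border.top - 8, textPaint.getTextSize()), textPaint);
    }
    imageView.setImageBitmap(mutableBitmap);
}
```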

10. Well done, we're finished. Here is our MainActivity.
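A condensed skeleton of how the MainActivity might tie the pieces together; `stop()` is called to release analyzer resources when the activity is destroyed:

```java
public class MainActivity extends AppCompatActivity {

    private ImageView imageView;
    private MLObjectAnalyzer analyzer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        imageView = findViewById(R.id.imageView);

        // Create the static-picture analyzer as shown in step 7
        MLObjectAnalyzerSetting setting = new MLObjectAnalyzerSetting.Factory()
                .setAnalyzerType(MLObjectAnalyzerSetting.TYPE_PICTURE)
                .allowMultiResults()
                .allowClassification()
                .create();
        analyzer = MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(setting);

        // The floating action button triggers image picking (step 6)
        findViewById(R.id.fab).setOnClickListener(v -> pickImage());
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        try {
            analyzer.stop();  // release analyzer resources
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // pickImage(), onActivityResult(), analyze() and drawItems()
    // as shown in steps 6-9 above
}
```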

11. Let’s try it with some photos.

In this article we implemented an Android app that detects objects in static mode using HMS ML Kit. ML Kit provides many other services, and we will cover them one by one. If you have any questions, please ask through the link below. In the next part we are going to detect objects in a camera stream. See you then!
