Comparison Between Huawei ML Kit Text Recognition and Firebase ML Kit Text Recognition

Efnan Akkuş
Published in Huawei Developers · Sep 7, 2020

In this article, we will compare the usage of Huawei ML Kit Text Recognition and Firebase ML Kit Text Recognition, and we will also create sample Android applications to understand how they work. Let's get started.

Huawei ML Kit Text Recognition

About The Service

HUAWEI ML Kit allows your apps to easily leverage Huawei’s long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei’s technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.

Text Recognition

The text recognition service can extract text from images of receipts, business cards, and documents. This service is widely used in office, education, transit, and other apps. For example, you can use this service in a translation app to extract text in a photo and translate the text, improving user experience.

This service can run on the cloud or device, but the supported languages differ in the two scenarios. On-device APIs can recognize text in Simplified Chinese, Japanese, Korean, and Latin-based languages (refer to Latin Script Supported by On-device Text Recognition). When running on the cloud, the service can recognize text in languages such as Simplified Chinese, English, Spanish, Portuguese, Italian, German, French, Russian, Japanese, Korean, Polish, Finnish, Norwegian, Swedish, Danish, Turkish, Thai, Arabic, Hindi, and Indonesian.

Configure your project on AppGallery Connect

Registering a Huawei ID

You need to register a Huawei ID to use the plugin. If you don’t have one, follow the instructions here.

Preparations for Integrating HUAWEI HMS Core

First of all, you need to integrate Huawei Mobile Services (HMS) Core into your application. I will not go into the details of the integration here, but you can use this tutorial as a step-by-step guide.

1. Integrating the Text Recognition SDK

You need to add the base SDK and one or more required language model packages to your app-level build.gradle file, as shown below.
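Below is a minimal sketch of those dependencies, assuming the Latin-based language model package is enough for your use case; the artifact versions shown were current when this article was written, so check the ML Kit release notes for the latest ones.

```groovy
dependencies {
    // Base SDK of the text recognition service
    implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
    // Latin-based language model package (add other model packages if you need more languages)
    implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
}
```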

2. Automatically Updating the Machine Learning Model

To use the on-device text recognition service, add the following statements to the AndroidManifest.xml file.
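This meta-data entry asks HMS Core to download and update the OCR model automatically after the app is installed from AppGallery; it goes inside the application element of the manifest.

```xml
<application>
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="ocr" />
</application>
```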

3. Our RelativeLayout will contain an ImageView, a TextView, and two Buttons.
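A minimal sketch of such a layout; the view IDs (imageView, textView, btnTakePhoto, btnAnalyze) are my own naming for this example.

```xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="300dp" />

    <TextView
        android:id="@+id/textView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/imageView" />

    <Button
        android:id="@+id/btnTakePhoto"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="Take Photo" />

    <Button
        android:id="@+id/btnAnalyze"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentEnd="true"
        android:text="Analyze" />
</RelativeLayout>
```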

4. Text Recognition from Images on the Device

Take a photo with a camera app

The Android way of delegating actions to other applications is to invoke an Intent that describes what you want done. This process involves three pieces: the Intent itself, a call to start the external Activity, and some code to handle the image data when focus returns to your activity. We will handle the result in onActivityResult().
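A minimal sketch of this flow, assuming it lives in MainActivity and that imageView is the ImageView from the layout above; REQUEST_IMAGE_CAPTURE is just an arbitrary request code.

```java
static final int REQUEST_IMAGE_CAPTURE = 1;

// Ask the default camera app to take a photo for us
private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // The camera app returns a small preview Bitmap under the "data" extra
        Bitmap bitmap = (Bitmap) data.getExtras().get("data");
        imageView.setImageBitmap(bitmap);
    }
}
```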

5. Create the text analyzer MLTextAnalyzer to recognize text in images. You can set MLLocalTextSetting to specify languages that can be recognized. If you do not set the languages, only Latin-based languages can be recognized by default.
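A minimal sketch, assuming English is the only language we need; OCR_DETECT_MODE is the mode for analyzing static images.

```java
// Optional settings: restrict recognition to English (a Latin-based language)
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
        .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
        .setLanguage("en")
        .create();

// Create the on-device text analyzer with these settings
MLTextAnalyzer analyzer = MLAnalyzerFactory.getInstance()
        .getLocalTextAnalyzer(setting);
```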

6. Pass the MLFrame object to the asyncAnalyseFrame method for text recognition.
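A minimal sketch, assuming bitmap is the photo returned by the camera and textView is the TextView from the layout above.

```java
// Wrap the Bitmap in an MLFrame and analyze it asynchronously
MLFrame frame = MLFrame.fromBitmap(bitmap);

Task<MLText> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
    @Override
    public void onSuccess(MLText mlText) {
        // Recognition succeeded: show the whole recognized text
        textView.setText(mlText.getStringValue());
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failed
        textView.setText(e.getMessage());
    }
});
```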

7. After the recognition is complete, stop the analyzer to release recognition resources.
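For example, the analyzer can be released in onDestroy(); stop() may throw an IOException, so it is wrapped in a try/catch.

```java
@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        try {
            // Release the recognition resources held by the analyzer
            analyzer.stop();
        } catch (IOException e) {
            // The analyzer could not be released cleanly; log if needed
        }
    }
}
```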

8. The snippets above come together in the sample app's MainActivity.

9. Here’s the result.

Firebase ML Kit Text Recognition

You can use ML Kit to recognize text in images. ML Kit has both a general-purpose API suitable for recognizing text in images, such as the text of a street sign, and an API optimized for recognizing the text of documents. The general-purpose API has both on-device and cloud-based models.

Before you begin

1. If you haven’t already, add Firebase to your Android project.

2. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
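For reference, the relevant parts of the project-level build.gradle look like this.

```groovy
buildscript {
    repositories {
        google()  // Google's Maven repository
    }
}

allprojects {
    repositories {
        google()  // Google's Maven repository
    }
}
```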

3. Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):
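A minimal sketch of the dependency block; the version shown was current when this article was written, so check the Firebase release notes for the latest one.

```groovy
dependencies {
    // Firebase ML Kit Vision library (includes the on-device text recognizer)
    implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
}
```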

4. Add the following declaration to your app's AndroidManifest.xml file.
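This optional but recommended meta-data entry makes the OCR model download to the device as soon as the app is installed, instead of waiting for the first recognition request.

```xml
<application>
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="ocr" />
</application>
```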

5. As in the Huawei example, our RelativeLayout will contain an ImageView, a TextView, and two Buttons.

6. Take a photo with a camera app

Here’s a function that invokes an intent to capture a photo.
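A minimal sketch, using the same arbitrary REQUEST_IMAGE_CAPTURE request code as in the Huawei example.

```java
static final int REQUEST_IMAGE_CAPTURE = 1;

// Ask the default camera app to take a photo for us
private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}
```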

7. The Android Camera application encodes the photo in the return Intent delivered to onActivityResult() as a small Bitmap in the extras, under the key "data". The following code retrieves this image and displays it in an ImageView.
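A minimal sketch, assuming imageView is the ImageView from the layout.

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // The camera app returns a small preview Bitmap under the "data" extra
        Bitmap imageBitmap = (Bitmap) data.getExtras().get("data");
        imageView.setImageBitmap(imageBitmap);
    }
}
```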

8. Create a FirebaseVisionImage object from the Bitmap object.
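Assuming imageBitmap is the Bitmap retrieved above, this is a single call.

```java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(imageBitmap);
```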

9. To display the recognized text from the image, pass the FirebaseVisionImage to the text recognizer.
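A minimal sketch using the on-device recognizer; textView is assumed to be the TextView from the layout.

```java
FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
        .getOnDeviceTextRecognizer();

detector.processImage(image)
        .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
            @Override
            public void onSuccess(FirebaseVisionText firebaseVisionText) {
                // Show the full recognized text in the TextView
                textView.setText(firebaseVisionText.getText());
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Recognition failed
                textView.setText(e.getMessage());
            }
        });
```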

10. All of this comes together in the Firebase sample app's MainActivity.

11. Here’s the result.
