👨🏼‍💻Cloud Hosting Use Case for Text Classification with HMS ML Kit

Meva Akkaya
Published in Huawei Developers · 4 min read · Apr 30, 2022

Cloud Hosting

Introduction

Hi! In this article, we discuss how to host your HMS ML Kit models in the cloud, with an example use for text classification. We will also cover converting your model files with the model converter and the value of post-training quantization.

Previously, we developed a Text Classification integration, one of the features of HMS ML Kit, along with a sample project. If you haven’t read it yet, please check the Text Classification with HMS ML Kit Custom Model article first:

In that project, we ran classification with a model stored locally in the app’s assets. Keeping the model locally, however, increases the app size considerably. So what can we do? If you host your model in the cloud instead, you can reduce the size of your app package.

How can we set up our model in Cloud Hosting?

First of all, instead of bundling it locally, we will upload the .ms model file that we created before to cloud hosting on the console side.

Console Side

1- Sign in to AppGallery Connect, then go to “My Projects” and choose your project.
2- Go to Project settings > Build > ML Kit.
3- Click the Custom ML tab.
4- Click the + icon.
5- You will see the information you need to fill in about your model.

Custom model — model info
  • The model name can contain only uppercase letters, lowercase letters, digits, underscores (_), and hyphens (-).

6- Here you will see several framework options: MindSpore, Caffe, TFLite, and ONNX.

Custom ML

What are these? They are all frameworks that provide machine learning capabilities. Select the type that matches your model file and upload it. Since our file is a MindSpore model, we will continue with that option.

  • By default, the final files are converted into MindSpore Lite files and uploaded to the cloud.

For the other formats (Caffe, TFLite, ONNX): you should convert the file to an .ms model with the model conversion tool in the Cloud Hosting section.
For more details, you can check the MindSpore Converter Tool.

After choosing your model type, you will see the Post-training Quantization value. This value plays an important role in optimizing models.

Post-training Quantization

Post-training Quantization, which helps to optimize algorithms and models for target devices, is a conversion technique that can reduce model size while improving CPU and hardware accelerator latency, with little degradation in model accuracy. You can quantize your already trained model using the converter.

Quantization is an optimization strategy that speeds up inference time in deep learning models, converting 32-bit floating-point numbers to the nearest 8-bit fixed-point numbers.

If your model is already quantized, continue with the Post-training Quantization option and its values. If not, you can upload your file directly and have it converted to .ms format.
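To make the conversion concrete, here is a minimal, self-contained sketch of the affine quantization arithmetic described above (an illustration only, not the MindSpore converter’s actual implementation): each 32-bit float is mapped to one of 256 integer levels via a scale and zero point, and dequantization recovers an approximation within one scale step.

```kotlin
// Affine post-training quantization of a float32 array to 8-bit levels.
fun quantize(weights: FloatArray): Triple<IntArray, Float, Int> {
    val min = weights.minOrNull()!!
    val max = weights.maxOrNull()!!
    // The scale maps the float range [min, max] onto 256 integer levels.
    val scale = (max - min) / 255f
    // The zero point is the integer level that represents float 0.
    val zeroPoint = Math.round(-min / scale)
    val quantized = IntArray(weights.size) { i ->
        (Math.round(weights[i] / scale) + zeroPoint).coerceIn(0, 255)
    }
    return Triple(quantized, scale, zeroPoint)
}

// Recover approximate float values from the quantized representation.
fun dequantize(q: IntArray, scale: Float, zeroPoint: Int): FloatArray =
    FloatArray(q.size) { i -> (q[i] - zeroPoint) * scale }
```

Each stored value now needs 8 bits instead of 32, which is where the roughly 4x size reduction comes from; the price is the small rounding error visible after dequantization.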

You can continue with the other steps in the same way.

7- After your model is uploaded, confirm and submit the information in the next step.

Confirm and Submit

8- You will see that your model is not yet listed; click “On the shelf” to release your model.

On the shelf

That’s all for the console side. Now you can download your model from the cloud in your code.

Code Side

In order to use your cloud-hosted model in the application, you must first download the model, and we will set a download policy for this. We download the model via the MLModelManager class by creating a remoteModel object.

  • Set the supported region: REGION_DR_CHINA, REGION_DR_AFILA, REGION_DR_EUROPE, REGION_DR_RUSSIA.
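A download call with a policy might look like the sketch below. The class and method names follow the HMS ML Kit custom-model documentation (MLCustomRemoteModel, MLModelDownloadStrategy, MLLocalModelManager); the model name "text_classifier" is a placeholder for the name you set in AppGallery Connect, and exact signatures may differ between SDK versions.

```kotlin
// Sketch: download a cloud-hosted custom model with a download policy.
// "text_classifier" must match the model name defined in AppGallery Connect.
val remoteModel = MLCustomRemoteModel.Factory("text_classifier").create()

// Download policy: Wi-Fi only, device charging, and the region
// where your model is hosted.
val strategy = MLModelDownloadStrategy.Factory()
    .needWifi()
    .needCharging()
    .setRegion(MLModelDownloadStrategy.REGION_DR_EUROPE)
    .create()

MLLocalModelManager.getInstance()
    .downloadModel(remoteModel, strategy)
    .addOnSuccessListener {
        // The model is now on the device and ready to use.
    }
    .addOnFailureListener { e ->
        // e.g. no network: handle the error or fall back to a local model.
    }
```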

Even though we load the model from the cloud in our app, we can keep the local asset file from the previous article as a fallback, in case the model cannot be downloaded due to connection problems.
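One way to sketch that fallback, again using names from the HMS custom-model documentation (isModelExist, MLCustomLocalModel, MLModelExecutorSettings) and assuming the .ms file from the previous article is still bundled in assets under a hypothetical name:

```kotlin
// Sketch: prefer the downloaded cloud model, fall back to the asset copy.
MLLocalModelManager.getInstance()
    .isModelExist(remoteModel)
    .addOnSuccessListener { exists ->
        val settings = if (exists) {
            MLModelExecutorSettings.Factory(remoteModel).create()
        } else {
            // Local fallback bundled in assets, as in the previous article.
            val localModel = MLCustomLocalModel.Factory("text_classifier")
                .setAssetPathFile("text_classifier.ms")
                .create()
            MLModelExecutorSettings.Factory(localModel).create()
        }
        val executor = MLModelExecutor.getInstance(settings)
        // Run classification with the executor as before.
    }
```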

Conclusion

As a result, if you choose to host your model on the cloud, you can reduce the size of your app package. However, the model’s functions cannot be used before the model has been downloaded for the first time. The choice is yours ✌️

Glad if it helped, thanks for reading!
I’ll be waiting for your comments and claps! 👏
