Load Your Custom Model into Projects | Cloud Hosting & Local Integration

Mustafa Sürücü
Published in Huawei Developers
5 min read · Jan 26, 2021

Hi everyone,

Our custom model journey with HMS ML Kit continues with this post. In my last article, we examined the relationship between MindSpore and HMS ML Kit, discovered what MindSpore actually does, and finally produced our own model using a sample dataset via HMS Toolkit. If you have not read the previous article yet, I strongly suggest starting with the link below.

In this article, we will cover other features and topics offered within the scope of HMS custom models. Let’s assume that you have just trained your model in the way explained before and obtained your .ms (MindSpore) model, or, as a second option, that you are actively using a model that is available in another format such as TFLite or Caffe.

The Cloud Hosting feature of HMS ML Kit enables model conversion through MindSpore and supports multiple types of models. When you upload your model to cloud hosting, MindSpore converts it to the .ms format and stores it on the cloud. MindSpore, TensorFlow Lite, Caffe, and ONNX are the input formats that are currently supported.

Cloud hosting can reduce the size of your application: you download the model package in code and use its functions. To upload your model file, click the project that you created and go to Project Settings > Build > ML Kit > Custom ML.

1. Please set the data storage location from the Service Access Site and click the “Add” icon to upload a model.

2. Please enter your Model Name and Requirement Description and click “Next Step”.

3. Select the file type and upload your model.

4. Please click “Submit” after the file is uploaded. To release a model, click the “On the Shelf” button.

5. After your model is released, you can download it in code.

To download your model from the cloud, you should first set a download policy and create a remote model object from the MLCustomRemoteModel class. After that, the MLLocalModelManager class is used to download the model with the help of a download listener.
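A minimal sketch of this download flow is shown below. The model name “my_custom_model” is a placeholder for the name you released in AppGallery Connect, and the class and method names follow the HMS ML Kit SDK documentation, so please verify them against the SDK version you use.

```java
import android.util.Log;

import com.huawei.hms.mlsdk.custom.MLCustomRemoteModel;
import com.huawei.hms.mlsdk.model.download.MLLocalModelManager;
import com.huawei.hms.mlsdk.model.download.MLModelDownloadListener;
import com.huawei.hms.mlsdk.model.download.MLModelDownloadStrategy;

public class RemoteModelLoader {

    // Placeholder: use the model name you released in AppGallery Connect.
    private static final String MODEL_NAME = "my_custom_model";

    public void downloadRemoteModel() {
        // Download policy: only download the model over Wi-Fi.
        MLModelDownloadStrategy strategy = new MLModelDownloadStrategy.Factory()
                .needWifi()
                .create();

        // Remote model object that identifies the model hosted on the cloud.
        MLCustomRemoteModel remoteModel = new MLCustomRemoteModel.Factory(MODEL_NAME).create();

        // Listener that reports the downloaded size and the total size of the model file.
        MLModelDownloadListener listener = (alreadyDownLength, totalLength) ->
                Log.d("ModelDownload", alreadyDownLength + " / " + totalLength + " bytes");

        MLLocalModelManager.getInstance()
                .downloadModel(remoteModel, strategy, listener)
                .addOnSuccessListener(aVoid -> Log.d("ModelDownload", "Model downloaded"))
                .addOnFailureListener(e -> Log.e("ModelDownload", "Download failed", e));
    }
}
```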

The listener will report the downloaded size and the total size of the model file. As described above, you can store your model in the cloud and download it into your project. However, it is also possible to integrate your model directly into your project instead of storing it in and downloading it from the cloud.

For local integration, we can save our model in the Assets directory of the project or we can choose a custom directory.
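For illustration, a local model object can be created either from an asset file or from a full file path. The sketch below assumes a model file named model.ms and a placeholder model name; check the exact factory methods in the SDK reference.

```java
import com.huawei.hms.mlsdk.custom.MLCustomLocalModel;

// Model placed in the assets directory (e.g. app/src/main/assets/model.ms).
MLCustomLocalModel assetModel = new MLCustomLocalModel.Factory("my_custom_model")
        .setAssetPathFile("model.ms")
        .create();

// Or a model stored in a custom directory on the device.
MLCustomLocalModel pathModel = new MLCustomLocalModel.Factory("my_custom_model")
        .setLocalFullPathFile("/sdcard/models/model.ms")
        .create();
```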

Please add the following block to the app-level (module) build.gradle file to guarantee that Gradle does not compress the model file.
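The block below shows a typical way to do this, assuming your model file uses the .ms extension; it goes inside the android scope.

```groovy
android {
    // Prevent the model file in assets from being compressed during packaging.
    aaptOptions {
        noCompress "ms"
    }
}
```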

Now we know how to load the model file either from the cloud or through local integration. In our demo I will load the model from the cloud, but we will first check whether the model has been downloaded to ensure its existence. If it has not been downloaded, we will fall back to local integration.

assets folder
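Combining the two approaches, a minimal check-and-fallback sketch could look like the following. The model and file names are placeholders; the isModelExist call follows the MLLocalModelManager API described in the HMS ML Kit documentation.

```java
import com.huawei.hms.mlsdk.custom.MLCustomLocalModel;
import com.huawei.hms.mlsdk.custom.MLCustomRemoteModel;
import com.huawei.hms.mlsdk.model.download.MLLocalModelManager;

public class ModelProvider {

    public void loadModel() {
        MLCustomRemoteModel remoteModel = new MLCustomRemoteModel.Factory("my_custom_model").create();

        // Check whether the remote model has already been downloaded to the device.
        MLLocalModelManager.getInstance()
                .isModelExist(remoteModel)
                .addOnSuccessListener(isDownloaded -> {
                    if (isDownloaded) {
                        // Use the model downloaded from the cloud.
                        useRemoteModel(remoteModel);
                    } else {
                        // Fall back to the model bundled in the assets directory.
                        MLCustomLocalModel localModel = new MLCustomLocalModel.Factory("my_custom_model")
                                .setAssetPathFile("model.ms")
                                .create();
                        useLocalModel(localModel);
                    }
                });
    }

    private void useRemoteModel(MLCustomRemoteModel model) {
        // Create a model executor with the remote model (covered in the next article).
    }

    private void useLocalModel(MLCustomLocalModel model) {
        // Create a model executor with the local model (covered in the next article).
    }
}
```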

We have worked with MindSpore Lite models so far, so if you have a model in a format such as TFLite or ONNX, you should convert it to a .ms model via the model conversion tool in the cloud hosting section. The converted model file can be loaded as described in the previous steps.

When we select a file type other than MindSpore in the Cloud Hosting section, we encounter the concept of “Post-Training Quantization”. The performance of a model may vary according to the hardware restrictions of the target devices. Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. It helps optimize algorithms and models for the target devices.

If your model is not quantized, you can directly upload your model file to convert it to the .ms format and store it on the cloud. However, if your model is quantized, you should select the post-training quantization option and fill in the parameter values that were used during quantization. After entering the values, you can upload and use your model as usual.

These parameters are the values that were used when quantizing the model. You can examine their explanations in the MindSpore conversion tool in the image below.

If you want to remove a released model, please click the “Unshelf” button.

Note: There are restrictions on the models that can be converted through cloud hosting; you can find the list of supported models below.

In this post, we have covered model conversion via cloud hosting and loading local and remote models into our projects. In the next article, I will explain how to use the Model Inference function, and we will see how to send a request to our model and use the results it returns.

Thank you for reading!
