How to Implement a 3D Modeling Kit in your Flutter Projects

Ertug Sagman · Published in Huawei Developers · Jun 23, 2023 · 6 min read

Introduction

Hi all! Today I will be introducing the Flutter plugin of Huawei’s 3D Modeling Kit. This guide walks you through the steps to integrate this service into your Flutter applications.

Before we dive into the technical details, let’s see what this service is. The HMS 3D Modeling Kit Flutter Plugin:

  • Supports various types of objects and materials to meet service needs in different scenarios.
  • Requires only a mobile device (like a smartphone or tablet) with an RGB camera.
  • Generates a 3D object model or PBR texture maps using images that can be easily collected.

So for such an intriguing outcome, the requirements are surprisingly few:

  • We can use both Huawei phones and tablets and also non-Huawei devices. Huawei devices require EMUI 3.1 or later, while non-Huawei devices require Android 5.0 or later.
  • Currently, 3D Modeling Kit offers each developer a limited number of API calls: a maximum of 10,000 calls per day and 200,000 calls per month.

3D Modeling Kit provides two capabilities in its Flutter plugin, material generation and 3D object reconstruction, to help create 3D content more efficiently and at a lower cost. In native development, 3D Modeling Kit also has a third capability, motion capture, but sadly this feature was left out of the cross-platform support.

  • Material generation: Integrate this capability into your app to enable your users to convert images, with just one click, into physically based rendering (PBR) texture maps (diffuse, normal, specular, and roughness maps). These texture maps are supported by mainstream rendering engines and deliver lighting and shading effects that closely resemble the real world. This capability can be used to produce 3D content in industries such as gaming, film & TV, and e-commerce. For example, a user can quickly create realistic wooden floors and tables for the indoor scenes of a game, with just one click, based on input images of real-world wood with different colors and textures.

Specifications of input images:

API: Asynchronous API

Resolution (Unit: px): 1024 x 1024 to 4096 x 4096 (It is recommended that the width and height of each image be the same.)

Number of Images: 1 to 5

Total File Size: No more than 100 MB

Format: JPG, PNG, BMP

Material Type: Concrete, marble, rock, gravel, brick, gypsum, clay, metal, wood, bark, leather, fabric, paint, plastic, and composite material

Texture Map Type: Diffuse map, normal map, roughness map, specular map

Specifications of output texture maps:

API: Asynchronous API, Synchronous API

Resolution (Unit: px): 1024 x 1024 when the input images are at least 1024 x 1024 but smaller than 2048 x 2048; 2048 x 2048 when the input images are between 2048 x 2048 and 4096 x 4096. The synchronous API always outputs 1024 x 1024.

Number: 4

Format: JPG

Texture Map Type: Diffuse map, normal map, roughness map, specular map

  • 3D object reconstruction: Integrate this capability into your app for image data collection and upload, model download, as well as 3D object preview, enabling your users to construct 3D object models from images even on mobile phones without hardware such as RGB-D or light detection and ranging (LiDAR) sensors. This capability is widely used in fields where image-based modeling is required. For example, an e-commerce app can use this capability to showcase products in real 3D with 360° rotation, attracting more users to buy the products.

Specifications of input images:

Resolution (Unit: px): 1280 x 720 to 4096 x 3072 (All images must have the same resolution.)

Number of Images: 20 to 200 (recommended: 50 to 200)

Total File Size: No more than 800 MB

Format: JPG, JPEG
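
Since both capabilities reject out-of-spec input on the cloud side, it is worth validating the collected images on the device before uploading. The sketch below is plain Dart with no plugin dependency; InputSpec and validateImages are my own illustrative names, and the resolution checks are omitted because they would require decoding each image:

```dart
import 'dart:io';

/// Input-image constraints from the tables above (illustrative helper,
/// not part of the plugin API).
class InputSpec {
  const InputSpec(
      this.minImages, this.maxImages, this.maxTotalBytes, this.formats);

  final int minImages;
  final int maxImages;
  final int maxTotalBytes;
  final Set<String> formats;
}

// Material generation: 1 to 5 images, at most 100 MB, JPG/PNG/BMP.
const materialSpec = InputSpec(1, 5, 100 * 1024 * 1024, {'jpg', 'png', 'bmp'});
// 3D object reconstruction: 20 to 200 images, at most 800 MB, JPG/JPEG.
const reconstructSpec = InputSpec(20, 200, 800 * 1024 * 1024, {'jpg', 'jpeg'});

/// Returns an error message, or null when the image set satisfies the spec.
String? validateImages(List<File> images, InputSpec spec) {
  if (images.length < spec.minImages || images.length > spec.maxImages) {
    return 'Expected ${spec.minImages}-${spec.maxImages} images, '
        'got ${images.length}.';
  }
  var totalBytes = 0;
  for (final image in images) {
    final ext = image.path.split('.').last.toLowerCase();
    if (!spec.formats.contains(ext)) {
      return 'Unsupported format: ${image.path}';
    }
    totalBytes += image.lengthSync();
  }
  if (totalBytes > spec.maxTotalBytes) {
    return 'Total size exceeds ${spec.maxTotalBytes ~/ (1024 * 1024)} MB.';
  }
  return null;
}
```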

So now that we know what the service provides, we can have a look at how to build with it. The chart below summarizes the development process.

Service Development Process

To configure the app information, we should complete the following steps:

1 — Registering as a Developer

2 — Making Gradle changes

3 — Adding Obfuscation scripts

4 — Implementing the Flutter package of the service (a pubspec sketch follows this list)
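
For step 4, the plugin goes into pubspec.yaml like any other package. The entry below is a sketch: huawei_modeling3d is the plugin’s package name, and the version placeholder should be replaced with the latest release listed on pub.dev.

```yaml
dependencies:
  flutter:
    sdk: flutter
  # HMS 3D Modeling Kit Flutter plugin; pin the latest version from pub.dev.
  huawei_modeling3d: ^x.y.z
```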

After completing these steps, we can begin to implement our service by adding our permissions.

Take note! Both capabilities require the same permissions. However, the process of collecting images is up to the developer, and depending on the scenario extra permissions may be needed; for example, camera access is required if the app takes the pictures itself.
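
For reference, here is a hedged example of the permission declarations in android/app/src/main/AndroidManifest.xml. The network entries cover the kit’s cloud calls; the storage and camera entries depend on how your app collects and stores images, so treat this as a starting point rather than a definitive list:

```xml
<!-- Required for calling the 3D Modeling Kit cloud services. -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Reading collected images / saving results on older Android versions. -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Only if the app captures the input images with the camera itself. -->
<uses-permission android:name="android.permission.CAMERA" />
```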

Material Generation

After adding the permissions, we can begin developing with the first capability. The steps are listed below, followed by a consolidated code sketch.

1 — Set an access token or use the API key in agconnect-services.json during app initialization for your app authentication.

2 — Create a material generation engine and configurator, and initialize the engine.

3 — Create callbacks to process image uploading events.

4 — Upload the collected images to the cloud.

5 — Query the progress of an on-cloud material generation task.

6 — Create callbacks to process preview events. Then call the preview API to preview the generated texture maps.

7 — Create callbacks to process the download result of generated texture maps.

8 — Download the generated texture maps.

9 — Call the synchronous API to obtain the generated texture maps in real time.
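
To see how these steps fit together, here is a hedged, consolidated sketch of the flow in Dart. The class and method names (Modeling3dTextureEngine, Modeling3dTextureSetting, the listener constructors, and so on) mirror the native SDK and are assumptions about the plugin’s Dart surface, so verify every signature against the plugin’s API reference before use:

```dart
import 'package:huawei_modeling3d/huawei_modeling3d.dart';

// Hedged sketch of steps 1-9; all names below are assumed from the
// native SDK and must be checked against the plugin's API reference.
Future<void> generateMaterial(String imageFolder, String savePath) async {
  // 1 - With the API key in agconnect-services.json, authentication
  //     usually needs no extra code; otherwise set an access token first.

  // 2 - Create the engine and configurator, then initialize the task.
  final engine = Modeling3dTextureEngine();
  final setting = Modeling3dTextureSetting();
  final initResult = await engine.initTask(setting);
  final taskId = initResult.taskId;

  // 3 - Callbacks for upload events; the bodies are intentionally
  //     left empty so you can plug in your own handling logic.
  engine.setTextureUploadListener(Modeling3dTextureUploadListener(
    onUploadProgress: (taskId, progress) {},
    onResult: (taskId, result) {},
    onError: (taskId, errorCode, message) {},
  ));

  // 4 - Upload the folder of collected images to the cloud.
  await engine.asyncUploadFile(taskId, imageFolder);

  // 5 - Query the progress of the on-cloud task (assumed utility class).
  // final queryResult = await Modeling3dTextureTaskUtils.queryTask(taskId);

  // 6 - Optionally create preview callbacks and call the preview API.
  // engine.previewTexture(taskId, previewListener);

  // 7/8 - Callbacks for the download result, then download the maps.
  engine.setTextureDownloadListener(Modeling3dTextureDownloadListener(
    onDownloadProgress: (taskId, progress) {},
    onResult: (taskId, result) {},
    onError: (taskId, errorCode, message) {},
  ));
  await engine.asyncDownloadTexture(taskId, savePath);

  // 9 - Alternatively, the synchronous API returns 1024 x 1024 maps in
  //     real time, skipping the upload/query/download round trip.
  // await engine.syncGenerateTexture(imageFolder, savePath, setting);
}
```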

So these are the abilities provided by the Material Generation capability of 3D Modeling Kit. As you may have noticed in the sketch above, some of the callback bodies have nothing inside them: for example, after you add the callbacks for downloading the material, you have to fill in your own part to handle the result. I kept those methods bare so that you can complete them with your own scenarios and logic.

Object Reconstruction

Now let’s move on to the Object Reconstruction capability. It offers the same set of abilities as Material Generation (upload, preview, download, and so on), with only small changes. As before, a consolidated sketch follows the steps.

1 — Set an access token or use the API key in agconnect-services.json during app initialization for your app authentication.

2 — Obtain a 3D object reconstruction engine instance.

3 — Create upload listener callbacks to process the image upload result.

4 — Use the configurator to initialize the task, set the upload listener on the engine, and upload the collected images.

5 — Query the status of the 3D object reconstruction task.

6 — Create a preview listener callback and call the preview API to preview the generated 3D model.

7 — Create download listener callbacks to process the model file download result.

8 — Pass the download listener to the engine to download the model file.

9 — (Optional) Call deleteTask to delete the 3D object reconstruction task.

10 — (Optional) Call setTaskRestrictStatus to set the restriction status of the 3D object reconstruction task.

11 — (Optional) Call queryTaskRestrictStatus to query the restriction status of the 3D object reconstruction task.
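
The steps map onto the plugin in much the same way as before. Below is a hedged sketch; again, the class and method names (Modeling3dReconstructEngine, Modeling3dReconstructUploadListener, and so on) mirror the native SDK and should be verified against the plugin’s API reference:

```dart
import 'package:huawei_modeling3d/huawei_modeling3d.dart';

// Hedged sketch of steps 1-11; as above, the names mirror the native SDK
// and must be verified against the plugin's API reference.
Future<void> reconstructObject(String imageFolder, String savePath) async {
  // 2 - Obtain the reconstruction engine and initialize the task.
  final engine = Modeling3dReconstructEngine();
  final setting = Modeling3dReconstructSetting();
  final initResult = await engine.initTask(setting);
  final taskId = initResult.taskId;

  // 3/4 - Upload listener, then upload the collected images.
  engine.setReconstructUploadListener(Modeling3dReconstructUploadListener(
    onUploadProgress: (taskId, progress) {},
    onResult: (taskId, result) {},
    onError: (taskId, errorCode, message) {},
  ));
  await engine.uploadFile(taskId, imageFolder);

  // 5 - Query the task status while the model is generated on the cloud.
  // final status = await Modeling3dReconstructTaskUtils.queryTask(taskId);

  // 6 - Preview the generated model (see the note below the steps).
  // engine.previewModel(taskId, previewListener);

  // 7/8 - Download listener, then download the model file.
  engine.setReconstructDownloadListener(Modeling3dReconstructDownloadListener(
    onDownloadProgress: (taskId, progress) {},
    onResult: (taskId, result) {},
    onError: (taskId, errorCode, message) {},
  ));
  await engine.downloadModel(taskId, savePath);

  // 9-11 - Optional task management (assumed utility class).
  // await Modeling3dReconstructTaskUtils.deleteTask(taskId);
  // await Modeling3dReconstructTaskUtils.setTaskRestrictStatus(taskId, 1);
  // await Modeling3dReconstructTaskUtils.queryTaskRestrictStatus(taskId);
}
```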

And this concludes the integration of the Object Reconstruction capability. Some of the methods, such as preview, are not fully exampled here because your object preview logic may differ, and I didn’t want to tie this integration to a specific preview approach. If you don’t have an example at hand, the easiest options are HMS Scene Kit on your Android devices, or a WebGL/OpenGL-based integration for coverage on all devices, as sketched below.
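
As one illustration of the WebGL route, the sketch below renders a model with Google’s model-viewer web component inside a WebView, using the webview_flutter package. It assumes the reconstructed model is available in glTF/GLB format at a URL the WebView can reach; the page class and model URL are hypothetical:

```dart
import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';

/// Hypothetical preview page: shows a glTF/GLB model via the
/// <model-viewer> web component rendered in a WebView.
class ModelPreviewPage extends StatefulWidget {
  const ModelPreviewPage({super.key, required this.modelUrl});

  final String modelUrl;

  @override
  State<ModelPreviewPage> createState() => _ModelPreviewPageState();
}

class _ModelPreviewPageState extends State<ModelPreviewPage> {
  late final WebViewController _controller;

  @override
  void initState() {
    super.initState();
    _controller = WebViewController()
      ..setJavaScriptMode(JavaScriptMode.unrestricted)
      ..loadHtmlString('''
<!DOCTYPE html>
<html>
<head>
  <script type="module"
    src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
</head>
<body style="margin:0">
  <model-viewer src="${widget.modelUrl}" camera-controls auto-rotate
    style="width:100vw;height:100vh"></model-viewer>
</body>
</html>
''');
  }

  @override
  Widget build(BuildContext context) =>
      Scaffold(body: WebViewWidget(controller: _controller));
}
```

Navigate to this page with the downloaded model’s URL once the reconstruction task completes; on Huawei devices, Scene Kit remains the more native option.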

In conclusion…

Thanks for reading this far! We have studied the integration steps for the Material Generation and Object Reconstruction capabilities of the HMS 3D Modeling Kit’s Flutter plugin. If you have any questions or suggestions, please feel free to ask me :)
