👨🏼‍💻 Creating a Motion Capture application using HMS 3D Modeling Kit and Kotlin

Ertug Sagman · Huawei Developers · Jun 7, 2022
From raw body movement input to 3D character

Introduction

Hi all! Today I will explain how to develop a live motion capture Android application. The aim of this project is to convert a live recording of you into a 2D line character. To achieve this, body skeleton joint and quaternion data will be obtained using the Motion Capture feature of Huawei's 3D Modeling Kit. After obtaining this data, we will use an OpenGL scene to draw a character that matches our movements. The Motion Capture service provides the outputs below:

  • Frame rate: greater than 30 fps on a phone with a mid-range or high-end chip.
  • Simultaneous output of the quaternions and 3D coordinates of 24 key skeleton points (as shown in the following figure), plus the translation parameter of the root joint.
Skeleton point outputs of Motion Capture service

Here is the structure of the joints and bones we can capture with Motion Capture. The quaternions, 3D coordinates, and translation parameter are located in a right-handed coordinate system whose root joint is manually specified as point 0. The quaternions and the 3D coordinates are relative to the root joint, while the translation parameter gives the absolute coordinates of the root joint in this coordinate system. Careful! The current version cannot detect strenuous actions or high-speed movements, especially poses in which the angle between the leg and the upper body differs greatly from that of the standing pose, such as the jump kick in martial arts or difficult yoga poses.

Now that we know what we can expect from the service, let's get deeper into the technical details.

First of all, after creating our project in Android Studio and in AppGallery Connect, we need to do some configuration in the build.gradle files. As there are three build.gradle variants that affect the integration, I have added a link here so you can check which path you need to follow. Once the Gradle work is completed, we can begin the actual development. Let's start by adding our permissions and requesting them in our MainActivity.
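For reference, the app-level dependencies look roughly like the snippet below. The artifact names and the version number are my assumptions from the kit's documentation at the time of writing, so always take the current coordinates from the official 3D Modeling Kit integration guide.

// App-level build.gradle. Verify artifact names and use the latest version from the official guide.
dependencies {
    // Motion Capture SDK of 3D Modeling Kit
    implementation 'com.huawei.hms:modeling3d-motion-capture:1.3.0.300'
    // On-device model package used by the Motion Capture service
    implementation 'com.huawei.hms:modeling3d-motion-capture-model:1.3.0.300'
}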

I used a small library to simplify and reduce the permission-request boilerplate:

implementation 'com.vmadalin:easypermissions-ktx:1.0.0'

The code below starts our RecordActivity, where we are going to open the camera and preview the OpenGL scene for our body movements. I added a simple button with a setOnClickListener in MainActivity to go to RecordActivity.
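Here is a minimal sketch of that MainActivity, assuming a button with the placeholder id btnRecord in the layout. The EasyPermissions calls follow the easypermissions-ktx API as documented; double-check them against the library version you use.

import android.Manifest
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity
import com.vmadalin.easypermissions.EasyPermissions

class MainActivity : AppCompatActivity() {

    companion object {
        private const val PERMISSION_REQUEST_CODE = 1001
        private val PERMISSIONS = arrayOf(
            Manifest.permission.CAMERA,
            Manifest.permission.WRITE_EXTERNAL_STORAGE
        )
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // btnRecord is a placeholder id; use whatever your layout defines.
        findViewById<Button>(R.id.btnRecord).setOnClickListener {
            if (EasyPermissions.hasPermissions(this, *PERMISSIONS)) {
                startActivity(Intent(this, RecordActivity::class.java))
            } else {
                EasyPermissions.requestPermissions(
                    this,
                    "Camera access is needed to capture your movements.",
                    PERMISSION_REQUEST_CODE,
                    *PERMISSIONS
                )
            }
        }
    }

    // Forward the result so the library can dispatch its callbacks if you use them.
    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        EasyPermissions.onRequestPermissionsResult(requestCode, permissions, grantResults, this)
    }
}

Once the permissions have been granted from the system dialog, the next tap on the button passes the check and opens RecordActivity.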

After moving to the RecordActivity, there are a few steps we need to complete in order to achieve our goal here:

1 — We need to prepare our camera source so that we can transfer our live recording data to the Motion Capture engine.

2 — Once we are able to capture images, we are going to define a base processor so that we can apply the necessary settings to our data and process it into our OpenGL scene.

3 — In order to draw onto the OpenGL scene and display our data, we are going to prepare a surface view.

  • In the meantime, we are going to define some utilities to help us throughout the whole process and keep the code a bit cleaner.

Don't let the steps intimidate you; they are mostly boilerplate code and you can freely import them as they are into your projects. For example, for the first step, we will implement a camera source and a camera source preview to be able to capture our image. As these classes are larger than the others, I will not include them here to keep the article clean; you can find them as 'CameraSource' and 'CameraSourcePreview' in the repository linked below. There is also an extra class used by these two, 'GraphicOverlay'. Don't forget to grab this class as well.

After you have successfully implemented the CameraSource and CameraSourcePreview classes, the next step is the HmsMotionProcessorBase implementation. The base class is as follows:
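The full class is in the repository; here is a minimal sketch of the idea behind it, assuming the HmsMotionImageProcessor interface shown just below and the FrameMetadata and GraphicOverlay helpers mentioned earlier. The important part is the throttling: only the latest frame is kept and only one frame is analysed at a time, so a slow analysis never causes frames to pile up.

import java.nio.ByteBuffer

// Sketch of the base processor: keep only the most recent frame and process
// frames one at a time. Subclasses run the actual Motion Capture analysis.
abstract class HmsMotionProcessorBase : HmsMotionImageProcessor {

    private var latestImage: ByteBuffer? = null
    private var latestMetaData: FrameMetadata? = null
    private var processingImage: ByteBuffer? = null
    private var processingMetaData: FrameMetadata? = null

    @Synchronized
    override fun process(data: ByteBuffer, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        latestImage = data
        latestMetaData = frameMetadata
        // Start analysing immediately if nothing else is being processed right now.
        if (processingImage == null && processingMetaData == null) {
            processLatestImage(graphicOverlay)
        }
    }

    @Synchronized
    private fun processLatestImage(graphicOverlay: GraphicOverlay) {
        processingImage = latestImage
        processingMetaData = latestMetaData
        latestImage = null
        latestMetaData = null
        val image = processingImage ?: return
        val metaData = processingMetaData ?: return
        detectInImage(image, metaData, graphicOverlay)
    }

    // Called when a frame is ready to be analysed; implementations must call
    // finishProcessing() once the analysis completes (successfully or not).
    protected abstract fun detectInImage(data: ByteBuffer, metadata: FrameMetadata, graphicOverlay: GraphicOverlay)

    @Synchronized
    protected fun finishProcessing(graphicOverlay: GraphicOverlay) {
        processingImage = null
        processingMetaData = null
        if (latestImage != null) {
            processLatestImage(graphicOverlay)
        }
    }
}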

Then we create a simple interface named HmsMotionImageProcessor to be used afterwards.
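Something along these lines is enough; the exact signature is my assumption, built around the FrameMetadata and GraphicOverlay helpers from the repository.

import java.nio.ByteBuffer

// Contract between the camera pipeline and the Motion Capture processing.
interface HmsMotionImageProcessor {

    // Receives each preview frame (NV21 bytes) together with its size and rotation metadata.
    fun process(data: ByteBuffer, frameMetadata: FrameMetadata, graphicOverlay: GraphicOverlay)

    // Releases the underlying Motion Capture engine.
    fun stop()
}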

Lastly, we should prepare a skeleton processor in order to turn the engine output into 'meaningful data' usable by the OpenGL scene.

In this LocalSkeletonProcessor, we implement our HmsMotionImageProcessor interface and also use a utility class named FilterUtils, which helps adjust the joint, quaternion, and bone data based on their coordinates.
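A trimmed-down sketch of what that can look like is below. The engine, setting, and frame classes come from the Motion Capture SDK; their names and the joint getters are written from memory of the official documentation, so treat them as assumptions and verify them against the SDK reference. FilterUtils and the coordinate normalization are left out for brevity, and updateJoints is a hypothetical method we will add to BoneGLSurfaceView in the next step.

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
// Motion Capture SDK classes; verify the package path against the SDK version you integrate.
import com.huawei.hms.motioncapturesdk.*
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer

// Sketch: runs Motion Capture on each camera frame and forwards the detected
// joints to the OpenGL renderer.
class LocalSkeletonProcessor(private val boneRenderer: BoneGLSurfaceView) : HmsMotionProcessorBase() {

    // Default engine settings; the analyze type (3D skeleton and/or quaternions)
    // can be configured on the Factory, see the kit documentation.
    private val engine: Modeling3dMotionCaptureEngine =
        Modeling3dMotionCaptureEngineFactory.getInstance()
            .getMotionCaptureEngine(Modeling3dMotionCaptureEngineSetting.Factory().create())

    override fun detectInImage(data: ByteBuffer, metadata: FrameMetadata, graphicOverlay: GraphicOverlay) {
        val frame = Modeling3dFrame.fromBitmap(nv21ToBitmap(data, metadata))
        engine.asyncAnalyseFrame(frame)
            .addOnSuccessListener { skeletons ->
                if (skeletons.isNotEmpty()) {
                    // Flatten the joints of the first detected person into x,y pairs.
                    // Getter names (joints / pointX / pointY) are assumptions; in a real
                    // project this is also where FilterUtils would smooth and scale the data.
                    val points = skeletons[0].joints
                        .flatMap { listOf(it.pointX, it.pointY) }
                        .toFloatArray()
                    boneRenderer.updateJoints(points)
                }
                finishProcessing(graphicOverlay)
            }
            .addOnFailureListener { finishProcessing(graphicOverlay) }
    }

    override fun stop() {
        engine.stop()
    }

    // Converts the NV21 preview bytes into a Bitmap the engine can consume.
    private fun nv21ToBitmap(data: ByteBuffer, metadata: FrameMetadata): Bitmap {
        data.rewind()
        val bytes = ByteArray(data.remaining()).also { data.get(it) }
        val yuv = YuvImage(bytes, ImageFormat.NV21, metadata.width, metadata.height, null)
        val out = ByteArrayOutputStream()
        yuv.compressToJpeg(Rect(0, 0, metadata.width, metadata.height), 90, out)
        val jpeg = out.toByteArray()
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.size)
    }
}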

In our RecordActivity, we must create our cameraSource together with the BoneGLSurfaceView class, which we are also going to prepare. This class implements GLSurfaceView.Renderer, an interface that provides the onSurfaceCreated, onSurfaceChanged, and onDrawFrame methods. We are going to define our shader in onSurfaceCreated, apply our settings in onSurfaceChanged, and draw our 2D skeleton from the data we obtain in onDrawFrame. To create it, you can follow the structure below:
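Here is a minimal sketch of that renderer. It only draws the joints as points; connecting them into bones with GL_LINES follows the same pattern, using the skeleton figure above as the index order. The updateJoints method is the hypothetical entry point fed by LocalSkeletonProcessor, and the vertexShader/fragmentShader strings and the ShaderUtils helper are defined right after.

import android.opengl.GLES20
import android.opengl.GLSurfaceView
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.FloatBuffer
import javax.microedition.khronos.egl.EGLConfig
import javax.microedition.khronos.opengles.GL10

// Sketch of the renderer: it receives flattened joint coordinates from the
// processor and draws them on the GL surface.
class BoneGLSurfaceView : GLSurfaceView.Renderer {

    private var program = 0
    private var positionHandle = 0

    @Volatile
    private var joints: FloatArray = FloatArray(0)

    // Called from LocalSkeletonProcessor with x,y pairs already mapped to [-1, 1].
    fun updateJoints(points: FloatArray) {
        joints = points
    }

    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        GLES20.glClearColor(0f, 0f, 0f, 1f)
        // vertexShader / fragmentShader strings and ShaderUtils are defined below.
        program = ShaderUtils.createProgram(vertexShader, fragmentShader)
        positionHandle = GLES20.glGetAttribLocation(program, "vPosition")
    }

    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        GLES20.glViewport(0, 0, width, height)
    }

    override fun onDrawFrame(gl: GL10?) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        val current = joints
        if (current.isEmpty()) return

        GLES20.glUseProgram(program)
        val buffer: FloatBuffer = ByteBuffer
            .allocateDirect(current.size * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(current)
        buffer.position(0)

        GLES20.glEnableVertexAttribArray(positionHandle)
        GLES20.glVertexAttribPointer(positionHandle, 2, GLES20.GL_FLOAT, false, 0, buffer)
        // Drawing joints as points here; bones would be drawn with GL_LINES.
        GLES20.glDrawArrays(GLES20.GL_POINTS, 0, current.size / 2)
        GLES20.glDisableVertexAttribArray(positionHandle)
    }
}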

We need to be careful with two variables in onSurfaceCreated, vertexShader and fragmentShader: since they carry our shader code as plain strings, it is easy to make a mistake and not notice it. We also use one of our utility classes, ShaderUtils, in this class, so we should define it now as well.
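As a sketch, assuming we draw plain white points on a black background, the shader strings and the helper can look like this; if the program silently renders nothing, a typo in these strings is the first place to look.

import android.opengl.GLES20

// Minimal vertex/fragment shaders for drawing white skeleton points.
const val vertexShader = """
    attribute vec4 vPosition;
    void main() {
        gl_Position = vPosition;
        gl_PointSize = 15.0;
    }
"""

const val fragmentShader = """
    precision mediump float;
    void main() {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
"""

// Sketch of ShaderUtils: compile both shaders and link them into a program.
object ShaderUtils {

    fun createProgram(vertexSource: String, fragmentSource: String): Int {
        val vertex = loadShader(GLES20.GL_VERTEX_SHADER, vertexSource)
        val fragment = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentSource)
        return GLES20.glCreateProgram().also { program ->
            GLES20.glAttachShader(program, vertex)
            GLES20.glAttachShader(program, fragment)
            GLES20.glLinkProgram(program)
        }
    }

    private fun loadShader(type: Int, source: String): Int {
        return GLES20.glCreateShader(type).also { shader ->
            GLES20.glShaderSource(shader, source)
            GLES20.glCompileShader(shader)
            // Checking GL_COMPILE_STATUS here is a good idea; omitted for brevity.
        }
    }
}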

Final step! We have configured everything and are ready to build up our service in our RecordActivity. Feel free to have a look at how I structured the activity to make it work.
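Below is a sketch of how the pieces can be wired together, assuming placeholder layout ids (camera_preview, graphic_overlay, bone_surface) and that the repository's CameraSource exposes a setter for the frame processor; adapt the method names to the classes you copied.

import android.opengl.GLSurfaceView
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Sketch of RecordActivity: camera frames go into LocalSkeletonProcessor, which
// pushes joint data to the BoneGLSurfaceView renderer.
class RecordActivity : AppCompatActivity() {

    private var cameraSource: CameraSource? = null
    private lateinit var preview: CameraSourcePreview
    private lateinit var graphicOverlay: GraphicOverlay
    private lateinit var boneRenderer: BoneGLSurfaceView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_record)   // layout ids here are placeholders

        preview = findViewById(R.id.camera_preview)
        graphicOverlay = findViewById(R.id.graphic_overlay)

        boneRenderer = BoneGLSurfaceView()
        findViewById<GLSurfaceView>(R.id.bone_surface).apply {
            setEGLContextClientVersion(2)
            setRenderer(boneRenderer)
        }

        // Constructor and setter names depend on the repository's CameraSource class.
        cameraSource = CameraSource(this, graphicOverlay).apply {
            setFrameProcessor(LocalSkeletonProcessor(boneRenderer))
        }
    }

    override fun onResume() {
        super.onResume()
        cameraSource?.let { preview.start(it, graphicOverlay) }
    }

    override fun onPause() {
        super.onPause()
        preview.stop()
    }

    override fun onDestroy() {
        super.onDestroy()
        cameraSource?.release()
    }
}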

Also, let's not forget the layout XML for our activity, where we need to place our custom views as well :)

Conclusion

Everything is complete and you should now be able to:

  • start up your device's camera,
  • obtain instant data from the HMS Motion Capture feature,
  • process this data into an OpenGL scene.

I hope this was an enjoyable adventure to follow all the way down here. If you have any questions, please feel free to contact me; I will be glad to provide answers :) Thanks!

References
