HUAWEI AR Engine - 2

Face Mesh

Kadirtas
Huawei Developers
11 min read · Oct 16, 2020


Introduction & Description

Hi everyone,

In this blog, I will explain the Facial Expression Tracking feature of HUAWEI AR Engine as thoroughly as I can by developing a demo application. In addition, if you want to learn about the Body Tracking feature offered by HUAWEI AR Engine and the comparison I made with its competitors, I recommend reading the first article of this series.

This feature of HUAWEI AR Engine provides meticulous control over a virtual character's facial expressions by calculating facial poses and the parameter values corresponding to expressions in real time. It tracks and obtains facial image information, comprehends facial expressions in real time, and converts them into expression parameters, thereby enabling the expressions of virtual characters to be controlled. In addition, AR Engine supports the recognition of 64 types of facial expressions covering the eyes, eyebrows, eyeballs, mouth, and tongue.

The Face Mesh feature of HUAWEI AR Engine also calculates the pose and mesh model data of a face in real time; the mesh model data changes to account for facial movements.

By providing high-precision face mesh modeling and tracking capabilities, HUAWEI AR Engine delivers a highly realistic mesh model in real time after obtaining face image information. The mesh model changes its location and shape in accordance with the face, for accurate real-time responsiveness.

Also, AR Engine provides a mesh with more than 4,000 vertices and 7,000 triangles to precisely outline face contours, and enhance the overall user experience.

Now I will develop a demo application and try to explain this feature and what it provides in more detail.

The figure below shows the general usage process of the HUAWEI AR Engine SDK. We will begin this process with ARSession, which we will create in the Activity's onResume function.

While developing this application, we will start with the Engine Functionality section shown in the figure below. Then we will develop the render manager class, and after that we will complete this article by writing the activity, i.e. the UI part.

While developing this demo application, we will use the OpenGL library for rendering, as in my previous article. For this, we will create a class called FaceRenderManager that implements OpenGL's GLSurfaceView.Renderer interface. We will create our shaders and programs in the onSurfaceCreated method of this interface. First of all, we will start with the Face Geometry drawing part.

1. Face Geometry

In this section, we will draw the Face Geometry features.

a. Create and Attach Shaders

First of all, we define our vertex shader and fragment shader programs that we will use.
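
Below is a minimal sketch of what these shader sources might look like as Java string constants. The attribute and uniform names (inPosition, inMVPMatrix, inPointSize, inColor) are illustrative placeholders of my own, and the real demo's shaders also handle texture coordinates; this sketch only transforms and colors the geometry.

```java
// Sketch of the face-geometry shaders, kept deliberately simple:
// transform each vertex by the MVP matrix and paint with a uniform color.
private static final String FACE_GEOMETRY_VERTEX =
        "attribute vec4 inPosition;\n"
        + "uniform mat4 inMVPMatrix;\n"
        + "uniform float inPointSize;\n"
        + "void main() {\n"
        + "    gl_Position = inMVPMatrix * vec4(inPosition.xyz, 1.0);\n"
        + "    gl_PointSize = inPointSize;\n"
        + "}";

private static final String FACE_GEOMETRY_FRAGMENT =
        "precision mediump float;\n"
        + "uniform vec4 inColor;\n"
        + "void main() {\n"
        + "    gl_FragColor = inColor;\n"
        + "}";
```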

Well, we have written our shader programs. These fragment and vertex shaders provide the code for the programmable stages of the face rendering pipeline.

Now it is time to create the shader objects. For this, we add the following code. When we call this method with the required shader type and the shader source code as parameters, we first create an empty shader object, then provide the source code, compile it, and get back an integer reference. We will carry out the next steps with this reference.
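
A sketch of such a helper, assuming android.opengl.GLES20 and android.util.Log are imported and a TAG constant exists; the method name loadShader is my own:

```java
private static int loadShader(int shaderType, String source) {
    // Create an empty shader object of the requested type.
    int shader = GLES20.glCreateShader(shaderType);
    // Provide the source code, then compile it.
    GLES20.glShaderSource(shader, source);
    GLES20.glCompileShader(shader);
    // Check the compile status so a broken shader does not fail silently.
    int[] compiled = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
    if (compiled[0] == 0) {
        Log.e(TAG, "Shader compile error: " + GLES20.glGetShaderInfoLog(shader));
        GLES20.glDeleteShader(shader);
        shader = 0;
    }
    // This integer reference is what we use in the following steps.
    return shader;
}
```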

We will now call this method twice to create the vertex shader and the fragment shader for the face geometry, passing it the FACE_GEOMETRY_VERTEX and FACE_GEOMETRY_FRAGMENT source code. Then we attach the compiled shaders to the program object we created, and finally we link the program object so that it can be used.
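
A sketch of the create-attach-link step (mProgram is an assumed int field):

```java
private void createProgram() {
    mProgram = GLES20.glCreateProgram();
    // Compile both shaders with the helper above.
    int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, FACE_GEOMETRY_VERTEX);
    int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, FACE_GEOMETRY_FRAGMENT);
    // Attach the compiled shaders to the program object, then link it for use.
    GLES20.glAttachShader(mProgram, vertexShader);
    GLES20.glAttachShader(mProgram, fragmentShader);
    GLES20.glLinkProgram(mProgram);
}
```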

Now let’s call the method we wrote and initialize the values we will use. We initialized these values because while drawing the frame (from the onDrawFrame method), we will use these values to draw the points.

Now, using the functions we have written, we will set up the OpenGL ES resources for the face geometry, including the shader program. In the next steps, we will call this setup function from the onSurfaceCreated function of the GLSurfaceView.Renderer interface, so that everything is created as soon as the surface is.

b. OpenGL Initialization for Face Geometry

First, we create two buffer objects. These buffers will hold our vertex information and our triangle information; we will then update them to produce the visuals.

We bind the first buffer object, created for the vertex attributes, to the array buffer target. We specify the size of the buffer without putting any data in it for now. With GL_DYNAMIC_DRAW we tell OpenGL that we will update the values in this buffer frequently, so that it can choose suitable storage. Finally, we unbind the buffer so that later calls cannot modify it by accident.
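
A sketch of these two steps (BYTES_PER_FLOAT = 4 and the initial mVerticeBufferSize are assumptions; the size must be large enough for the mesh's roughly 4,000 vertices plus their texture coordinates):

```java
// Generate the two buffer objects: one for vertices, one for triangle indices.
int[] buffers = new int[2];
GLES20.glGenBuffers(2, buffers, 0);
mVerticeId = buffers[0];
mTriangleId = buffers[1];

// Bind the vertex buffer and reserve space without uploading any data yet.
// GL_DYNAMIC_DRAW hints that we will rewrite the contents every frame.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize * BYTES_PER_FLOAT,
        null, GLES20.GL_DYNAMIC_DRAW);
// Unbind so later OpenGL calls cannot touch this buffer by accident.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
```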

Then, by adding the following code to the init function, we bind the second buffer object we created to the triangle index binding point, GL_ELEMENT_ARRAY_BUFFER.
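
The index buffer is set up the same way, only against the element array target (BYTES_PER_INT = 4 and mTriangleBufferSize are assumed):

```java
// Reserve space for the triangle indices; again GL_DYNAMIC_DRAW, again empty.
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize * BYTES_PER_INT,
        null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
```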

After adding them, we create a texture object inside the init function and bind it to the GL_TEXTURE_2D target.

Now that we have bound the texture object, we call the createProgram() function we created in section "a" inside init(), attach the shaders, and set the texture parameters. You can see all of these steps in the final init() below.

The final version of the init() function:
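
Putting the pieces of this section together, a sketch of the full init() could look like this (our simplified shaders do not sample the texture, but the setup follows the flow described above):

```java
void init() {
    // 1. Buffer objects for vertices and triangle indices.
    int[] buffers = new int[2];
    GLES20.glGenBuffers(2, buffers, 0);
    mVerticeId = buffers[0];
    mTriangleId = buffers[1];

    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize * BYTES_PER_FLOAT,
            null, GLES20.GL_DYNAMIC_DRAW);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
    GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize * BYTES_PER_INT,
            null, GLES20.GL_DYNAMIC_DRAW);
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);

    // 2. Texture object bound to GL_TEXTURE_2D.
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    mTextureId = textures[0];
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureId);

    // 3. Shader program (section a) and texture filtering parameters.
    initProgram();
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
}
```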

c. Draw Face Display

In this section, we perform the face drawing operations, that is, we update the face geometry data in the buffers. First, we will get the face geometry. For this, we will use the ARFaceGeometry class offered by HUAWEI AR Engine. We can obtain the face(s) seen by the camera using ARSession's getAllTrackables() function, for example: ARSession.getAllTrackables(ARFace.class).

After obtaining the ARFaceGeometry object, we will use it to get the vertices, texture coordinates, triangle count, and triangle indices of the faces seen by the camera. Then, using these data, we will update the contents of the buffer objects we created earlier, identified by mVerticeId and mTriangleId.

Now let’s create a function called updateFaceGeometryData that takes an object of type ARFaceGeometry as a parameter. And let’s write the codes of the processes mentioned in the next paragraph into this function.

As you know, model, view, and projection matrices are required for 3D rendering on the screen, so we need the model matrix of the face. For this, we will use HUAWEI AR Engine's ARFace class: its getPose() function returns an ARPose object, from which we obtain the model view matrix. We obtain the projection matrix from HUAWEI AR Engine's ARCamera object and multiply the two. In this way, we obtain the matrix needed to update the model view projection (MVP) data. Now let's do this with the following function.
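
A sketch; the 0.1f/100.0f near and far clipping planes are assumptions:

```java
private void updateModelViewProjectionData(ARCamera camera, ARFace face) {
    // Projection matrix from the AR camera.
    float[] projectionMatrix = new float[16];
    camera.getProjectionMatrix(projectionMatrix, 0, 0.1f, 100.0f);

    // Model matrix from the face pose.
    float[] faceModelMatrix = new float[16];
    face.getPose().toMatrix(faceModelMatrix, 0);

    // MVP = projection x model view (android.opengl.Matrix).
    Matrix.multiplyMM(mModelViewProjections, 0, projectionMatrix, 0, faceModelMatrix, 0);
}
```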

As the last step of the face drawing phase, we complete the drawing with the following function, which draws the geometric features of the face using the values we have created and defined up to this stage.
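
A sketch of the draw call. Note that GL_UNSIGNED_INT indices require OpenGL ES 3.0 or the OES_element_index_uint extension; the point size and color values are assumptions:

```java
private void drawFaceGeometry() {
    GLES20.glUseProgram(mProgram);

    // Feed the vertex positions uploaded in updateFaceGeometryData.
    GLES20.glEnableVertexAttribArray(mPositionAttribute);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
    GLES20.glVertexAttribPointer(mPositionAttribute, 3, GLES20.GL_FLOAT, false, 0, 0);

    // Set the uniforms: MVP matrix, point size, and draw color.
    GLES20.glUniformMatrix4fv(mModelViewProjectionUniform, 1, false,
            mModelViewProjections, 0);
    GLES20.glUniform1f(mPointSizeUniform, 5.0f);
    GLES20.glUniform4f(mColorUniform, 1.0f, 1.0f, 1.0f, 1.0f);

    // Draw the mesh triangles from the index buffer.
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, mTrianglesNum * 3,
            GLES20.GL_UNSIGNED_INT, 0);
    GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);

    GLES20.glDisableVertexAttribArray(mPositionAttribute);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}
```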

Note: These drawing functions will be called for each frame.

Let’s define the main function that will call these functions in order to make it easier to call these functions that we will use for the face drawing process from the GLSurfaceView.Renderer interface’s onDrawFrame function. We need to send ARCamera and ARFace type objects to this function from the function that we will call. We can obtain these objects from HUAWEI AR Engine’s ArSession class.

ARSession.getAllTrackables(ARFace.class) returns a Collection<ARFace>.

ARSession.update() returns an ARFrame.
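
A sketch of that entry point; ARFaceGeometry instances come from ARFace.getFaceGeometry() and should be released after use:

```java
public void onDrawFrame(ARCamera camera, ARFace face) {
    // Pull the current mesh from the engine, update the buffers, then draw.
    ARFaceGeometry faceGeometry = face.getFaceGeometry();
    updateFaceGeometryData(faceGeometry);
    updateModelViewProjectionData(camera, face);
    drawFaceGeometry();
    faceGeometry.release();  // free the native geometry data
}
```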

With this function, we have completed the face drawing section. Now we move on to the texture display part.

2. Background Texture Display

In this section, we will draw the texture.

a. Create and Attach Shaders

First of all, we define the vertex shader and fragment shader programs that we will use in the background texture drawing process.
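
A minimal sketch of the background shaders. The key difference from section 1 is the samplerExternalOES: the camera preview arrives as a GL_TEXTURE_EXTERNAL_OES texture. The names (vPosition, vCoord, vTexture) are placeholders of my own:

```java
private static final String BASE_VERTEX =
        "attribute vec4 vPosition;\n"
        + "attribute vec2 vCoord;\n"
        + "varying vec2 textureCoordinate;\n"
        + "void main() {\n"
        + "    gl_Position = vPosition;\n"
        + "    textureCoordinate = vCoord;\n"
        + "}";

private static final String BASE_FRAGMENT =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "varying vec2 textureCoordinate;\n"
        + "uniform samplerExternalOES vTexture;\n"
        + "void main() {\n"
        + "    gl_FragColor = texture2D(vTexture, textureCoordinate);\n"
        + "}";
```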

We load and compile the shader programs we wrote above, just as we did in the first section.

Again, as we did in the first part, we create and attach the shaders. Then, since we will use the program by calling glUseProgram() during drawing, we first link it using glLinkProgram().

b. OpenGL Initialization for Texture Display

After loading the shaders using the functions we created, we add the following function to initialize the values.
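
A sketch, assuming a createGlProgram helper that compiles, attaches, and links the two shaders exactly as shown in section 1:

```java
private void initProgram() {
    mProgram = createGlProgram(BASE_VERTEX, BASE_FRAGMENT);
    // Handles we will need on every frame while drawing the background.
    mPosition = GLES20.glGetAttribLocation(mProgram, "vPosition");
    mCoord = GLES20.glGetAttribLocation(mProgram, "vCoord");
    mTexture = GLES20.glGetUniformLocation(mProgram, "vTexture");
}
```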

Now let’s write the function that we will set the texture parameters. We will initialize the mExternalTextureId value in the init () function, where we will call this function.

Every step up to here has been initialization. So let's create the init() function so that we can call these steps from the onSurfaceCreated() function we override from the android.opengl.GLSurfaceView.Renderer interface.
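
A sketch of the init() that ties these steps together:

```java
public void init() {
    // Generate the external texture id that will receive the camera preview.
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    mExternalTextureId = textures[0];
    initTexture();
    initProgram();
    initBuffers();
}
```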

Let’s initialize the buffers we will use for drawing with the following function. We will use the buffer objects we created in the onDrawFrame() function.

c. Draw Texture Display

Every time the surface changes, we first need to update the viewport and projection data we use in the drawing. For this, we write the following function, which we will call from the onSurfaceChanged method of the GLSurfaceView.Renderer interface, where we detect the change.
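
A sketch; for the background quad it is enough to remember the new size and reset the viewport:

```java
public void onSurfaceChanged(int width, int height) {
    // Called from GLSurfaceView.Renderer.onSurfaceChanged.
    mSurfaceWidth = width;
    mSurfaceHeight = height;
    GLES20.glViewport(0, 0, width, height);
    // If the class keeps a projection matrix, recompute it here
    // from the new aspect ratio as well.
}
```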

Now that we have initialized/updated our projection matrix and all our other data, we can start drawing.

For drawing, we need to react to display changes and obtain the correct texture mapping coordinates. For this, we will make use of HUAWEI AR Engine's ARFrame class. With ARFrame, we first check whether the display geometry has changed; if so, we adjust the texture mapping coordinates so that the background image captured by the camera is displayed correctly. For this, we use the ARFrame.transformDisplayUvCoords function as follows.
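
A sketch of that check-and-remap step:

```java
private void updateTextureCoordinates(ARFrame frame) {
    // Only re-map the UVs when the display geometry (rotation, size) changed;
    // otherwise the previously transformed coordinates remain valid.
    if (frame.hasDisplayGeometryChanged()) {
        mTexBuffer.position(0);
        mTexTransformedBuffer.position(0);
        frame.transformDisplayUvCoords(mTexBuffer, mTexTransformedBuffer);
    }
}
```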

Now, adding this code, let's complete the background drawing by writing our onDrawFrame() function.
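
A sketch of the background draw; depth writing is disabled so the camera image always stays behind the face mesh:

```java
public void onDrawFrame(ARFrame frame) {
    updateTextureCoordinates(frame);

    // Draw the camera image as a full-screen quad behind everything else.
    GLES20.glDisable(GLES20.GL_DEPTH_TEST);
    GLES20.glDepthMask(false);

    GLES20.glUseProgram(mProgram);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mExternalTextureId);
    GLES20.glUniform1i(mTexture, 0);

    mVerBuffer.position(0);
    GLES20.glEnableVertexAttribArray(mPosition);
    GLES20.glVertexAttribPointer(mPosition, 2, GLES20.GL_FLOAT, false, 0, mVerBuffer);

    mTexTransformedBuffer.position(0);
    GLES20.glEnableVertexAttribArray(mCoord);
    GLES20.glVertexAttribPointer(mCoord, 2, GLES20.GL_FLOAT, false, 0,
            mTexTransformedBuffer);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);

    GLES20.glDisableVertexAttribArray(mPosition);
    GLES20.glDisableVertexAttribArray(mCoord);
    GLES20.glDepthMask(true);
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
}
```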

OK, now we have completed the background drawing. The final version of the TextureDisplay class, for which we wrote all of this code, is as follows.
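
As a compact outline (the method bodies are the snippets shown above):

```java
public class TextureDisplay {
    private int mProgram;
    private int mPosition;
    private int mCoord;
    private int mTexture;
    private int mExternalTextureId = -1;
    private int mSurfaceWidth;
    private int mSurfaceHeight;
    private FloatBuffer mVerBuffer;
    private FloatBuffer mTexBuffer;
    private FloatBuffer mTexTransformedBuffer;

    // The render manager hands this id to ARSession.setCameraTextureName().
    public int getExternalTextureId() {
        return mExternalTextureId;
    }

    public void init() { /* section b */ }

    public void onSurfaceChanged(int width, int height) { /* section c */ }

    public void onDrawFrame(ARFrame frame) { /* section c */ }
}
```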

3. Facial Data Rendering Management

Every step we have taken so far has served the face drawing process: updating data, creating buffers, and setting OpenGL parameters. We have now laid the groundwork. In this section, we will manage the rendering process by feeding the classes and functions we have created with the data we obtain from AR Engine.

For this, we will create a class that implements the GLSurfaceView.Renderer interface. In this class, we will override the interface's three functions (onSurfaceCreated, onSurfaceChanged, onDrawFrame), and we will also write setter functions to set the values inside the class from the activity.

Let’s start with the onSurfaceCreated function first. This function is the first function called when surface is created. So here we will call the init() functions of these classes to initialize the TextureDisplay and FaceGeometryDisplay classes we wrote before.

Note: We will initialize variables that are members of this class from the activity in the next steps.

Now we need to create a class that allows the demo to adapt to device rotations. This class will be used as the device rotation manager, and it should implement Android's DisplayManager.DisplayListener interface. You can find an explanation of each method in the code.
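
A sketch of such a manager, written to match the method names used later in this article (updateViewportRotation, getDeviceRotation, updateArSessionDisplayGeometry):

```java
public class DisplayRotationManager implements DisplayManager.DisplayListener {
    private final Context mContext;
    private boolean mIsDeviceRotated;
    private int mViewWidth;
    private int mViewHeight;

    public DisplayRotationManager(Context context) {
        mContext = context;
    }

    // Register/unregister with the system DisplayManager,
    // typically from the activity's onResume/onPause.
    public void registerDisplayListener() {
        DisplayManager displayManager =
                (DisplayManager) mContext.getSystemService(Context.DISPLAY_SERVICE);
        if (displayManager != null) {
            displayManager.registerDisplayListener(this, null);
        }
    }

    public void unregisterDisplayListener() {
        DisplayManager displayManager =
                (DisplayManager) mContext.getSystemService(Context.DISPLAY_SERVICE);
        if (displayManager != null) {
            displayManager.unregisterDisplayListener(this);
        }
    }

    // Remember the viewfinder size whenever the surface changes.
    public void updateViewportRotation(int width, int height) {
        mViewWidth = width;
        mViewHeight = height;
        mIsDeviceRotated = true;
    }

    // Queried by the renderer on every frame.
    public boolean getDeviceRotation() {
        return mIsDeviceRotated;
    }

    // Push the current rotation and viewfinder size into the AR session so
    // the engine returns geometry in the correct orientation.
    public void updateArSessionDisplayGeometry(ARSession session) {
        WindowManager windowManager =
                (WindowManager) mContext.getSystemService(Context.WINDOW_SERVICE);
        int displayRotation = windowManager.getDefaultDisplay().getRotation();
        session.setDisplayGeometry(displayRotation, mViewWidth, mViewHeight);
        mIsDeviceRotated = false;
    }

    @Override
    public void onDisplayAdded(int displayId) {
    }

    @Override
    public void onDisplayRemoved(int displayId) {
    }

    @Override
    public void onDisplayChanged(int displayId) {
        // The display rotated; re-sync with the session on the next frame.
        mIsDeviceRotated = true;
    }
}
```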

When a device is rotated, the viewfinder size and the rotation status must be updated so that the geometric information returned by the AR Engine is displayed correctly. Now that we have written our rotation listener class, DisplayRotationManager, we can make the updates in our onSurfaceChanged function.
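
A sketch of the renderer's onSurfaceChanged:

```java
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    mTextureDisplay.onSurfaceChanged(width, height);
    GLES20.glViewport(0, 0, width, height);
    mDisplayRotationManager.updateViewportRotation(width, height);
}
```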

With this function, we update the texture, the viewport, and the viewfinder size and device rotation status.

After updating the data, the onDrawFrame function will be called.

First, we need to clear the screen to notify the driver that it should not load the pixels of the previous frame.

Also, we should check whether the ARSession is null; if it is, we cannot continue with the drawing process.

If the ARSession isn't null, we have to check whether the current device has been rotated. If it has, we should update the display geometry of the current ARSession using the updateArSessionDisplayGeometry method of our DisplayRotationManager class.

We set the OpenGL texture ID that is used to store the camera preview stream. After setting the texture ID, we need to call mSession.update() so that HUAWEI AR Engine updates the camera preview into that texture; the same mSession.update() call also returns the new frame as an ARFrame.

After updating the frame, we update the texture immediately. For this, we pass the ARFrame object we obtained to the onDrawFrame function of the TextureDisplay class that we wrote earlier.

Now it is time to capture the faces on the screen. Since this is a Face Mesh application, as I will discuss later, we will inform the application during the configuration phase, when creating a session with ARSession, that the object to be detected is an ARFace. That is why we now request ARFace objects from the ARSession.getAllTrackables() function.

Then, we get the ARCamera object from HUAWEI AR Engine’s ARFrame object with the function frame.getCamera() to obtain the projection matrix. Then, for the drawing process, we send these ARCamera and ARFace objects to the onDrawFrame() function of the FaceGeometryDisplay class we created earlier.

The final version of the onDrawFrame function we override is as follows.
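
A sketch of the complete override, combining the steps above; the try/catch keeps any engine or rendering exception from killing the GL thread:

```java
@Override
public void onDrawFrame(GL10 gl) {
    // Drop the previous frame's pixels.
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    if (mSession == null) {
        return;
    }
    // Re-sync the session's display geometry after a rotation.
    if (mDisplayRotationManager.getDeviceRotation()) {
        mDisplayRotationManager.updateArSessionDisplayGeometry(mSession);
    }
    try {
        // Route the camera preview into our external texture, then let the
        // engine advance to the next frame.
        mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
        ARFrame frame = mSession.update();
        mTextureDisplay.onDrawFrame(frame);

        // Draw the mesh of every face that is currently being tracked.
        ARCamera camera = frame.getCamera();
        Collection<ARFace> faces = mSession.getAllTrackables(ARFace.class);
        for (ARFace face : faces) {
            if (face.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
                mFaceGeometryDisplay.onDrawFrame(camera, face);
            }
        }
    } catch (Throwable t) {
        Log.e(TAG, "Exception on the OpenGL thread", t);
    }
}
```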

Let’s define the setter functions as the last operation of this class. And the final version of the FaceRenderManager class will be as follows.

4. Activity

In this section, we will include the render process into activity lifecycle using the FaceRenderManager class we have created.

First, we’ll create a CameraHelper class. This class will provide services related to the camera device, including starting, stopping the camera thread, and also opening and closing the camera.

Now let’s add android.opengl.GLSurfaceView view to layout of the FaceActivity.

And now the onCreate method of our activity. You can see that we just create helper classes such as DisplayRotationManager and make some OpenGL configurations. At the end of the onCreate method, we should check whether the HUAWEI AR Engine server (com.huawei.arengine.service) is installed on the current device. I will add the entire Activity class later, where you can see the content of the arEngineAbilityCheck method.
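
A sketch of onCreate under these assumptions: the layout/view ids are mine, and the FaceRenderManager(context, activity) constructor shape is an assumption:

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.face_activity_main);        // assumed layout name
    glSurfaceView = findViewById(R.id.faceSurfaceView); // assumed view id

    mDisplayRotationManager = new DisplayRotationManager(this);
    mFaceRenderManager = new FaceRenderManager(this, this);
    mFaceRenderManager.setDisplayRotationManager(mDisplayRotationManager);

    // Standard GLSurfaceView setup: keep the GL context across pauses,
    // request OpenGL ES 2.0, and render continuously.
    glSurfaceView.setPreserveEGLContextOnPause(true);
    glSurfaceView.setEGLContextClientVersion(2);
    glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
    glSurfaceView.setRenderer(mFaceRenderManager);
    glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
```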

I create the AR Session in the onResume method. This means that the AR Engine's processing starts from the onResume method.

In the activity’s onResume() function, we first register our DisplayListener.

After registering the DisplayListener, we create ARSession. By creating an ARSession, we launch the HUAWEI AR Engine here.

Since this is a face tracking application, we need to set an ARFaceTrackingConfig on the session, as mentioned before. By doing this, we inform HUAWEI AR Engine that the object to detect is a face. For this, let's create the config object first by adding the following code to the onResume() function.
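
For example:

```java
// Tell the engine that we want face tracking in this session.
mArConfig = new ARFaceTrackingConfig(mArSession);
```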

After making any other configurations, we set this configuration object on the ARSession.
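
Which is a single call:

```java
mArSession.configure(mArConfig);
```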

After performing some error checks, we resume the session with ARSession.resume().

Finally, we set the values with the setter functions, and our onResume() function is complete.

Let's take a look at the onResume method.
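
A sketch of the whole method; arEngineAbilityCheck is the server-availability check mentioned in onCreate, and ARCameraNotAvailableException comes from com.huawei.hiar.exceptions:

```java
@Override
protected void onResume() {
    super.onResume();
    mDisplayRotationManager.registerDisplayListener();
    if (mArSession == null) {
        try {
            // Make sure the AR Engine server APK is present
            // before creating the session.
            if (!arEngineAbilityCheck()) {
                finish();
                return;
            }
            mArSession = new ARSession(this);
            mArConfig = new ARFaceTrackingConfig(mArSession);
            mArSession.configure(mArConfig);
        } catch (Exception capturedException) {
            // In a real app, map the specific AR Engine exceptions to
            // user-facing messages here.
            Log.e(TAG, "Failed to create ARSession", capturedException);
            mArSession = null;
            return;
        }
    }
    try {
        mArSession.resume();
    } catch (ARCameraNotAvailableException e) {
        Toast.makeText(this, "Camera open failed, please restart the app",
                Toast.LENGTH_LONG).show();
        mArSession = null;
        return;
    }
    mFaceRenderManager.setArSession(mArSession);
    glSurfaceView.onResume();
}
```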

Overall, we have now added HUAWEI AR Engine to our application. After this stage, you can see the final state of the activity below, so that steps such as stopping the session and the view in functions like onPause are not skipped.
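
A sketch of those lifecycle counterparts:

```java
@Override
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        mDisplayRotationManager.unregisterDisplayListener();
        glSurfaceView.onPause();
        mArSession.pause();  // releases the camera but keeps the session
    }
}

@Override
protected void onDestroy() {
    if (mArSession != null) {
        mArSession.stop();
        mArSession = null;
    }
    super.onDestroy();
}
```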

Let’s test our app now

In this article, we developed a Face Tracking application with HUAWEI AR Engine. As you can see, although the article is long, it is really easy to develop augmented reality applications with HUAWEI AR Engine compared to other augmented reality frameworks. In fact, all we do is update the OpenGL buffers with the data we obtain from AR Engine.

I hope this has been a useful post. I wrote this article while learning as much as I could, so if you noticed any shortcomings or mistakes, or if you have any advice for me, please leave a comment. Thank you for reading.

See you in my next blogs…
