Huawei HiAnimals AR project development (Part III)

Melikeeroglu · Huawei Developers · Sep 30, 2020 · 8 min read

Hi everybody, this is the third and last part of the HiAnimals application series. In the first part, we prepared the AndroidManifest and Gradle files and integrated Huawei kits. In the second part, we created the common classes. In this part, we will build the rendering classes. Let's start.

You can reach the first part from here.

You can reach the second part from here.

1- ObjectDisplay

This class is used to draw a virtual object. Here, deer, tiger, duck, and dog models are used, but you can use any object you want. You can find free animal or object models on the internet, but you need both the .obj and .png files for the model to render properly.

  • setSize — If the surface size changes, update the recorded size synchronously. This method is called when the surface changes.
  • init — Create a shader program to read the data of the virtual object. This method is called when the surface is created. In this method, we prepare the object for rendering using the createOnGlThread method.
createOnGlThread(context, "deer.obj", "Diffuse.png");

The code above shows how we use the createOnGlThread method. We pass the .obj and .png files of the object to be drawn on the screen.

  • onDrawFrame — Draw a virtual object at a specific location on a specified plane. This method is called in WorldRenderManager#onDrawFrame. In this method, we set the light direction and the color property of the object.
  • hitTest — Check whether the virtual object is clicked and return the click result, which is used to determine whether the virtual object is selected.
  • createOnGlThread — Open and read the .obj file of the object to be displayed from the assets folder via an InputStream and ObjReader. Then convert it to a renderable object made of triangles. After that, we use the methods below.
  • getFaceVertexIndices — Returns the vertex indices of the faces of the given ReadableObj as a direct IntBuffer.
  • getVertices — Returns all vertices of the given ReadableObj as a direct FloatBuffer.
  • getTexCoords — Returns all texture coordinates of the given ReadableObj as an array.
  • getNormals — Returns all normals of the given ReadableObj as an array.

Now we have the vertex indices, vertices, texture coordinates, and normals of the object. We load the object data into buffers and bind those buffers. Then we can load the shaders using the loadGLShader method of the ShaderUtil class. Next, we create an empty program object using the glCreateProgram method of GLES20, attach the shaders, and link the program object. After setting the light, position, and color properties of the object, we will be able to see it on the screen.
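The following is a condensed sketch of what createOnGlThread does, assuming the de.javagl Obj library that ObjReader and ObjData come from; the field names (mVertexBufferId, mTextures, mProgram), the shader resource names, and the loadGLShader signature are illustrative assumptions, not the exact demo code.

private void createOnGlThread(Context context, String objName, String textureName) throws IOException {
    // Read the .obj file from the assets folder and convert it into a
    // renderable object made of triangles.
    InputStream objInput = context.getAssets().open(objName);
    Obj obj = ObjUtils.convertToRenderable(ObjReader.read(objInput));
    objInput.close();

    // Extract the data listed above.
    IntBuffer indices = ObjData.getFaceVertexIndices(obj, 3);
    FloatBuffer vertices = ObjData.getVertices(obj);
    FloatBuffer texCoords = ObjData.getTexCoords(obj, 2);
    FloatBuffer normals = ObjData.getNormals(obj);

    // Load the vertex data into a GL array buffer; the indices, texture
    // coordinates, and normals are uploaded the same way.
    int[] buffers = new int[1];
    GLES20.glGenBuffers(1, buffers, 0);
    mVertexBufferId = buffers[0];
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBufferId);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, 4 * vertices.limit(), vertices, GLES20.GL_STATIC_DRAW);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

    // Load the .png texture for the model.
    Bitmap textureBitmap = BitmapFactory.decodeStream(context.getAssets().open(textureName));
    GLES20.glGenTextures(1, mTextures, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextures[0]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textureBitmap, 0);
    textureBitmap.recycle();

    // Create an empty program object, attach the shaders, and link.
    int vertexShader = ShaderUtil.loadGLShader(TAG, context, GLES20.GL_VERTEX_SHADER, R.raw.object_vertex);
    int fragmentShader = ShaderUtil.loadGLShader(TAG, context, GLES20.GL_FRAGMENT_SHADER, R.raw.object_fragment);
    mProgram = GLES20.glCreateProgram();
    GLES20.glAttachShader(mProgram, vertexShader);
    GLES20.glAttachShader(mProgram, fragmentShader);
    GLES20.glLinkProgram(mProgram);
}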

2- WorldRenderManager

This class manages everything related to rendering virtual objects.

  • setArSession — Set the ARSession, which updates and obtains the latest data in onDrawFrame.
  • setDisplayRotationManager — Set the DisplayRotationManager object, which is used in onSurfaceChanged and onDrawFrame.

This class implements the GLSurfaceView.Renderer interface, a generic renderer interface. The renderer is responsible for making the OpenGL calls that render a frame. It has three methods to implement.

  • onSurfaceCreated — Called when the surface is created or recreated. Here we call the init methods of the TextureDisplay and ObjectDisplay classes.

  • onSurfaceChanged — Called when the surface changes size. Here we call the onSurfaceChanged method of TextureDisplay, update the viewport rotation, and set the size of the object.
  • onDrawFrame — Called to draw the current frame. First, check whether the device has been rotated and, if so, update the device window of the current ARSession. Then we create ARFrame and ARCamera instances using the ARSession and call the onDrawFrame method of the TextureDisplay class.

At this point we can display the object. Next, we move the object on the surface using a GestureDetector driven by MotionEvents. For this, we create a GestureEvent class to manage gesture events and then call the handleGestureEvent and drawAllObjects methods, respectively. The demo app also has a capture-photo feature: we create a class-level variable named capturePicture, initialize it to false, and check it in this method. If it is true, we can capture a photo, so we call the savePicture method.
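Putting those pieces together, onDrawFrame can be sketched roughly as below; the field names (mSession, mTextureDisplay, mDisplayRotationManager, and so on) follow the description above but are assumptions, not the exact demo code.

@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    if (mSession == null) {
        return;
    }
    // Re-apply the display geometry if the device was rotated.
    if (mDisplayRotationManager.getDeviceRotation()) {
        mDisplayRotationManager.updateArSessionDisplayGeometry(mSession);
    }
    try {
        mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
        ARFrame arFrame = mSession.update();   // obtain the latest frame
        ARCamera arCamera = arFrame.getCamera();
        mTextureDisplay.onDrawFrame(arFrame);  // draw the camera preview

        // Matrices used for both the gesture hit tests and object drawing.
        float[] projectionMatrix = new float[16];
        arCamera.getProjectionMatrix(projectionMatrix, 0, 0.1f, 100.0f);
        float[] viewMatrix = new float[16];
        arCamera.getViewMatrix(viewMatrix, 0);

        // Consume one queued gesture event, then draw all virtual objects.
        handleGestureEvent(arFrame, arCamera, projectionMatrix, viewMatrix);
        drawAllObjects(projectionMatrix, viewMatrix);

        // If the shutter ImageView was tapped, save the current GL frame.
        if (capturePicture) {
            capturePicture = false;
            savePicture();
        }
    } catch (Exception e) {
        Log.e(TAG, "Exception on the OpenGL thread", e);
    }
}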

  • handleGestureEvent — This method calls the related handler according to the event type.

If the event type is DOWN, it determines whether any object is selected according to the point the user clicked.

If the event type is SINGLETAPUP, it creates a new virtual object at the point the user tapped.

If the event type is SCROLL, it moves the selected object to a new point according to the result of a hit test, as sketched below.
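A sketch of that dispatch, where the queue (mQueuedEvents), the getters, and the helper methods other than hitTest4Result are naming assumptions:

private void handleGestureEvent(ARFrame arFrame, ARCamera arCamera,
        float[] projectionMatrix, float[] viewMatrix) {
    // The queue is filled by the activity's GestureDetector callbacks.
    GestureEvent event = mQueuedEvents.poll();
    if (event == null) {
        return;
    }
    switch (event.getType()) {
        case GestureEvent.GESTURE_EVENT_TYPE_DOWN:
            // Hit-test the existing objects and mark the touched one as selected.
            selectObjectAt(viewMatrix, projectionMatrix, event.getEventFirst());
            break;
        case GestureEvent.GESTURE_EVENT_TYPE_SINGLETAPUP: {
            // Place a new virtual object anchored at the tapped plane point.
            ARHitResult hit = hitTest4Result(arFrame, arCamera, event.getEventFirst());
            if (hit != null) {
                placeNewObject(hit.createAnchor());
            }
            break;
        }
        case GestureEvent.GESTURE_EVENT_TYPE_SCROLL: {
            // Move the selected object to the new hit-test position.
            ARHitResult hit = hitTest4Result(arFrame, arCamera, event.getEventSecond());
            if (hit != null && mSelectedObj != null) {
                mSelectedObj.setAnchor(hit.createAnchor());
            }
            break;
        }
        default:
            break;
    }
}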

  • drawAllObjects — This method calls the onDrawFrame method of the ObjectDisplay class to draw each virtual object.
  • takePhoto — Simply sets capturePicture to true.
  • savePicture — Reads the pixels of the current GL frame, creates a file under Pictures/HiAnimals, converts the pixel data from the RGBA that OpenGL produces to the ARGB that Android expects, creates a bitmap, and finally writes it to disk (see the sketch after this list).
  • calculateDistanceToPlane — Calculates the distance between a point in space and a plane. It is used to compute the distance between the camera and a specified plane.
  • hitTest4Result — Determines whether the hit point is within the plane polygon, then selects a point on the plane and returns it.
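Here is a sketch of savePicture along those lines; the surface-size fields and the file-naming scheme are assumptions.

private void savePicture() throws IOException {
    // Read the pixels of the current GL frame (RGBA, bottom-left origin).
    int[] pixelData = new int[mSurfaceWidth * mSurfaceHeight];
    IntBuffer buf = IntBuffer.wrap(pixelData);
    buf.position(0);
    GLES20.glReadPixels(0, 0, mSurfaceWidth, mSurfaceHeight,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

    // Convert RGBA to the ARGB Android wants and flip the rows, since
    // Android bitmaps use a top-left origin.
    int[] bitmapData = new int[pixelData.length];
    for (int i = 0; i < mSurfaceHeight; i++) {
        for (int j = 0; j < mSurfaceWidth; j++) {
            int p = pixelData[i * mSurfaceWidth + j];
            int b = (p & 0x00ff0000) >> 16;
            int r = (p & 0x000000ff) << 16;
            int ga = p & 0xff00ff00;
            bitmapData[(mSurfaceHeight - i - 1) * mSurfaceWidth + j] = ga | r | b;
        }
    }
    Bitmap bmp = Bitmap.createBitmap(bitmapData, mSurfaceWidth, mSurfaceHeight,
            Bitmap.Config.ARGB_8888);

    // Write the bitmap under Pictures/HiAnimals.
    File out = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES), "HiAnimals/" + System.currentTimeMillis() + ".png");
    out.getParentFile().mkdirs();
    try (FileOutputStream fos = new FileOutputStream(out)) {
        bmp.compress(Bitmap.CompressFormat.PNG, 100, fos);
    }
}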

3- VirtualObject

This class provides the attributes of the virtual object and the methods needed for virtual object rendering.

  • setIsSelected — Set the selection status of the current object by passing true or false, where true indicates that the object is selected and false indicates it is not.
  • getModelAnchorMatrix — Obtain the anchor matrix data of the current virtual object.
  • setColor — Set the color of the current virtual object.
  • getColor — Return the color of the virtual object as an array of length 4.
  • getAnchor — Obtain the anchor information of a virtual object corresponding to the class.
  • setAnchor — Update the anchor information in the virtual object corresponding to the class.
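A minimal sketch of the class under those descriptions, with assumed field names:

public class VirtualObject {
    private ARAnchor mArAnchor;
    private float[] mObjectColors = new float[4]; // RGBA, length 4
    private boolean mIsSelected = false;
    private final float[] mModelMatrix = new float[16];

    public float[] getModelAnchorMatrix() {
        // Convert the anchor's pose into a 4x4 model matrix for rendering.
        if (mArAnchor != null) {
            mArAnchor.getPose().toMatrix(mModelMatrix, 0);
        }
        return mModelMatrix;
    }

    public void setIsSelected(boolean isSelected) { mIsSelected = isSelected; }

    public void setAnchor(ARAnchor anchor) {
        if (mArAnchor != null) {
            mArAnchor.detach(); // release the old anchor before replacing it
        }
        mArAnchor = anchor;
    }

    public ARAnchor getAnchor() { return mArAnchor; }

    public void setColor(float[] color) { mObjectColors = color.clone(); }

    public float[] getColor() { return mObjectColors.clone(); }
}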

4- GestureEvent

This is the gesture event management class for storing and creating gestures. We define four constants for the different gesture types.

Define the constant 0, indicating an unknown gesture type.

public static final int GESTURE_EVENT_TYPE_UNKNOW = 0;

Define the constant 1, indicating that the gesture type is DOWN.

public static final int GESTURE_EVENT_TYPE_DOWN = 1;

Define the constant 2, indicating that the gesture type is SINGLETAPUP.

public static final int GESTURE_EVENT_TYPE_SINGLETAPUP = 2;

Define the constant 3, indicating that the gesture type is SCROLL.

public static final int GESTURE_EVENT_TYPE_SCROLL = 3;

  • createDownEvent — Creates a gesture event of type DOWN.
  • createSingleTapUpEvent — Creates a gesture event of type SINGLETAPUP.
  • createScrollEvent — Creates a gesture event of type SCROLL.

This completes the GestureEvent class: we created four different event types and their factory methods.
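A sketch of those factory methods, assuming the fields shown (getters omitted for brevity):

public static GestureEvent createDownEvent(MotionEvent motionEvent) {
    GestureEvent ret = new GestureEvent();
    ret.type = GESTURE_EVENT_TYPE_DOWN;
    ret.eventFirst = motionEvent;
    return ret;
}

public static GestureEvent createSingleTapUpEvent(MotionEvent motionEvent) {
    GestureEvent ret = new GestureEvent();
    ret.type = GESTURE_EVENT_TYPE_SINGLETAPUP;
    ret.eventFirst = motionEvent;
    return ret;
}

public static GestureEvent createScrollEvent(MotionEvent e1, MotionEvent e2,
        float distanceX, float distanceY) {
    GestureEvent ret = new GestureEvent();
    ret.type = GESTURE_EVENT_TYPE_SCROLL;
    ret.eventFirst = e1;   // where the scroll started
    ret.eventSecond = e2;  // the current pointer position
    ret.distanceX = distanceX;
    ret.distanceY = distanceY;
    return ret;
}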

We have now completed almost all of the AR work, so we can start building our activities. The first activity, AnimalListActivity, displays the different animals and information about them.

5- AnimalListActivity

Here we list our animals and information about them. We also need the camera permission to display an animal, so we request it from the user. Once permission is granted, the user can select an animal, and we pass the selection to AnimalActivity.
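A minimal sketch of the permission check and the hand-off, assuming AndroidX; the request code and the "animal" extra key are arbitrary illustrative choices:

private static final int REQUEST_CODE_CAMERA = 1;

private void checkCameraPermission() {
    // Ask for the camera permission at runtime if it was not granted yet.
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, REQUEST_CODE_CAMERA);
    }
}

private void openAnimal(String animalName) {
    // Pass the selected animal to AnimalActivity.
    Intent intent = new Intent(this, AnimalActivity.class);
    intent.putExtra("animal", animalName);
    startActivity(intent);
}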

6- AnimalActivity

In the layout file, we use a GLSurfaceView inside a RelativeLayout. The user should move the phone slowly while a plane is being detected, so we need a TextView to inform the user. Finally, we use two ImageViews, one to take a picture and one to get the last picture. After adding all these elements to the layout file, we can bind them in the activity.

In onCreate, we create an instance of DisplayRotationManager to update the screen according to the device rotation, and we call initGestureDetector to detect gesture events. Then we read the intent passed by AnimalListActivity and create a WorldRenderManager instance using it. We set this WorldRenderManager instance as the renderer of the surface view. We also set a click listener on the ImageView to complete the take-photo task.
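A sketch of that onCreate wiring; the view IDs and the WorldRenderManager constructor signature are assumptions:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_animal);

    mSurfaceView = findViewById(R.id.surfaceView);
    mDisplayRotationManager = new DisplayRotationManager(this);
    initGestureDetector();

    // GLSurfaceView setup: OpenGL ES 2.0, continuous rendering.
    mSurfaceView.setPreserveEGLContextOnPause(true);
    mSurfaceView.setEGLContextClientVersion(2);
    mSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);

    // The renderer receives the intent carrying the animal chosen
    // in AnimalListActivity.
    mWorldRenderManager = new WorldRenderManager(this, getIntent());
    mWorldRenderManager.setDisplayRotationManager(mDisplayRotationManager);
    mSurfaceView.setRenderer(mWorldRenderManager);
    mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);

    // Shutter button: flip the capture flag checked in onDrawFrame.
    findViewById(R.id.takePhotoImageView).setOnClickListener(
            v -> mWorldRenderManager.takePhoto());
}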

  • onGestureEvent — Adds an incoming event to the queue.
  • initGestureDetector — Creates a new GestureDetector and implements its methods. In these methods, we call onGestureEvent to add each incoming event to the queue.
  • onResume — Here we check whether the AR Engine APK is ready to use. If it is, we create an ARSession and set it on the WorldRenderManager instance. We do this in a try-catch block; if any problem occurs, we stop the ARSession using its stop function. If everything is fine, we call the resume function of the ARSession, register the display listener to track device rotation, and call the onResume function of the surface view (see the sketch after this list).
  • onPause — We unregister the display listener and pause the surfaceView and the ARSession.
  • onDestroy — We stop the ARSession and release it.
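The onResume flow can be sketched as follows; the exact configuration and error handling are simplified assumptions:

@Override
protected void onResume() {
    super.onResume();
    // Make sure HUAWEI AR Engine is installed before creating a session.
    if (!AREnginesApk.isAREngineApkReady(this)) {
        Toast.makeText(this, "Please install HUAWEI AR Engine.", Toast.LENGTH_LONG).show();
        finish();
        return;
    }
    try {
        if (mArSession == null) {
            mArSession = new ARSession(this);
            ARWorldTrackingConfig config = new ARWorldTrackingConfig(mArSession);
            mArSession.configure(config);
            mWorldRenderManager.setArSession(mArSession);
        }
        mArSession.resume();
    } catch (Exception e) {
        // On any problem, stop and drop the session as described above.
        Log.e(TAG, "Failed to resume the ARSession", e);
        if (mArSession != null) {
            mArSession.stop();
            mArSession = null;
        }
        return;
    }
    mDisplayRotationManager.registerDisplayListener();
    mSurfaceView.onResume();
}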

7- Cloud DB

We need to apply for Cloud DB access. After we are granted authorization, we can start building our database.

1. Add DB Zone

Create a cloud-side Cloud DB zone on the AppGallery Connect console.

2. Add Object Type

In the demo project, two different object types, User and Photo, are used.

After adding the object types, we should add the same models to the project. We can easily export the models from the console and put them into the project.

Then we should add the Cloud DB library to the build.gradle file in the project's app directory and set the Java source-code compatibility to 1.8. Now create a class named CloudDBZoneWrapper for all database operations; all the upsert and query operations live in this class, and we call its methods from the activities and fragments that need them.

First, the Cloud DB objects to be used in this class should be created. Then, create an instance of the AGConnectCloudDB object in the constructor. An initAGConnectCloudDB method must also be created and called on the app landing page; it must run before any DB operation starts. Next, code the create-object-type and open/close-DB-zone operations. These methods are used before the upsert and query operations to open the DB zone and create the object types. A few callbacks have to be added to deliver the results of these actions to the activities and fragments where they are triggered. In this way, all DB operations are gathered in a single class, without overloading the activities with code. Before an upsert, check whether the DB zone has been created; if the DB zone has an error, do not upsert the data. Then upsert with a CloudDBZoneTask object.
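A condensed sketch of CloudDBZoneWrapper along those lines; the zone name, the sync and access properties, and the logging are illustrative assumptions:

public class CloudDBZoneWrapper {
    private static final String TAG = "CloudDBZoneWrapper";
    private final AGConnectCloudDB mCloudDB;
    private CloudDBZone mCloudDBZone;

    public CloudDBZoneWrapper() {
        mCloudDB = AGConnectCloudDB.getInstance();
    }

    // Must run once on the app landing page, before any other DB call.
    public static void initAGConnectCloudDB(Context context) {
        AGConnectCloudDB.initialize(context);
    }

    public void createObjectType() throws AGConnectCloudDBException {
        // ObjectTypeInfoHelper comes from the models exported from the console.
        mCloudDB.createObjectType(ObjectTypeInfoHelper.getObjectTypeInfo());
    }

    public void openCloudDBZone() throws AGConnectCloudDBException {
        CloudDBZoneConfig config = new CloudDBZoneConfig("HiAnimals",
                CloudDBZoneConfig.CloudDBZoneSyncProperty.CLOUDDBZONE_CLOUD_CACHE,
                CloudDBZoneConfig.CloudDBZoneAccessProperty.CLOUDDBZONE_PUBLIC);
        config.setPersistenceEnabled(true);
        mCloudDBZone = mCloudDB.openCloudDBZone(config, true);
    }

    public void insertUser(User user) {
        if (mCloudDBZone == null) {
            return; // The DB zone is not open; skip the upsert as described above.
        }
        CloudDBZoneTask<Integer> task = mCloudDBZone.executeUpsert(user);
        task.addOnSuccessListener(count -> Log.i(TAG, "Upserted " + count + " user(s)"))
            .addOnFailureListener(e -> Log.e(TAG, "Upsert failed", e));
    }
}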

As shown below, in LoginActivity we insert the user into Cloud DB after the user's successful authorization.

User u = new User(signInResult.getUser().getUid(), accessToken, signInResult.getUser().getDisplayName());
mCloudDBZoneWrapper.insertUser(u);

The photos captured by the user are also stored in Cloud DB, so we do something similar in AnimalActivity after a photo is captured.
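A hedged sketch of the insertPhoto counterpart, following the same upsert pattern as insertUser above:

public void insertPhoto(Photo photo) {
    if (mCloudDBZone == null) {
        Log.w(TAG, "Cloud DB zone is not open, skipping photo upsert");
        return;
    }
    CloudDBZoneTask<Integer> task = mCloudDBZone.executeUpsert(photo);
    task.addOnSuccessListener(count -> Log.i(TAG, "Photo inserted: " + count))
        .addOnFailureListener(e -> Log.e(TAG, "Photo insert failed", e));
}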

This code lets us add photos to Cloud DB. As you know, we perform all database operations in CloudDBZoneWrapper, so the method above goes into that class. Then we call it in the savePicture method as shown:

Photo p = new Photo(pref.getString("token", null), byteArray, date);
mCloudDBZoneWrapper.insertPhoto(p);

Thus, we insert into two different object types. Now we will get the last captured photo from Cloud DB. To do that, we add a getAllPhotos method to CloudDBZoneWrapper.
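A sketch of getAllPhotos under that description; the "date" field name and the callback interface carrying onAddOrQueryPhoto are assumptions:

public void getAllPhotos(UiCallBack callBack) {
    if (mCloudDBZone == null) {
        return;
    }
    // Query the most recently inserted photo from the cloud.
    CloudDBZoneQuery<Photo> query = CloudDBZoneQuery.where(Photo.class)
            .orderByDesc("date").limit(1);
    CloudDBZoneTask<CloudDBZoneSnapshot<Photo>> task = mCloudDBZone.executeQuery(
            query, CloudDBZoneQuery.CloudDBZoneQueryPolicy.POLICY_QUERY_FROM_CLOUD_ONLY);
    task.addOnSuccessListener(snapshot -> {
        List<Photo> photos = new ArrayList<>();
        CloudDBZoneObjectList<Photo> cursor = snapshot.getSnapshotObjects();
        try {
            while (cursor.hasNext()) {
                photos.add(cursor.next());
            }
        } catch (AGConnectCloudDBException e) {
            Log.e(TAG, "Failed to read the snapshot", e);
        } finally {
            snapshot.release();
        }
        callBack.onAddOrQueryPhoto(photos); // hand the results back to the activity
    }).addOnFailureListener(e -> Log.e(TAG, "Query failed", e));
}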

This method retrieves the last inserted photo from Cloud DB. We call it when the user clicks the get-picture ImageView in AnimalActivity:

mCloudDBZoneWrapper.getAllPhotos(this);

Then the onAddOrQueryPhoto method is called when the photo is retrieved.

Now we have the photo as a byte array. We convert it to a bitmap, store it in a temporary file, and pass the file path via an intent to the next activity, PhotoActivity.
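That hand-off can look roughly like this; photoBytes and the "path" extra key are assumptions:

Bitmap bitmap = BitmapFactory.decodeByteArray(photoBytes, 0, photoBytes.length);
try {
    // Store the bitmap in a temporary file in the app cache.
    File tempFile = File.createTempFile("photo", ".png", getCacheDir());
    try (FileOutputStream fos = new FileOutputStream(tempFile)) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    }
    // Pass the file path to PhotoActivity.
    Intent intent = new Intent(this, PhotoActivity.class);
    intent.putExtra("path", tempFile.getAbsolutePath());
    startActivity(intent);
} catch (IOException e) {
    Log.e(TAG, "Failed to write the temporary photo file", e);
}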

8- PhotoActivity

Here we show the last captured photo to the user. First we get the path of the photo sent by AnimalActivity, then we decode it to a bitmap and set it on the ImageView.
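A minimal sketch of the activity; the layout and view IDs are assumptions:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_photo);

    // Decode the file written by AnimalActivity and show it.
    String path = getIntent().getStringExtra("path");
    Bitmap bitmap = BitmapFactory.decodeFile(path);
    ImageView photoView = findViewById(R.id.photoImageView);
    photoView.setImageBitmap(bitmap);
}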

You can reach the source code from here.

You can download the app from here:

https://appgallery.huawei.com/#/app/C102935435
