Augmented Reality: Google’s ARCore Sample App Tutorial

P-Bhandari
Coinmonks
Jun 25, 2018


After working with ARCore, there was no way I wasn’t going to write about it. It is, hands down, one of the best augmented reality libraries out there, and I definitely recommend you give it a try.

In this post, I will give an overview of ARCore and explain its functionality using the HelloAR example app provided by Google.

Overview

ARCore is Google’s platform for building augmented reality experiences. It is the Android counterpart to ARKit, Apple’s framework for building augmented reality applications. Through its APIs, ARCore enables a device to sense its environment, understand the world, and interact with information.

The algorithms and APIs exposed by ARCore are the result of Project Tango, which, for a few reasons, was unable to see the light of day.

ARCore uses three key capabilities to integrate virtual content with the real world as seen through your phone’s camera:

  • Motion tracking allows the phone to understand and track its position relative to the world.
  • Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled surfaces such as the ground, a coffee table, or walls.
  • Light estimation allows the phone to estimate the environment’s current lighting conditions.
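
As an illustration of the last capability, here is a minimal sketch of reading the current frame’s light estimate with the ARCore Unity SDK (it assumes a script with using GoogleARCore; the types are the SDK’s Frame and LightEstimate):

// Read the light estimate computed for the current camera frame.
LightEstimate estimate = Frame.LightEstimate;
if (estimate.State == LightEstimateState.Valid)
{
    // Average pixel intensity of the camera image, in [0, 1].
    float intensity = estimate.PixelIntensity;
}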

I hope this gives you a general idea of ARCore. To better understand the platform, let’s dive into the HelloAR sample app. I will be using the Unity framework to explain the app; however, the knowledge extends to all the other supported frameworks.

Tutorial

The HelloAR sample app utilizes a number of classes which are very well documented at this link.

At launch, the app creates an instance of Session. GoogleARCore.Session is the class that handles communication between the app and the ARCore service. A session is started when the app launches, and SessionStatus is used to determine the tracking state of the app.
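
For example, a script can bail out of its Update loop while ARCore is not yet tracking; a minimal sketch, assuming using GoogleARCore;:

// Only proceed when ARCore is actively tracking the device.
if (Session.Status != SessionStatus.Tracking)
{
    return;
}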

Once the session is in the tracking state, it detects planes from the feature points it has found. The planes detected by the session can be accessed as shown below.
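
A minimal sketch of querying the detected planes, assuming the ARCore Unity SDK’s Session.GetTrackables API (plus using GoogleARCore; and using System.Collections.Generic;):

// temp is filled with every plane the session has detected so far.
List<DetectedPlane> temp = new List<DetectedPlane>();
Session.GetTrackables<DetectedPlane>(temp, TrackableQueryFilter.All);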

The variable temp will hold all the detected planes that the session was able to find.

Now, instead of using Physics.Raycast, the ray-casting approach provided by the Unity framework, GoogleARCore uses its own raycast implementation, which can be accessed as follows:

User touch for RayCast
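
Its signature in the ARCore Unity SDK’s Frame class looks roughly like this:

// Declared on the static GoogleARCore.Frame class.
public static bool Raycast(
    float x, float y, TrackableHitFlags filter, out TrackableHit hitResult);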

This function casts a ray into the real-world scene from the screen point specified by the parameters x and y, filters the trackables it may hit using the given flags, and returns the closest hit in the out parameter hitResult.

The above API is used when the user selects a location in the real world by tapping on the screen. A ray is cast in the direction of the user’s touch point, and only when the raycast returns a detected plane in the hit result is the virtual object placed on that plane. The code for this is shown below:

TrackableHit hit;
TrackableHitFlags raycastFilter = TrackableHitFlags.PlaneWithinPolygon |
    TrackableHitFlags.FeaturePointWithSurfaceNormal;

// Cast a ray from the user's touch point into the AR scene.
if (Frame.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
{
    // Check whether the trackable that was hit is a detected plane.
    if (hit.Trackable is DetectedPlane)
    {
        // Add the virtual object here.
    }
}

In order to add the object to the virtual space, you need the position and rotation of the object in the real world. The position can be easily obtained using the point of incidence of the ray onto the detected plane.
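
A minimal sketch of placing the object at the hit point (objectPrefab is a hypothetical prefab field, not part of the sample):

// hit.Pose is the point where the ray met the detected plane.
// objectPrefab is a hypothetical prefab reference set in the Inspector.
GameObject obj = Instantiate(objectPrefab, hit.Pose.position, hit.Pose.rotation);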

To set the rotation, you will need to do some tweaking of your own, which will largely depend on the desired output. However, you have access to the rotation of the camera, the rotation of the incident plane, the normal of the plane, and so on. You can use these rotations as references and rotate the object accordingly.

The rotation is provided as a quaternion rather than Euler angles, though you can convert between the two using the following APIs.
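
In Unity, the conversion in both directions uses the standard UnityEngine Quaternion type:

// Euler angles (in degrees) to quaternion.
Quaternion rotation = Quaternion.Euler(0f, 45f, 0f);

// Quaternion back to Euler angles.
Vector3 euler = rotation.eulerAngles;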

Now, once you have instantiated the virtual object in AR space, you need to make sure that, even when you move, the object stays in the same position relative to the plane on which it was created. It is also important to maintain this relationship as ARCore’s understanding of the space evolves. ARCore tackles this problem effectively with anchors. Anchors create a link between the virtual object and the detected plane, so wherever the plane appears in real-world space, the object always comes attached to it.
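
A minimal sketch of anchoring the placed object, following the pattern used in the HelloAR sample (obj is the object instantiated above):

// Create an anchor at the hit pose; ARCore keeps it updated as its
// understanding of the world improves.
Anchor anchor = hit.Trackable.CreateAnchor(hit.Pose);

// Parent the virtual object to the anchor so it stays fixed to the plane.
obj.transform.parent = anchor.transform;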

These are the major APIs used in the HelloAR sample app provided by ARCore. To better understand each API and class, you can read the detailed documentation here. Also, make sure that you have a phone that supports ARCore before you start developing; you can find the list of supported phones here.

Drawbacks

The only drawback I faced while working with ARCore was the inability to change or tweak the parameters of the algorithms used for feature tracking and plane detection. The algorithms behind the exposed APIs are not documented anywhere, and the developer cannot tune their parameters for his or her own problem. For example, feature point detection, surface detection, and light estimation are functionalities that can be used but cannot be tuned by the developer. However, the APIs themselves are quite powerful and tend to work well in many different scenarios.

In case you have any doubts about my post or ARCore in general, please do comment in the section below. I will try to help as much as I can. Have fun developing :).
