The Power of ARCore

Yuliya Kaleda
5 min read · Apr 29, 2018


The first attempt to bring an AR experience to Android was Tango, a technology that is precise, accurate, and highly performant. Tango is good at identifying large and irregular shapes. But all this power does not come for free: to support Tango, devices require special hardware (e.g. a wide-angle camera, a depth-sensing camera, accurate sensor timestamping). Because of this hardware dependency and slow adoption in the community, Google announced that Tango would be deprecated.

As Tango's successor, Google has released ARCore, which does not rely on extra hardware to provide a performant AR experience on Android. It is built around 3 main principles:

  • motion tracking
  • environmental understanding
  • light estimation

ARCore is very powerful in terms of geometry detection, but unfortunately it does not do view rendering. Thus, to build a fully immersive AR experience, we need to choose a technology that can handle object rendering well, e.g. OpenGL or one of its wrappers.

ARCore for geometry detection and OpenGL for view rendering to build a fully immersive AR experience

ARCore's foundation block is a session, which can be viewed as a global repository that stores information about the AR system state. There are 3 main config options that can be tweaked to tailor the AR experience to specific use cases:

  • plane detection mode
  • light estimation mode
  • update mode

The official documentation does a great job describing all the config options.
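To make this concrete, here is a minimal sketch of creating and configuring a session with these three options. The specific values are just an example, and the error handling is simplified:

try {
    Session session = new Session(this);

    Config config = new Config(session);
    // Detect horizontal planes such as floors and tables
    config.setPlaneFindingMode(Config.PlaneFindingMode.HORIZONTAL);
    // Estimate the average light intensity of the camera image
    config.setLightEstimationMode(Config.LightEstimationMode.AMBIENT_INTENSITY);
    // Make update() return immediately with the latest camera image
    config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);

    session.configure(config);
} catch (UnavailableException e) {
    // ARCore is not installed or the device is not supported
}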

In most cases we do not need all the information collected in the session, just the most recent data about the system state. This is where frames come into play: they capture the AR state and changes within a timespan. To retrieve a frame from a session, we just need to invoke update() on the session instance:

@Override
public void onDrawFrame(GL10 gl) {
    if (session == null) {
        return;
    }

    Frame frame = session.update();
}

From a frame we can get access to the camera, the average light intensity, and a list of HitResults:

private final float[] viewMatrix = new float[16];
private final float[] projectionMatrix = new float[16];

@Override
public void onDrawFrame(GL10 gl) {
    ...
    Frame frame = session.update();
    Camera camera = frame.getCamera();

    // Get projection matrix
    camera.getProjectionMatrix(projectionMatrix, 0, FP_NEAR, FP_FAR);
    // Get camera view matrix
    camera.getViewMatrix(viewMatrix, 0);

    // Compute lighting from the average intensity of the image
    final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

    List<HitResult> hitResults = frame.hitTest(position.x, position.y);
}

A camera instance provides a view matrix and a projection matrix, which are used to render virtual content on top of the camera image. Utilizing the average light intensity, we can lighten or darken our objects to make them look realistic. The list of HitResults exposes a collection of collision points: when a user taps the screen, ARCore casts a ray to detect potential intersections with surfaces. A HitResult can be either a feature point or a plane:

Feature points are on the left, a plane is on the right

From a performance standpoint, planes are more stable and less frequently affected by ARCore tracking adjustments. So it is a good rule of thumb to try to retrieve a plane from the list of HitResults first. If none is found, we should work with the closest feature point. In the snippet below, if a plane is not found, we take the first element of the list, which is guaranteed to be the closest hit because the list is sorted by depth:

@Override
public void onDrawFrame(GL10 gl) {
    ...
    List<HitResult> hitResults = frame.hitTest(position.x, position.y);

    if (hitResults.size() > 0) {
        HitResult hit = getClosestHit(hitResults);
    }
}

private HitResult getClosestHit(List<HitResult> hitResults) {
    // Prefer a plane over a feature point
    for (HitResult hitResult : hitResults) {
        if (hitResult.getTrackable() instanceof Plane) {
            return hitResult;
        }
    }
    // No plane found: fall back to the closest hit (the list is sorted by depth)
    return hitResults.get(0);
}

ARCore constantly adjusts plane and feature point positions, which may lead to coordinate changes and shifting by a certain offset. This is where anchors come to the rescue. An anchor pins the position where a virtual object is to be drawn in relation to a HitResult. By relying on anchors instead of world coordinates for object rendering, we can avoid major tracking issues and drifting effects.

@Override
public void onDrawFrame(GL10 gl) {
    ...
    List<HitResult> hitResults = frame.hitTest(position.x, position.y);

    if (hitResults.size() > 0) {
        HitResult hit = getClosestHit(hitResults);
        Anchor anchor = hit.createAnchor();
    }
}
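To actually render at the anchored position, the anchor's pose can be converted into a model matrix and combined with the view and projection matrices we retrieved earlier. Here is a minimal sketch; drawObject() is a hypothetical rendering helper, not part of ARCore:

private final float[] modelMatrix = new float[16];
private final float[] modelViewMatrix = new float[16];
private final float[] mvpMatrix = new float[16];

private void drawAnchored(Anchor anchor, float lightIntensity) {
    // Only draw while ARCore is actively tracking the anchor
    if (anchor.getTrackingState() != TrackingState.TRACKING) {
        return;
    }

    // Convert the anchor's pose into a 4x4 model matrix
    anchor.getPose().toMatrix(modelMatrix, 0);

    // mvp = projection * view * model
    Matrix.multiplyMM(modelViewMatrix, 0, viewMatrix, 0, modelMatrix, 0);
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, modelViewMatrix, 0);

    // Pass the MVP matrix and the light estimate to the shader
    drawObject(mvpMatrix, lightIntensity); // hypothetical helper
}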

Performance:

Here are some tips that can help you stay on top of your app's performance when building an AR experience:

  • rely on a plane if available, otherwise on the closest feature point
  • don't use world coordinates to position a virtual object; anchors will do the job well
  • try not to create more than 12 anchors
  • avoid vibration, as it might cause tracking errors
  • lock orientation

On rotation, a session needs to be paused and then resumed. In such cases we have no guarantee that all the virtual objects and geometry data tracked before will be restored. ARCore does its best to restore them, but it is still not at the expected level. Locking orientation reduces the risk of users landing in an undesired state where some virtual objects disappear on rotation.
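A simple way to lock orientation in code is shown below; it can equally be declared via android:screenOrientation in the manifest:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Lock to portrait so rotation never pauses/resumes the AR session
    setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
}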

  • set a max distance threshold to move virtual objects within

If virtual objects in your application can be moved around and scaled endlessly, it is good practice to set a max distance value. Otherwise, virtual objects can be shrunk to such an extent that they can no longer be interacted with, or even seen, once users start moving them around.

private static final float OBJ_MAX_DISTANCE = 4f;

private void moveActiveObject(Session session, Frame frame,
                              MotionEvent event) {
    List<HitResult> hitResults = frame.hitTest(event);
    Anchor newAnchor = null;

    if (hitResults.size() > 0) {
        HitResult hit = hitResults.get(0);
        newAnchor = hit.createAnchor();
    }

    if (newAnchor != null) {
        Pose cameraPose = frame.getCamera().getPose();
        Pose anchorPose = newAnchor.getPose();
        float distance = getDistanceBetweenPoses(cameraPose, anchorPose);
        if (distance > OBJ_MAX_DISTANCE) {
            // Too far away: discard the new anchor and keep the old position
            newAnchor.detach();
            return;
        }

        activeArObject.getAnchor().detach();
        activeArObject.setAnchor(newAnchor);
    }
}
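The getDistanceBetweenPoses() helper above is not part of ARCore; a straightforward implementation is the Euclidean distance between the translation components of the two poses:

private float getDistanceBetweenPoses(Pose pose1, Pose pose2) {
    // Distance between the translation components of the two poses
    float dx = pose1.tx() - pose2.tx();
    float dy = pose1.ty() - pose2.ty();
    float dz = pose1.tz() - pose2.tz();
    return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
}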
  • give ARCore and the camera some time to calibrate and collect as much information about the environment as possible before letting users interact with the AR experience

As an option, you can show a transparent overlay informing users that they need to pan the phone left and right to get the most out of the experience. Not doing so may cause drifting effects and objects floating around.
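In code, one way to gate interaction is to check the camera's tracking state on every frame. A sketch, where showScanOverlay() and hideScanOverlay() are hypothetical UI helpers:

Frame frame = session.update();
Camera camera = frame.getCamera();

if (camera.getTrackingState() != TrackingState.TRACKING) {
    // Still calibrating: keep the instructional overlay visible
    showScanOverlay();
    return;
}

// Tracking is stable: safe to process taps and render anchored objects
hideScanOverlay();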

ARCore has some limitations and imperfections at the moment, and it is still under active development. White, textureless surfaces are not detected, vertical planes are not supported, and directional light is not an option. But all of these limitations are being worked on and will hopefully be resolved soon. Overall, the API is developer-friendly and really easy to integrate. ARCore looks promising enough to take a firm stand in the Android AR community.

If you would like to get a full picture of ARCore integration with OpenGL and walk through creating an AR experience step by step, please check out the talk.
