Sharing a Virtual Object and Placing it. ARTris — Part #1

Grace Elina · Published in Artris Code · 7 min read · Jan 29, 2018

This article is part #1 of a series on ARTris, a multiplayer real-time 3D AR Tetris game built with ARKit, Node.js, and Firebase. If you’re new to this series, we recommend you begin with the introduction. Fork the iOS Client and Game Engine repos to start experimenting.

We wanted to create a multiplayer game that allows players to share a virtual object on multiple devices. Right now, ARKit and ARCore do not support this feature. To make this work, we need to know the relative position and orientation of all the devices. A common solution is to use a QR code as a landmark on the ground, but no one carries a QR code around with them!

Before explaining how we worked around this problem, it’s important to know some of the keywords we will be using throughout this article. Read through them if you are not familiar with ARKit or SceneKit terminology.

SceneNode

A structural element of a scene graph, representing a position and transform in a 3D coordinate space, to which you can attach geometry, lights, cameras, or other displayable content.

Anchors

The list of anchors representing positions tracked or objects detected in the scene.

HitTest

A 2D point in the image coordinates can refer to any point along a 3D line that starts at the device camera and extends in a direction determined by the device orientation and camera projection. This method searches along that line, returning all objects that intersect it in order of distance from the camera.

Feature Points

These points represent notable features detected in the camera image. Their positions in 3D world coordinate space are extrapolated as part of the image analysis that ARKit performs in order to accurately track the device’s position, orientation, and movement. Taken together, these points loosely correlate to the contours of real-world objects in view of the camera.

To work around the multiplayer localization challenge, we added an initialization step before the game starts: all the players place a virtual grid at the same spot in the real world on their own devices. This step makes it possible to calculate the position and orientation of the devices relative to each other.

grid initialization step

As part of this solution, we need to place a virtual object on a real-world surface. The rest of this article explores Apple’s solution to object placement in their demo app for ARKit.

We are working in an augmented reality environment. If we want to move an object around, we are not moving it on the screen; we are moving it around in a 3D space which gets projected onto a 2D screen. The screen is just a medium to interact with the 3D space. For example, if we want to move a virtual object to a specific location on the screen, we first need to find the corresponding 3D position that projects back onto the specified 2D point, and move the object there.

worldPosition is the function that encapsulates all this logic. As the players move their devices around to position their virtual object, we update the mapping between the centre of the screen and the corresponding 3D point in space, to make sure the virtual object is always projected back onto the centre of the screen.

We call worldPosition in frameDidUpdate to make sure the mapping is updated with the latest frame.

worldPosition function in frameDidUpdate

To update the grid position gradually with every frame, the function performs the following five steps to find the most accurate position for the virtual object.
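The ordering of these steps can be sketched as a simple fallback cascade. This is a hypothetical outline, not Apple’s actual code: each closure stands in for one of the hit tests described below and returns nil on failure.

```swift
typealias Position = (x: Double, y: Double, z: Double)

// Hypothetical sketch of the five-step fallback order; a real implementation
// would query ARKit instead of taking the hit tests as closures.
func worldPosition(infinitePlane: Bool,
                   planeAnchorHit: () -> Position?,
                   filteredFeatureHit: () -> Position?,
                   infinitePlaneHit: () -> Position?,
                   unfilteredFeatureHit: () -> Position?) -> Position? {
    if let p = planeAnchorHit() { return p }               // step 1: existing plane anchor
    let featureResult = filteredFeatureHit()               // step 2: filtered feature points (held for step 4)
    if infinitePlane || featureResult == nil,
       let p = infinitePlaneHit() { return p }             // step 3: infinite horizontal plane
    if let p = featureResult { return p }                  // step 4: fall back to the step-2 result
    return unfilteredFeatureHit()                          // step 5: unfiltered feature points
}

// With no plane anchor found and infinitePlane left at its default (false),
// the filtered feature-point result wins over the infinite-plane hit.
let position = worldPosition(infinitePlane: false,
                             planeAnchorHit: { nil },
                             filteredFeatureHit: { return (x: 1.0, y: 0.0, z: -2.0) },
                             infinitePlaneHit: { return (x: 9.0, y: 9.0, z: 9.0) },
                             unfilteredFeatureHit: { nil })
```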

Step 1 — plane detection

The first step is to perform a hit test against an existing plane anchor. A plane anchor is the best possible result because it represents a real-world plane that ARKit has already detected, found directly along the hit-test line from the centre of the screen.

step 1

Step 2 — hit test with features

If we don’t find a plane anchor, the next step is to perform a hit test against all the feature points in the scene.

step 2

Instead of searching for anchors directly on the hit test line like in plane detection (step 1), this step extracts all the notable features from the scene and finds the feature point closest to the hit test line.

First, we need to draw a hit test line (ray). This line extends from the camera node to the 2D point, corresponding to the centre of the screen, projected onto the far clipping plane, which is the most distant plane in the camera view.
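As a rough illustration of the ray construction (with made-up numbers; the real code unprojects the screen point through the camera’s projection matrix): subtracting the camera position from the unprojected far-plane point and normalising gives the ray direction.

```swift
// Minimal 3D vector with just the operations this sketch needs.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    var length: Double { (x * x + y * y + z * z).squareRoot() }
    var normalized: Vec3 { Vec3(x: x / length, y: y / length, z: z / length) }
}

let cameraPosition = Vec3(x: 0, y: 1.5, z: 0)           // device held 1.5 m above the origin (assumed)
let unprojectedCentre = Vec3(x: 0, y: 1.5, z: -1000)    // screen centre unprojected onto the far clipping plane
let rayDirection = (unprojectedCentre - cameraPosition).normalized
// rayDirection points straight ahead: (0, 0, -1)
```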

Once we have obtained this ray, we can collect all the feature points from the environment.

collect raw feature points from the environment

Now we are ready to make calculations for each collected feature point.

The three parts are as follows:

Part 1 → Find the shortest distance between the feature point and the ray.

Part 2 → Find the angle between the feature point and the ray.

Part 3 → Find the projection of a line extending from the camera node to the position of the feature point onto the ray.

The last two parts are used to filter out any feature points that are not within a specified distance or angle range. After filtering, we sort the feature points by their distance to the ray. This is where the shortest-distance calculation in part one comes into play.

Part 1

Here we find the shortest (perpendicular) distance between the feature point and the ray by taking the length of the cross product of the camera-to-feature vector and the ray direction.

shortest distance between ray and feature point
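The cross-product trick can be checked with a small example. The Vec3 type and the coordinates here are illustrative, not from the ARTris source: with a unit-length ray direction, the length of the cross product equals the perpendicular distance from the point to the ray’s line.

```swift
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    func cross(_ o: Vec3) -> Vec3 {
        Vec3(x: y * o.z - z * o.y, y: z * o.x - x * o.z, z: x * o.y - y * o.x)
    }
    var length: Double { (x * x + y * y + z * z).squareRoot() }
}

let rayOrigin = Vec3(x: 0, y: 0, z: 0)
let rayDirection = Vec3(x: 0, y: 0, z: -1)        // already unit length
let featurePoint = Vec3(x: 3, y: 4, z: -10)
let originToFeature = featurePoint - rayOrigin
// |originToFeature × rayDirection| is the perpendicular distance to the ray:
// a 3-4-5 triangle in the plane perpendicular to the ray, so the distance is 5.
let distanceToRay = originToFeature.cross(rayDirection).length
```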

Part 2

Next, we find the angle between the ray and a vector extending from the camera node to the feature point. If this angle is too large, we discard the feature point.

angle between originToFeature and ray
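A sketch of the angle filter, using the dot product to recover the angle. The cut-off value is an assumption for illustration, not Apple’s actual threshold.

```swift
import Foundation  // for acos

struct Vec3 {
    var x, y, z: Double
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
    var length: Double { (x * x + y * y + z * z).squareRoot() }
}

let rayDirection = Vec3(x: 0, y: 0, z: -1)       // unit length
let originToFeature = Vec3(x: 0, y: 1, z: -1)    // feature point 45° above the ray
// cos(angle) = (a · b) / (|a| |b|); rayDirection already has length 1.
let cosAngle = originToFeature.dot(rayDirection) / originToFeature.length
let angleDegrees = acos(cosAngle) * 180 / .pi
let maxAngleDegrees = 30.0                       // assumed cut-off, not the real value
let keep = angleDegrees <= maxAngleDegrees       // 45° > 30°, so this point is discarded
```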

Part 3

Last, we again use a vector extending from the camera node to the feature point, but this time we find its projection onto the ray. The length of this projection is a measure of how far the feature point is from the player. The feature point cannot be too close or too far away; if it is, we once again discard it.

projection of originToFeature onto ray
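A sketch of the distance filter: with a unit-length ray direction, the projection length is just the dot product. The valid range shown is an assumption for illustration, not Apple’s actual limits.

```swift
struct Vec3 {
    var x, y, z: Double
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
}

let rayDirection = Vec3(x: 0, y: 0, z: -1)      // unit length
let originToFeature = Vec3(x: 3, y: 4, z: -10)
// Projection of originToFeature onto the unit ray = dot product:
// the feature point's distance along the ray, i.e. roughly its distance from the player.
let distanceAlongRay = originToFeature.dot(rayDirection)
let validRange = 0.05...3.0                     // assumed limits in metres
let keep = validRange.contains(distanceAlongRay) // 10 m is outside the range, so discard
```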

Once we complete all three parts, if any valid feature points remain after the filtering process, we sort them by their distance from the ray (the shortest distance calculated in part one). The closest feature point is then returned.
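The final selection is just a minimum over the surviving points by their ray distance; the positions and distances below are hypothetical.

```swift
// Surviving feature points after filtering, paired with their (already
// computed) perpendicular distance to the ray. All values are made up.
let survivors: [(position: (x: Double, y: Double, z: Double), distanceToRay: Double)] = [
    ((x: 0.1, y: 0.0, z: -1.2), 0.08),
    ((x: 0.0, y: 0.1, z: -0.9), 0.02),
    ((x: 0.2, y: 0.1, z: -1.5), 0.15),
]

// Return the feature point closest to the ray.
let closest = survivors.min(by: { $0.distanceToRay < $1.distanceToRay })?.position
```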

Step 3 — hit test against horizontal plane

The next step is to perform a hit test against a virtual infinite horizontal plane. This step can only be done if the virtual object has been placed before. The virtual horizontal plane is made to align with the virtual object’s y coordinate. We find the intersection of the ray and this horizontal plane and return it.

intersection of ray and infinite horizontal plane
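The ray/plane intersection reduces to solving for the parameter t at which the ray reaches the plane’s height (origin.y + t · direction.y = planeY). The numbers below are illustrative, not from the actual code.

```swift
struct Vec3 { var x, y, z: Double }

// Intersect a ray with the infinite horizontal plane at height planeY
// (the virtual object's current y coordinate).
func intersect(origin: Vec3, direction: Vec3, planeY: Double) -> Vec3? {
    guard direction.y != 0 else { return nil }   // ray parallel to the plane
    let t = (planeY - origin.y) / direction.y
    guard t >= 0 else { return nil }             // intersection is behind the camera
    return Vec3(x: origin.x + t * direction.x,
                y: planeY,
                z: origin.z + t * direction.z)
}

// Camera 1.5 m up, looking 45° downward: the hit lands 1.5 m in front of it.
let hit = intersect(origin: Vec3(x: 0, y: 1.5, z: 0),
                    direction: Vec3(x: 0, y: -1, z: -1),
                    planeY: 0)
```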

In our case, this step is performed only if the hit test with features (step two) does not return a valid position. We use the infinitePlane parameter’s default value, which is false.

At this stage, there is an assumption that the object is probably latched onto some horizontal surface or notable feature anchor in the real world. Shifting its position along a horizontal plane would move it across the plane at a fixed height.

Setting the infinitePlane variable to true would change the dynamics of the worldPosition function: its new purpose would be to place a virtual object on a plane and update its position along this infinite horizontal plane. Even if a valid feature point is detected in the hit test with features (step two), we ignore it. We try to move the object only when we detect a plane (in step one).

step 3

Step 4 — return result of hit test with features

The fourth step is to return the position calculated in the hit test with features (step 2). This step occurs if the previous steps failed to return a point.

step 4

Step 5 — unfiltered hit test with features

If all the steps above fail to return a valid position for the virtual object, the final attempt is an unfiltered hit test: we collect all the feature points from the environment and simply return the one closest to the ray, without any filtering.

step 5

Next

Tomorrow we will continue to explore the challenges we have faced in creating an immersive AR gameplay.
ARTris Part #2: Creating an Immersive AR Experience
