Sharing Augmented Reality Experiences with Google Cloud Anchors in Unity3D

If you watched Apple’s WWDC 2018 and Google’s I/O ’18, you may have gotten really giddy like me seeing the seamless augmented reality (AR) demonstrations between multiple users.

Imagine the new wave of apps and games you could create:

  • Chemistry lab simulations with live input from students.
  • Live house decorating and planning with friends and family.
  • Teamwork-based games with RPG elements. For example, one player could have a shield and defend his or her fellow teammates from projectiles.

For anybody who’s worked with augmented reality so far, this is a big first step into creating shared, multi-user AR experiences.

Previously, AR on smartphones was limited to the user who started the AR session. That’s because each time a user opens the camera to scan the physical environment, the AR session plants an anchor that sticks to distinct visual features found only in the frames captured by that user’s device. Another user could be looking at the exact same physical space, but from a different perspective, and without any way to share that information, it wasn’t possible for two phones to know that they were looking at the same thing.

What’s New In AR (Google I/O ‘18)

Sharing Anchors

This year at WWDC 2018 and I/O ’18, Apple and Google, respectively, presented a solution—sharing anchors over the network to other users.

Whereas Apple’s method — ARKit 2.0 — is reserved for iOS only, Google’s method — Cloud Anchors — is platform-agnostic. Since I’m developing on Unity, I’ll be covering Cloud Anchors only.

Sharing anchors works by having one user first scan the environment. An anchor with a spatial mapping defining that physical space is created and hosted over the network. Subsequent users then scan that same environment in an attempt to match up their perspective of the environment to that of the host. Once the distinct visual features captured by the camera are matched up with the spatial mapping of the host anchor, the anchor is reproduced and planted into the environment.

Share AR Experiences with Cloud Anchors

Now all users in the same environment share the same anchor and therefore a common coordinate system to place and move virtual content.

Here are some examples to give you an idea of how it works.

The following is a Unity sample app provided by Google.

  • 00:01 P1 (right) scans the environment and places an anchor. The Android bot game object is automatically instantiated at the anchor’s transform (position, rotation, and scale) to visually represent the XYZ coordinate system of the anchor. A room number is also created for this AR session.
Note: The Android bot is facing backwards (negative Z-axis).
  • 00:06 P2 (left) clicks on the Resolve button to join the AR session using the room number provided by P1, along with P1’s IP address, which the room number is associated with. This is needed since any one user can create multiple rooms for the same physical environment.
  • 00:25 P2 moves around to scan the environment in an attempt to resolve the anchor. As distinct visual features are captured by the camera and uploaded to the cloud, the server attempts to match them up with the spatial mapping of the host anchor.
  • 00:36 Only when P2 moves into roughly the same position P1 was in when P1 hosted the anchor does the server finally match up the visual features of the environment to the host anchor’s spatial point map. The host’s anchor is resolved and placed into the environment. An Android bot is instantiated at the anchor’s transform. Notice that the Android bot faces the same direction in physical space as the Android bot on P1’s phone.

Here’s a live demonstration of a game using Google Cloud Anchors from the Google I/O ’18 event. Start at the 29:50 mark.

  • 29:50 James (blue shirt) scans the floor and hosts an anchor. A game marker is instantiated in his game (not shown). Because James has already created a spatial mapping of his environment, he’s free to move around and place his game marker farther down the carpet, as will be shown later at 30:15.
  • 30:10 Eitan (black shirt), while standing in the same position and orientation as James when James first hosted the anchor, scans the floor to upload distinct features of the same environment. Eitan waits for the server to resolve the anchor.
  • 30:15 The anchor is successfully resolved on Eitan’s screen and the game starts. Each player’s game marker is instantiated into the scene. Because they’re also using a real-time backend service, both players can see each other’s game markers move live.

By now, you’re probably thinking: That’s great, but how do I actually implement the anchor in Unity?

This is the fun part, and it’s actually quite straightforward. Once you’ve hosted or resolved a cloud anchor in your game, you treat the anchor as the new world coordinate system by parenting all instantiated game objects—enemies, bullets, buildings, etc.—to the anchor’s transform. Then, using a real-time backend service, you share each game object’s relative transform, such as its transform.localPosition and transform.localEulerAngles, with the other players in the session. Because the anchor’s transform represents the same physical pose for all players in the session, the placement and movement of every game object relative to the anchor is the same for every player.
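Here’s a minimal sketch of that idea. This helper class is hypothetical (it’s not part of Google’s sample), and it assumes you assign the anchor’s transform once the anchor has been hosted or resolved:

```csharp
using UnityEngine;

// Hypothetical helper illustrating the pattern: parent every spawned
// object to the anchor and share only its anchor-relative transform.
public class SharedObjectSpawner : MonoBehaviour
{
    // The transform of the hosted or resolved cloud anchor,
    // assumed to be assigned once the anchor exists.
    public Transform anchorTransform;

    public GameObject SpawnShared(GameObject prefab, Vector3 localPos, Vector3 localEuler)
    {
        // Parenting to the anchor makes localPosition/localEulerAngles
        // relative to the shared coordinate system.
        GameObject go = Instantiate(prefab, anchorTransform);
        go.transform.localPosition = localPos;
        go.transform.localEulerAngles = localEuler;
        return go;
    }

    // The values you'd serialize and send through your real-time backend.
    // Because every player shares the anchor, they mean the same thing
    // on every device.
    public void GetNetworkState(GameObject go, out Vector3 pos, out Vector3 euler)
    {
        pos = go.transform.localPosition;
        euler = go.transform.localEulerAngles;
    }
}
```

On the receiving end, each player applies the incoming localPosition and localEulerAngles to their own copy of the object, which is likewise parented to their resolved anchor.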


Diving into the Code

You can get started by following the instructions on the Google ARCore page for using Cloud Anchors in Unity.

Once you’ve added the GoogleARCore package to your Unity project, go into your project folder and open the example scene called CloudAnchor. You’ll find the CloudAnchorController game object, which holds the CloudAnchorController.cs script. This script handles the bulk of the work for hosting and resolving anchors.

Hosting

public void OnEnterHostingModeClick () {}

This method is called when the user clicks on the Host button. A randomly generated room number is created and assigned to this AR session. The current mode is set to ApplicationMode.Ready so that, in the Update() function, the host user can touch the phone screen to place an anchor in the environment. Once an anchor is placed, the following method is called to upload that anchor to the cloud.

private void _HostLastPlacedAnchor () {}

During this step, the CloudAnchorController submits the anchor to the server along with the room number and the IP address of the host player. If successful, the anchor is saved into the cloud with a unique cloud anchor ID associated with the IP address and room number.
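The core of the hosting step uses the SDK’s cross-platform XPSession API. Here’s a simplified sketch (the sample’s room/IP networking and full error handling are omitted, and the method name here is my own):

```csharp
using GoogleARCore;
using GoogleARCore.CrossPlatform;
using UnityEngine;

// Sketch of hosting a previously placed anchor to the cloud,
// based on the ARCore Unity SDK's cross-platform API.
public void HostAnchor(Anchor lastPlacedAnchor)
{
    XPSession.CreateCloudAnchor(lastPlacedAnchor).ThenAction(result =>
    {
        if (result.Response != CloudServiceResponse.Success)
        {
            Debug.LogError("Failed to host anchor: " + result.Response);
            return;
        }

        // result.Anchor.CloudId is the unique cloud anchor ID that the
        // sample associates with the host's IP address and room number.
        Debug.Log("Hosted anchor with cloud ID: " + result.Anchor.CloudId);
    });
}
```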

Since the IP address is a unique identifier for the host user, that means one user can create any number of rooms for the same physical space. Other users who want to join a specific AR session just need to have the room number of that session and the IP address of the host that the room belongs to.

Resolving

Any subsequent user that wants to join an existing AR session resolves the hosted anchor.

public void OnEnterResolvingModeClick() {}

This method is called when the user clicks on the Resolve button on the UI. The “Resolve Room” box pops up, prompting the user to type in the room number of the session to join and the IP address of the host.

public void OnResolveRoomClick() {}

Once the user finishes entering the room number and IP address and clicks on the “Resolve” button, this method is called to check whether the room number associated with that IP address still exists, since Google cloud anchors expire after 24 hours. If the room exists, the unique cloud anchor ID associated with that room’s anchor is extracted and passed into the following method.

private void _ResolveAnchorFromId (string cloudAnchorId) {}

During this step, the user scans the environment, capturing distinct visual features and sending them to the server. The server processes those features and attempts to match them against the spatial point map associated with the cloud anchor. Once successful, the cloud anchor is resolved and added into the scene.
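The resolving side mirrors the hosting call, again via XPSession. A simplified sketch (naming and error handling are mine, not Google’s):

```csharp
using GoogleARCore.CrossPlatform;
using UnityEngine;

// Sketch of resolving a hosted anchor: given the cloud anchor ID
// looked up from the room number, ask the service to match the
// current camera feed against the hosted anchor's point map.
private void ResolveAnchor(string cloudAnchorId)
{
    XPSession.ResolveCloudAnchor(cloudAnchorId).ThenAction(result =>
    {
        if (result.Response != CloudServiceResponse.Success)
        {
            Debug.LogError("Failed to resolve anchor: " + result.Response);
            return;
        }

        // result.Anchor is planted at the same physical pose as the
        // host's anchor; content parented to it lines up across devices.
        Debug.Log("Resolved anchor: " + result.Anchor.CloudId);
    });
}
```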


In my next post, I’ll show a couple of multiplayer examples using Google Cloud Anchors and GameSparks for the real-time services.