Leveraging ARCore Cloud Anchors API by Google to bridge the digital and physical worlds

How Google’s ARCore Cloud Anchors API can power your augmented reality experiences and take them to the next level.

Dean Giddy
XRLO — eXtended Reality Lowdown
7 min read · Dec 15, 2021


Google’s multi-user cloud anchor flow. Credit: Google

Google’s ARCore Cloud Anchors API offers a means by which digital content can be easily anchored to the physical world, bridging the gap between physical spaces and digital twins and taking us a step closer to the “Metaverse”.

What are Cloud Anchors?

The ARCore Cloud Anchors API is Google’s implementation of spatial anchors for augmented experiences. Spatial anchors are small data points that describe a user’s physical surroundings, captured from the device’s camera feed. From this information (feature points, colour, vertex and depth data gathered by the device’s sensors), a 3D map of the surroundings is generated and stored in the cloud. Another user can then compare their view of the world, and the feature points it contains, against those stored in the cloud, aligning digital content to the physical world with centimetre accuracy! The result is positional parity between the two devices relative to the physical world.

How do they work?

For this, I am going to link to the official Google documentation because it does an excellent job of explaining it from a high level.

Essentially, the Cloud Anchors flow is broken down into two steps: Hosting and Resolving.

Hosting

In order to digitally pin content to a physical location, an anchor first has to be hosted. Hosting is the process of collecting feature information and instructing the ARCore Cloud Anchors API that you are placing an anchor there. To establish and host an anchor, feature points and camera feed information have to be sent to the cloud. To do this, the device’s rear camera must map the environment in and around the centre of interest from different viewing angles and positions. The ARCore Cloud Anchors API then creates a 3D feature map of the space and returns a unique Cloud Anchor ID to the device.

Resolving

When another user in the same environment points their device’s camera at the area where the Cloud Anchor was hosted, a resolve request causes the ARCore Cloud Anchors API to periodically compare visual features from the scene against the 3D feature map that was created. ARCore Cloud Anchors API uses these comparisons to pinpoint the user’s position and orientation relative to the Cloud Anchors.

Cloud Anchors are also persistent, which means that a user can leave and come back to the same physical location; re-scanning the environment prompts a resolve on their device, and the digital content is aligned once again.

Multiple devices viewing the same augmented content at different times of the year thanks to GCA. Credit: Google

ARCore Cloud Anchors and Expo 2020 Dubai

We had the opportunity to work with Expo 2020 Dubai to offer an enriching digital experience for millions of on-site and remote visitors. Expo Dubai Xplorer sets a new standard for accessibility and inclusivity at large events and expands the World Expo’s vision of ‘Connecting Minds and Creating the Future’ into the Metaverse.

By using ARCore Cloud Anchors for hundreds of location-specific activations around the site, visitors experience entertaining and educational AR content relevant to wherever they are at Expo 2020 Dubai. AR activations including mysterious portals to far-off places, treasure hunts through history, and magical creature encounters are all aligned to real-world locations with centimetre accuracy. This is currently one of the largest deployments of ARCore Cloud Anchors in the world.

Check out just a teaser of one of the awesome geospatial activations that were powered by Google ARCore Cloud Anchors! More on Expo Dubai Xplorer here.

What we achieved for Expo 2020 Dubai was on a massive scale (using Unity) and took years. Here’s something you can have a go at mastering in a much shorter timeframe!

ARCore Cloud Anchors API in Unreal

Google has developed ARCore to open up its augmented reality tools and platforms for developers to leverage in real-time engines. Both Unity and Unreal offer plugins for ARCore development.

In Unreal, Google also offers a separate plugin called ARCoreServices that works in tandem with both Android and iOS devices. It contains a library of tools additive to ARCore’s, and this is where we find the ARCore Cloud Anchors API implementation.

The plugins can be found in the Plugins settings of Unreal Engine 4.20 and above (depending on the ARCore version; prerequisites can be found here).

To demonstrate the ARCore Cloud Anchors API in Unreal, let’s build an application that allows users to decorate their room with floral items.

This example project was developed in UE4.26 with a Samsung Galaxy S9+. ARCoreServices is a multi-platform plugin, so it supports both iOS and Android applications.

For the purpose of this article, I will be skipping over the basics of getting an ARCore project up and running in Unreal. You can find great documentation on it here, or I would advise using the AR Template offered as one of the Unreal starter projects.

Similarly, for simplicity, let’s also work in blueprints for a flow that’s easier to understand.

Getting started

The first thing we need to do is set up the AR session and config objects required to access all the properties and information we need from the device. I created a handy function that takes care of the necessary AR initialisation and called it on BeginPlay() inside a Pawn class named BP_ARPawn.

This will start our AR session feed with the corresponding config options and configure ARCoreServices for us (which opens up Cloud Anchor support).
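If you prefer to see that flow in code, here is a minimal C++ sketch of the same initialisation, assuming a hypothetical AARPawn pawn class with a SessionConfig (UARSessionConfig*) property assigned in the editor; the GoogleARCoreServices names below follow the UE4 plugin headers, but treat them as approximate and verify against your engine version:

```cpp
#include "ARBlueprintLibrary.h"
#include "GoogleARCoreServicesFunctionLibrary.h"

void AARPawn::BeginPlay()
{
    Super::BeginPlay();

    // Start the AR session using our config asset (plane detection, world
    // alignment, etc. are set on the asset itself).
    UARBlueprintLibrary::StartARSession(SessionConfig);

    // Enable the Cloud ARPin service so that host/resolve calls are permitted.
    FGoogleARCoreServicesConfig ServicesConfig;
    ServicesConfig.bARPinCloudServiceEnabled = true; // field name per the plugin source
    UGoogleARCoreServicesFunctionLibrary::ConfigGoogleARCoreServices(ServicesConfig);
}
```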

To give the application some interaction, a very simple widget user interface was created to allow the user to select from some items to decorate and personalise their space.

Now we can move on to the hosting and resolving implementations! For hosting, there are a couple of methods we can leverage from the ARCoreServices plugin, and they play nicely with Unreal’s first-party ARBlueprintLibrary. The main one is this:

ARCoreServices Create and Host latent action

This node is a latent action, meaning the task joins a queue and spins asynchronously until a completed response is fired off. This is absolutely perfect for our hosting and resolving flows, as the time it takes to send feature information to the Google servers and for them to respond is indeterminate; we can simply let the latent action inform us when the response is back.

The node takes the ARPin that you wish to host; this is the building block of the anchor. For simplicity’s sake, I am going to create and attach each spawned foliage actor to an ARPin using the PinComponent() method from UE4’s ARBlueprintLibrary. This isn’t the recommended pattern: Google advises reusing Cloud Anchors where possible to save on multiple resolve calls from the device. For the purposes of this demo, however, it’s easier to create one per placement.

Convert a created actor/scene component into an ARPin to host

I will be utilising a simple version of AR hit testing to create the chosen foliage actor at the user’s touch; a neat guide to help with how to achieve this can be found here, and a sketch of this step follows below.
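Here is a rough C++ sketch of that hit test plus pinning step, assuming hypothetical FoliageClass (TSubclassOf&lt;AActor&gt;) and PendingPin (UARPin*) members on the same pawn; only the UARBlueprintLibrary calls are engine API:

```cpp
void AARPawn::PlaceFoliageAtTouch(const FVector2D& ScreenPos)
{
    // Line trace against detected planes to find a real-world placement point.
    TArray<FARTraceResult> Hits = UARBlueprintLibrary::LineTraceTrackedObjects(
        ScreenPos,
        /*bTestFeaturePoints=*/ false,
        /*bTestGroundPlane=*/ false,
        /*bTestPlaneExtents=*/ true,
        /*bTestPlaneBoundaryPolygon=*/ true);

    if (Hits.Num() == 0)
    {
        return; // no tracked surface under the touch
    }

    const FTransform HitTransform = Hits[0].GetLocalToWorldTransform();

    // Spawn the chosen foliage actor at the hit location...
    AActor* Foliage = GetWorld()->SpawnActor<AActor>(FoliageClass, HitTransform);

    // ...and pin its root component so ARCore keeps it locked to the physical
    // spot as tracking refines. The returned ARPin is what we host next.
    PendingPin = UARBlueprintLibrary::PinComponent(
        Foliage->GetRootComponent(), HitTransform, Hits[0].GetTrackedGeometry());
}
```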

Check the response of the latent action and cache the anchor id if successful.
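In C++, the same behaviour can be sketched with the plugin’s non-latent helper, polling the returned UCloudARPin instead of waiting on a latent completion pin. PendingCloudPin (UCloudARPin*) and HostedAnchorIds (TArray&lt;FString&gt;) are hypothetical members, and the enum value names are approximate:

```cpp
void AARPawn::HostPendingPin()
{
    // Kick off the hosting task; the returned pin starts in an in-progress state.
    EARPinCloudTaskResult TaskResult;
    PendingCloudPin = UGoogleARCoreServicesFunctionLibrary::CreateAndHostCloudARPin(
        PendingPin, TaskResult);
}

// Call this from Tick until the hosting task settles.
void AARPawn::CheckHostingResult()
{
    if (PendingCloudPin &&
        PendingCloudPin->GetARPinCloudState() == ECloudARPinCloudState::Success)
    {
        // Hosting succeeded: cache the Cloud Anchor ID so we can save it later.
        HostedAnchorIds.Add(PendingCloudPin->GetCloudID());
        PendingCloudPin = nullptr;
    }
    // Error states (e.g. insufficient feature mapping) are omitted for brevity.
}
```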

Before we move on to the resolving flow, let’s talk about how to persist these anchors between application sessions. Typically, this would be handled by some secure server-backed infrastructure that you query when hosting and resolving anchors. To simplify this demonstration, however, I will be using UE4’s UGameplayStatics save/load game library, as it is both intuitive and easy to use (documentation can be found here). This is, of course, not a secure or recommended approach to persisting Cloud Anchors between application sessions; it is being used purely to demonstrate the power of persistent Cloud Anchors.

Store all the hosted anchor ids in an array and use the game slot system to pack them into a Save Game Object reference which will be saved onto the device.

After hosting our anchors, we cache the IDs in an array so that we can save them onto the device for persistent AR experiences. This way, the next time we open the application we can load the anchor IDs and resolve them straight away, with no need to re-place all of our AR content!
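As a sketch of that save/load step, here is a hypothetical UAnchorSaveGame class and the UGameplayStatics calls that write it to, and read it from, a slot on the device (the class and slot name are illustrative):

```cpp
// AnchorSaveGame.h
#pragma once
#include "GameFramework/SaveGame.h"
#include "AnchorSaveGame.generated.h"

UCLASS()
class UAnchorSaveGame : public USaveGame
{
    GENERATED_BODY()
public:
    // The Cloud Anchor IDs we cached after each successful host.
    UPROPERTY()
    TArray<FString> HostedAnchorIds;
};
```

And then, in the pawn:

```cpp
#include "Kismet/GameplayStatics.h"

void AARPawn::SaveAnchorIds()
{
    // Pack the cached IDs into a Save Game Object and write it to a slot.
    UAnchorSaveGame* Save = Cast<UAnchorSaveGame>(
        UGameplayStatics::CreateSaveGameObject(UAnchorSaveGame::StaticClass()));
    Save->HostedAnchorIds = HostedAnchorIds;
    UGameplayStatics::SaveGameToSlot(Save, TEXT("CloudAnchors"), 0);
}

void AARPawn::LoadAnchorIds()
{
    // Read the IDs back on the next launch, ready to resolve.
    if (UAnchorSaveGame* Loaded = Cast<UAnchorSaveGame>(
            UGameplayStatics::LoadGameFromSlot(TEXT("CloudAnchors"), 0)))
    {
        HostedAnchorIds = Loaded->HostedAnchorIds;
    }
}
```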

Our Cloud Anchor IDs are important, as they (along with the phone’s feature point information) are the only information needed to resolve our hosted Cloud Anchors! The cloud infrastructure handles the processing of the feature point information, compares it to the 3D mapping it holds, and returns the result to us for each Cloud Anchor ID.

Resolve anchor latent action from ARCoreServices

We can call the Create and Resolve Cloud ARPin latent action and pass in each of the Cloud Anchor IDs we have cached on the device. We then monitor the response from the latent action; if it’s successful, it means we managed to resolve an anchor in the same spot it was hosted! We can then spawn our chosen foliage geometry there to visualise the anchor. A C++ sketch of the whole resolve loop follows below.
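Here is that resolve loop sketched in C++, again using the plugin’s non-latent helper and polling from Tick. PendingResolves (TArray&lt;UCloudARPin*&gt;) is a hypothetical member, and the enum value names are approximate:

```cpp
void AARPawn::ResolveSavedAnchors()
{
    // Kick off a resolve for every ID loaded from the save slot.
    for (const FString& AnchorId : HostedAnchorIds)
    {
        EARPinCloudTaskResult TaskResult;
        if (UCloudARPin* CloudPin =
                UGoogleARCoreServicesFunctionLibrary::CreateAndResolveCloudARPin(
                    AnchorId, TaskResult))
        {
            PendingResolves.Add(CloudPin);
        }
    }
}

void AARPawn::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    for (int32 i = PendingResolves.Num() - 1; i >= 0; --i)
    {
        UCloudARPin* CloudPin = PendingResolves[i];
        if (CloudPin->GetARPinCloudState() == ECloudARPinCloudState::Success)
        {
            // The resolved pin carries the hosted pose: spawn the foliage there.
            GetWorld()->SpawnActor<AActor>(
                FoliageClass, CloudPin->GetLocalToWorldTransform());
            PendingResolves.RemoveAt(i);
        }
        // Failed resolves (ID not found, SDK too old, etc.) omitted for brevity.
    }
}
```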

I hope you’ve had a bit of fun trying this out!

3D Assets courtesy of Quixel
