Why Google Cloud Anchors doesn’t deliver on the hype

Alberto Taiuti
Inborn Experience (UX in AR/VR)
4 min read · May 10, 2018


At the Google I/O 2018 conference, Google announced the availability of its Cloud Anchors library.

Cloud Anchors enables local multiplayer experiences, but nothing more.

This library allows two AR-enabled smartphones (for now, until we also have glasses) to share an AR session, so that both users can interact with the same AR content, placed at the same physical location, effectively sharing the same scene.

This is achieved by Google running the relocalization and persistence features of ORB-SLAM 2 on its cloud servers. The apps send raw data captured by the devices, and Google's servers compute a shared transform for the two (or more) devices, which both phones then use as a common origin to place content against.
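To make the flow concrete, here is a minimal sketch of hosting and resolving an anchor with the ARCore iOS SDK's GARSession. The class and method names are my recollection of the 2018-era SDK, so treat them as assumptions and verify against the actual headers:

```swift
import ARKit
import ARCore  // GoogleARCore pod; module name may differ in your setup

final class CloudAnchorManager: NSObject, GARSessionDelegate, ARSessionDelegate {
    private var garSession: GARSession!

    init?(apiKey: String) {
        super.init()
        // Create the session that talks to Google's anchor servers.
        guard let session = try? GARSession(apiKey: apiKey, bundleIdentifier: nil) else {
            return nil
        }
        garSession = session
        garSession.delegate = self
        garSession.delegateQueue = .main
    }

    // Forward every ARKit frame so the GARSession can extract the visual
    // data it uploads for hosting and resolving.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        _ = try? garSession.update(frame)
    }

    // Host: uploads the surroundings of a local anchor; the server answers
    // asynchronously with a cloud identifier to share with other devices.
    func host(_ anchor: ARAnchor) {
        _ = try? garSession.hostCloudAnchor(anchor)
    }

    // Resolve: given an identifier hosted by another device, ask the server
    // for the corresponding transform in this device's session.
    func resolve(_ identifier: String) {
        _ = try? garSession.resolveCloudAnchor(withIdentifier: identifier)
    }

    // MARK: GARSessionDelegate

    func session(_ session: GARSession, didHostAnchor anchor: GARAnchor) {
        // Share this identifier (e.g. via your own backend) with the
        // other device so it can resolve the same anchor.
        print("Hosted; share this ID:", anchor.cloudIdentifier ?? "<none>")
    }

    func session(_ session: GARSession, didFailToHostAnchor anchor: GARAnchor) {
        print("Hosting failed")
    }

    func session(_ session: GARSession, didResolveAnchor anchor: GARAnchor) {
        // anchor.transform is the shared origin: both phones now place
        // content relative to the same physical pose.
        print("Resolved shared anchor:", anchor.transform)
    }

    func session(_ session: GARSession, didFailToResolveAnchor anchor: GARAnchor) {
        print("Resolve failed")  // expanded on below
    }
}
```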

One would think that this is huge: the AR cloud, a.k.a. persistence, is one of the most requested, discussed and anticipated features of AR, and companies such as Placenote, 6D.ai and BlueVision are trying to get there by bringing their own implementation of such a cloud to market. Some of these companies run ORB-SLAM 2 on the device rather than in the cloud as Google does, but the underlying principle is the same: because ARKit and ARCore don't give you access to the algorithm's feature points by default, you need to run your own custom version and integrate it on top of the vendors' existing AR libraries.

However, these libraries are still very much in beta, so the fact that Google has released its implementation, and that Cloud Anchors is also available on iOS, is, on the surface, just amazing.

Google has also released a sample application for iOS here; however, it is written in Objective-C (a Swift version would be welcome too).

Now, why isn’t all this great?

First of all, a bit of context:

Persistence, in the AR meaning of the word, is the ability to place content somewhere in an AR scene, as you already do in many apps, with the addition that if you come back to the same physical spot some time later, you find the content exactly where you left it. It's analogous to putting a poster up on a wall in real life and then going away: when you come back to the same wall, you'd expect the poster to still be there, right?

Well, this feature is not available by default in ARKit or ARCore, so people have come up with custom solutions, as I explained above.
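To pin down what that requires technically: a stored transform alone is useless across sessions, because ARKit and ARCore pick a fresh world origin every time they start. Any persistence layer therefore has to pair each pose with enough visual feature data to relocalize against. A purely illustrative sketch of that contract (every name here is hypothetical):

```swift
import Foundation
import simd

// Purely illustrative: the minimum a persistence layer has to provide.
protocol PersistentAnchorStore {
    /// Save the anchor pose together with the visual feature data
    /// (e.g. an ORB-SLAM 2 keyframe map) needed to relocalize later.
    /// The raw transform alone is meaningless in a future session.
    func save(id: String, transform: simd_float4x4, featureData: Data)

    /// On a later visit, relocalize against the stored features and
    /// return the pose expressed in the *current* session's coordinates,
    /// or nil if relocalization fails.
    func resolve(id: String) -> simd_float4x4?
}
```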

At the same time, those custom solutions fully deliver on persistence, whereas Google's Cloud Anchors doesn't, for the following reasons:

The anchor data can only be accessed within one day of creation

So if you create an anchor for yourself or others to use, and you come back to it a day later, it will be gone.

Say you place a 3D model of a chair somewhere in your house through AR and anchor it: after one day, it will no longer appear in the spot where you left it.
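In client code this means every resolve has to be treated as fallible. A small expansion of the failure callback from the sketch above, again assuming the 2018-era GARSessionDelegate callback exists under this name:

```swift
// Once the 24-hour window has passed, resolving a stored identifier simply
// fails, so this path is routine rather than exceptional.
func session(_ session: GARSession, didFailToResolveAnchor anchor: GARAnchor) {
    // Likely causes: the anchor expired (hosted more than a day ago), or
    // the raw feature data behind it has already been purged server-side.
    print("Resolve failed; the anchor may have expired.")
    // Fall back to re-placing the content manually, or to an image anchor.
}
```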

The raw data sent to the servers is deleted after 7 days

Hence not only will your anchor transforms disappear; you won't even be able to retrieve the raw data consistently.

At first I was very excited: I had been waiting a long time for someone to implement persistence on smartphones properly, and had even started implementing my own version.

However, in order to fully achieve persistence, we need to be able to consistently store and share feature-point data and retrieve it whenever needed. Without that, the only thing Cloud Anchors enables is local multiplayer, which could already be achieved by using a marker or image as a starting anchor, without the need to run ORB-SLAM 2 on servers and do fancy feature recognition.

I really hope that this is just the first step for Cloud Anchors, and that in the near future Google will allow anchors to be stored indefinitely and shared easily. Until then, you can achieve the same thing with image anchors, as sketched below.
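For reference, this is roughly all the setup a shared image anchor takes with ARKit's built-in image detection (ARKit 1.5+); the resource-group name below is a placeholder:

```swift
import ARKit
import SceneKit

// Minimal shared-origin setup using a printed marker that both users can
// see, instead of a cloud round-trip.
final class MarkerSharedOrigin: NSObject, ARSCNViewDelegate {

    func makeConfiguration() -> ARWorldTrackingConfiguration {
        let config = ARWorldTrackingConfiguration()
        // "SharedMarkers" is a hypothetical AR Resource Group in Assets.xcassets.
        if let markers = ARReferenceImage.referenceImages(
            inGroupNamed: "SharedMarkers", bundle: nil) {
            config.detectionImages = markers
        }
        return config
    }

    // Called when ARKit detects one of the reference images.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        // `node` tracks the physical image. Every device that detects the
        // same printed marker recovers the same physical pose, so content
        // attached here lines up across devices with no server involved.
        node.addChildNode(SCNNode(geometry: SCNSphere(radius: 0.05)))
    }
}
```

The trade-off is that every user needs line of sight to the same printed marker, but there is no server round-trip and no 24-hour expiry.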
