Craig Federighi at WWDC18

Apple steals the AR scene at WWDC18

Alberto Taiuti
Inborn Experience (UX in AR/VR)

--

This year’s WWDC was packed with AR-related content. Apple singlehandedly pulled the rug out from under some of the main AR startups working on persistence and content sharing, and more.

During the portion of the keynote dedicated to AR, Craig Federighi introduced ARKit 2 and showed off a list of new features coming with iOS 12.

By far, the one that caught my attention the most was persistence. That’s right: Apple announced it is working on persistence. Presumably, they are starting with small-scale persistence.

What does that mean and why is this important? If you are familiar with the AR cloud development scene, it means that Apple is entering the same market in which companies like Placenote, 6D.ai and Ubiquity6 are competing. Since Apple owns the ARKit platform, the experience will probably work very smoothly and be easy to develop with.

With this addition it will be possible to develop native applications which persistently place AR content in real-world scenes, and the possibilities enabled by this feature are endless. One use case mentioned during the keynote was placing persistent content for an interactive, AR-based classroom teaching app: kids could then use their ClassKit-enabled devices to interact with content directly and exactly where it is supposed to appear in the classroom.

Judging from the documentation, it is possible to save the world map and achieve full persistence. From Apple’s docs:

Persistent AR experiences. Save a world map when your app becomes inactive, then restore it the next time your app launches in the same physical environment. You can use anchors from the resumed world map to place the same virtual content at the same positions from the saved session.
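Based on the ARWorldMap API those docs describe, a save/restore round trip might look roughly like the sketch below. The file location and function names are my own, purely illustrative:

import ARKit

// Where to store the archived map; this path is my own choice for the example.
let worldMapURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldMap")

// Save the current world map, e.g. when the app becomes inactive.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: worldMapURL)
    }
}

// Restore the map on the next launch in the same physical environment.
func restoreWorldMap(into session: ARSession) throws {
    let data = try Data(contentsOf: worldMapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}

Anchors saved in the map come back with it, so content attached to them reappears in the same real-world positions.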

The second, very exciting feature announced was 3D object recognition. This means you will be able to use real-world, 3D objects as markers, similar to what is already possible with images (via ARImageAnchor) since iOS 11.3. Vuforia already offered this capability, but not as an iOS-native feature.

Lego showcasing 3D object recognition at WWDC18. Pic by Oscar Falmer

Lego showed an impressive demo of the 3D object recognition feature, using a real Lego house as the anchor for additional AR content. With this addition, a myriad of new apps which augment real objects become possible, and I look forward to finding out what the process is for loading a 3D model to use as an anchor. Presumably, one will be able to add the 3D model to their Xcode project, much as one already does for reference images.
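Going by the pattern ARKit already uses for image detection, object detection plausibly looks something like this sketch. The asset group name "LegoHouse" and the surrounding setup are my assumptions:

import ARKit

final class ObjectDetectionController: NSObject, ARSessionDelegate {
    // Load reference objects from an asset catalog resource group; the
    // group name "LegoHouse" is made up for this example.
    func runObjectDetection(on session: ARSession) {
        guard let referenceObjects = ARReferenceObject.referenceObjects(
            inGroupNamed: "LegoHouse", bundle: nil)
        else { fatalError("Missing expected asset catalog resources.") }

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionObjects = referenceObjects
        session.run(configuration)
    }

    // Detected objects arrive as ARObjectAnchor instances.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let objectAnchor as ARObjectAnchor in anchors {
            print("Detected object: \(objectAnchor.referenceObject.name ?? "unnamed")")
            // Place AR content relative to objectAnchor.transform here.
        }
    }
}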

Third, Apple also announced the ability to share AR experiences between phones, similar to what Google offers with Cloud Anchors. It showed a video demo of the feature and announced that an Xcode template project will be available soon to teach developers how to use the new ARKit API.
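ARKit itself does not appear to provide the networking layer; in the keynote demo the world map travels between devices over MultipeerConnectivity. A rough sketch of the sending side, assuming the multipeer session is already set up elsewhere:

import ARKit
import MultipeerConnectivity

// Serialize the current world map and send it to connected peers. The
// MCSession setup (advertising/browsing) is assumed to exist elsewhere.
func shareWorldMap(from arSession: ARSession, via mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

On the receiving device, the data would be unarchived and passed to a configuration’s initialWorldMap, just as in the persistence example above.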

All of this is topped off with a new file format, USDZ, which makes it easy to package an AR scene/experience in a single file. Think of it as an SCNScene on steroids: it encodes not only geometry, but transforms and animations too. This goes hand in hand with the newly added persistence, so that one can deliver packaged AR experiences to a specific location.
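USDZ also plugs into Quick Look, so a bundled model can be previewed in AR with very little code. A minimal sketch, where "model.usdz" is a placeholder file name:

import UIKit
import QuickLook

// Present a bundled USDZ file with AR Quick Look.
final class ModelPreviewController: UIViewController, QLPreviewControllerDataSource {
    func showModel() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
        return 1
    }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // NSURL conforms to QLPreviewItem, so the file URL can be returned directly.
        return Bundle.main.url(forResource: "model", withExtension: "usdz")! as NSURL
    }
}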

Overall, ARKit 2 looks really exciting, and the development community will come up with amazing use cases for the features it introduces. Apple showed that it understands what is necessary to achieve immersive AR experiences and fully grasps the potential of augmented reality, and it is going all in.

I am genuinely excited about what can now be created with iOS 12 and ARKit 2, and I look forward to getting my hands on the new tools.
