The Unanticipated Usefulness of Smartphone AR

Anthony Maës
6D.ai
Apr 4, 2019 · 7 min read
Visualization of the fields of view of the 8 cameras, 12 ultrasonic sensors and long range radar used by Tesla's Autopilot system.

Over a year after leaving my role at Tesla, it still makes me smile to see Model 3s in the streets of my small San Francisco hill. Those cars carry no fewer than 8 cameras, which, in Elon Musk’s mind, puts them one software update away from becoming full-fledged self-driving cars.

All those Model 3s in my neighborhood are obviously parked manually. Their wheels are carefully turned toward the curb, as the city’s traffic code requires on those steep streets. They don’t block driveways or fire hydrants. And for the most part, they avoid the two-hour weekly street cleaning window and the $76 fine for overstaying.

Walking home, I was musing about how my former coworkers, and others in the industry, would go about designing the street parking piece of the autonomy puzzle: unleashing driverless cars into live traffic in search of a spot while avoiding the many pitfalls that would cost their owners ticket after ticket.

The car’s computer vision could parse all the clues left for humans (signs, curb paint, driveways, and so on) to infer where it can and cannot park. However, robots rarely outperform humans in sensing tasks, and humans aren’t even very good at following byzantine urban parking rules.

A more realistic approach is to map all the parking spots of the city, and the rules for each of them, leaving sensing with the simpler task of safely driving to the nearest available spot.

Tech companies prefer to own, if not collect, map data themselves. But not everything can be captured with LiDAR vehicles and neural networks. Driving rules, local exceptions to those rules, vehicle-type restrictions, and temporary detours must be captured and encoded manually. And who better than transportation departments to collect, update, and distribute digital maps of their road infrastructure?
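Concretely, a per-spot record in such a map would bundle a location with its rules. Here is a minimal sketch in Swift; every field and type name is our invention for illustration, not an actual city or 6D.ai schema:

```swift
import Foundation

// Hypothetical record for one curbside parking spot.
// Field names are illustrative assumptions, not a real standard.
struct ParkingSpot: Codable {
    let id: UUID
    let latitude: Double
    let longitude: Double
    let curbColor: String?          // e.g. "red", "green"; nil if unpainted
    let maxStayMinutes: Int?        // nil if unrestricted
    let streetCleaning: CleaningWindow?
    let blocksDriveway: Bool
}

// Weekly street cleaning window, like the two-hour one mentioned above.
struct CleaningWindow: Codable {
    let weekday: Int                // 1 = Sunday … 7 = Saturday
    let startHour: Int              // local time, 24-hour clock
    let endHour: Int
}
```

Even a toy schema like this makes the maintenance problem obvious: every field can change with a repaving project or a new ordinance, which is exactly why the data needs constant field verification.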

In places like San Francisco, that data already exists to some extent, enabling dynamic pricing for parking meters. It is in the public’s interest to improve the safety and effectiveness of autonomous vehicles, allowing them to act predictably like trains on tracks rather than rock-dodging Mars rovers.

We’re talking about a huge wealth of data, constantly evolving with construction and transit improvements. Collecting and maintaining such location-based data will require a crowd-sourced model, with a tool that lets field workers efficiently compare it with reality and amend it on the spot when needed.

Visualization of Lyft Level 5’s map for autonomous driving

Where Mobile AR Comes In

This is a perfect fit for the next generation of smartphone AR powered by 6D.ai, where off-the-shelf devices can capture rich 3D spatial data, individually or collaboratively, and overlay a 3D virtual representation of infrastructure on top of the world, making errors stand out and easy to correct. 6D.ai makes this possible by outperforming GPS with its relocalizer technology.

It is easy to imagine similar possibilities in infrastructure projects, surveying, and transit operations: anything for which location-based data is fundamental and hard to manage from an office, where every update takes a long back-and-forth with field workers.

There are now two fundamental questions creators will have to ask themselves before building an AR experience:

How long is the user kept in AR mode?

How does the AR experience improve over the existing experience, if at all?

Answering them should help define the scope more precisely and predict whether smartphones are an appropriate platform for it.

Where Smartphone AR Works

The elephant in the room is the UX constraint of holding a mobile device and experiencing reality through a small screen. Disappointingly, if we learned anything from the past ten years, it is this: “magic window” AR doesn’t enable the compelling long sessions of immersive entertainment we all dream of. The good news is, it doesn’t need to, and we’re barely scratching the surface of what’s possible.

A small aside: some people like to call immersive entertainment like 3D animations and games “mixed reality,” and narrow “augmented reality” down to professional tools and information overlays. I call them both AR, since they’re fundamentally the same technology, as opposed to virtual reality, which erases reality altogether.

Niantic’s Pokémon Go with AR+ mode

Entertainment AR on smartphones needs to be short, one-off, and fun. A common UX design pitfall is ignoring the distraction of holding a phone and walking around: every time the camera view shows up on screen, an invisible countdown starts in the user’s subconscious.

Currently, users are ready to put in about as much time and effort as a selfie takes before fatigue and the desire to move on to something more relaxing make them hit the back button: probably less than a minute. However, user behavior can stretch for a compelling reward, as shown by Pokémon Go players walking (and even driving) to progress in the game.

Snapchat’s 3D Bitmojis, IKEA’s AR furniture preview, and Pokémon Go’s AR+ mode are all good examples of such bite-size novelty immersion. Notice that they’re secondary to their app’s core experience. Which is okay! An all-AR app is probably impractical, as it would only stay open for a minute at a time.

Wingnut AR demoing iPad AR at Apple WWDC 2017

On the other hand, high-intensity action games like the ones showcased at Apple events are difficult to pull off on mobile. The augmentation requires a significant amount of open space to fit the game map in a way that makes units large enough to see and control. On top of that, players have to move around a lot to navigate that map, awkwardly slouching, seeing only as much as their screen covers, and bringing their eyes closer to it for immersion. It’s a cool tech demo, but as clunky as surfing the web with a Wii remote.

The Mobile Enterprise… Again

We’ve been talking about the mobile enterprise for years, and the theme is recurring with AR: smartphone AR’s first significant adoption is taking place in professional environments, for infrastructure field work. With benefits spanning productivity, scale, and cost-effectiveness, workers are willing to work through the small user-experience issues of holding a phone on one condition: that it bring a substantial productivity improvement over the existing workflow.

Apple’s Measure app for iOS
  • Tape measure apps are a great and simple example, improving over the physical tool in ease and speed of use, unlimited length, reach to high places, and the ability to keep many measurements on screen at the same time (see the sketch after this list). They come with a caveat on precision and accuracy, but they keep an advantage for high-volume work that tolerates some error, like measuring a floor plan in order to lay pipes or cables.
  • By contrast, use-cases like indoor navigation with cute characters floating along the path are frequently prototyped in smartphone AR, but they often fail to improve over 2D maps while making the experience significantly worse by forcing users to walk holding their phone in front of them. As emphasized earlier, such an augmentation has little intrinsic value on its own, but it can enrich a larger app when designed thoughtfully to give quick visual context, analogous to what Street View is to Google Maps.
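Under the hood, a tape measure app reduces to a little linear algebra: the user taps twice, the AR framework hit-tests each tap against detected surfaces, and the app measures the straight-line distance between the two resulting world positions. A minimal sketch in Swift, assuming ARKit-style 4×4 world transforms (the function name is our own):

```swift
import simd

// Given the world transforms of two tapped points (e.g. from ARKit
// hit tests against detected planes), return the straight-line
// distance between them in meters.
func measuredDistance(from a: simd_float4x4, to b: simd_float4x4) -> Float {
    // The translation component of a 4x4 transform is its last column.
    let pa = simd_float3(a.columns.3.x, a.columns.3.y, a.columns.3.z)
    let pb = simd_float3(b.columns.3.x, b.columns.3.y, b.columns.3.z)
    return simd_distance(pa, pb)
}
```

The math is trivial; the accuracy caveat above comes entirely from how well the underlying surface detection and tracking pin down those two transforms.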

AR glasses may eventually relieve some of smartphone AR’s limitations by making long sessions more comfortable and immersive, but they will initially come with trade-offs in battery life, cost, and computing power.

Ironically, smartphone AR is the best platform for prototyping immersive concepts intended for AR glasses, provided it is capable of high-fidelity spatial mapping, as made available by 6D.ai: it’s much cheaper than HoloLens or Magic Leap units, especially for testing multiplayer applications, and comes with a much richer developer tool set. A prototyping tool like Torch (a 6D.ai partner) makes it easier still.

There is also definitely room for beautiful indie experiments, the same way Chrome Experiments paved the way for today’s sleek interactive articles.

Conclusion

Smartphone AR is making an impact on businesses today, and 6D.ai is involved in multiple pilot projects across a variety of verticals, building practical, useful AR features that complement each app’s core UX while ticking all the boxes of a successful use-case.

The key to mobile AR adoption is short, easy sessions providing extra value within the already familiar experience of an app, or a big improvement over existing workflows for professional use-cases. 6D.ai enables both with multiplayer, meshing, and cloud services.

We’re barely scratching the surface of possibilities, and we hope to inspire developers and creators by giving them a new perspective into the motivations of users and the technology’s productivity potential.

Have an idea? Join the 6D.ai beta and start building today!
