When Apple announced the iPhone back in 2007, many people bristled at the idea of a touchscreen. They’d tried touchscreen technology before: those horrible screens embedded in the back of airplane seats, which didn’t respond to touch correctly. That technology had been built the way you might build a desktop UI, even expecting a precise point of input, like a mouse cursor. I remember once watching a row of people delicately touching those screens with their fingernails, a technique they’d figured out was more successful at making them work. Innovation didn’t really arrive until the iPhone, when Apple set the standard for how a touch interface should actually work.
Augmented Reality is in a similar place today to where touchscreens were before the iPhone. It’s a new type of user interface, for which there are still no established design guidelines or technical frameworks, so when starting out it can be tricky to figure out how to build something that feels natural and works well.
I got into AR two years ago, initially developing a popular open-source library which pioneered location-based AR, and more recently starting Dent Reality to bring AR navigation to shopping malls and other large spaces. Over time, we’ve explored many concepts and ideas, developing some of the most viral and compelling examples of AR, from point-of-interest AR to retail AR.
I want to share some of what we’ve learnt about AR, and some of the high-level concepts we consider when building a compelling experience.
When people consider building an AR experience, they often picture the wearable AR glasses of the future. But compare the form factor and interaction model of mobile AR with wearable AR, and there are some key differences:
- Mobile AR doesn’t take up your entire view, it’s a viewport that you can move around.
- Its primary user interface is not AR, but standard touchscreen UI, with AR as more of a supportive element.
- The device is naturally held lower down, at around waist level, with the screen facing up.
As I get into some of these concepts, I’ll talk more about form factor, and how it can play into our design considerations.
For this post, I want to explore the potential for AR to enhance the museum experience. People visit museums to learn, and AR provides a great opportunity to elevate that visitor experience, enabling a deeper understanding of each exhibit. We’ll focus on a single artwork as our example.
AR enables us to display relevant content in a way that just isn’t possible in the physical world. This large painting is accompanied by a tiny card of text, which you’d have to lean in close to be able to read. So I want to consider how we could use AR to highlight some specific details, and tell a story about the painting’s background, while remaining unobtrusive and adapting well to the mobile form factor.
If a regular phone UI required us to pinch and pan around just to see elements such as buttons and text, it’d be a poor experience. Instead, content is displayed at a reasonable size on-screen, so naturally that we don’t even think about it. We should ensure our AR content is always legible too.
For this demo, and for the other work we do at Dent Reality, we built a constraints system which allows you to add a regular iOS UIView to the AR scene, and have it scale based on the user’s distance, so that it always displays at an appropriate size.
It could also be nice to go a step further, and have it scale more dynamically, to give a sense of distance. You could even disclose more detail as the user gets closer.
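To make that concrete, here’s a minimal sketch of the scaling maths involved, assuming a simple pinhole-camera model. The function names, the blend factor, and the distance thresholds are all illustrative assumptions for this post, not the actual Dent Reality constraints API:

```swift
// Constant on-screen size: with a pinhole camera, apparent size is
// proportional to worldScale / distance, so scaling linearly with
// distance keeps the rendered size constant. `referenceDistance` is the
// distance at which the content's natural size looks right.
func constantScreenScale(distance: Float, referenceDistance: Float = 1.0) -> Float {
    // Clamp so the content never collapses when the camera is right on top of it.
    return max(distance, 0.1) / referenceDistance
}

// A softer variant that keeps some sense of distance: blends between
// true perspective (constancy = 0) and fully constant size (constancy = 1).
func softenedScale(distance: Float,
                   referenceDistance: Float = 1.0,
                   constancy: Float = 0.8) -> Float {
    let constant = constantScreenScale(distance: distance,
                                       referenceDistance: referenceDistance)
    return 1.0 + (constant - 1.0) * constancy
}

// Progressive disclosure: reveal more detail as the viewer approaches.
// The thresholds (in metres) are made up for illustration.
enum DetailLevel { case beaconOnly, title, fullStory }

func detailLevel(forDistance d: Float) -> DetailLevel {
    switch d {
    case ..<1.5: return .fullStory   // close enough to read a story
    case ..<4.0: return .title       // mid-range: just a title
    default:     return .beaconOnly  // far away: only the pulsing beacon
    }
}
```

In practice you’d apply the scale to the view’s node each frame, using the distance from the camera to the content’s anchor.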
It’s also quite common for AR content to be oriented to always face the user as they move around, to maximise legibility. However, the choice is yours — I’ve seen many cases where it’s more appropriate, or more interesting, if it doesn’t. In this example, the pulsing radar beacons lie flat against the artwork.
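For the cases where content should face the user, the standard technique is billboarding. Here’s a minimal sketch, assuming rotation only about the vertical axis so the content stays upright (which usually reads better than a full look-at that tilts with the camera); `billboardYaw` is a hypothetical helper, not part of any framework:

```swift
import Foundation

// Yaw angle (about the Y axis) that turns content at `contentPos`
// to face a camera at `cameraPos`.
func billboardYaw(contentPos: SIMD3<Float>, cameraPos: SIMD3<Float>) -> Float {
    let toCamera = cameraPos - contentPos
    // atan2 over the horizontal (x/z) plane; height difference is ignored
    // so the content never tips over.
    return atan2f(toCamera.x, toCamera.z)
}
```

You’d recompute this each frame as the user walks around, and simply skip it for content that should stay fixed, like the beacons lying flat against the artwork.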
Screen Space vs World Space
When building AR experiences, I take a lot of inspiration from video games, where 3D game developers have been exploring many of these ideas and challenges for decades. 3D games and AR often have a similar paradigm:
- A world within which the player exists (in gaming, the world is virtual, but it’s often built to represent a physically realistic environment)
- Game elements, such as virtual arrows and pulsing points of interest, which are anchored within the world, with the purpose of providing visual cues to the player. Within game development, this is referred to as “world space” content.
- Regular on-screen UI, known as “screen space”, which is used for displaying health stats, and menus.
When building for AR, there’s a temptation to think that everything needs to belong within world space. However, it often makes more sense to use screen space for displaying detail. An advantage of mobile AR, vs wearables, is that we’re able to use the screen to display a wealth of information, without any concern about obscuring our vision (after all, we can still see the real world outside of the screen, with our eyes).
I’m also using focus as the input. It’s not easy to tap things on-screen with one hand while holding up the phone with the other. So instead, I’ve added what’s apparently called a “pipper” (a dot in the center of the screen to indicate our focus). I actually moved the pipper further up the screen, because that feels more natural than having it dead-centered.
When we focus on a pulsing beacon, and hold for a short period of time, it unfolds the content, displaying it in screen space. Then we can just tap anywhere on-screen to dismiss.
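The dwell logic can be sketched as a small state machine, fed the beacon under the pipper once per frame. This is an illustrative sketch under assumed names and timings, not our actual implementation:

```swift
import Foundation

// Dwell-to-select for the pipper: the beacon under the pipper must stay
// there for `dwellTime` seconds before it unfolds.
final class DwellSelector {
    private let dwellTime: TimeInterval
    private var focus: (id: String, since: TimeInterval)?

    init(dwellTime: TimeInterval = 0.8) {
        self.dwellTime = dwellTime
    }

    /// Call once per frame with the beacon currently under the pipper
    /// (or nil). Returns the ID of a beacon to unfold, or nil while
    /// still dwelling.
    func update(focusedBeacon id: String?, at time: TimeInterval) -> String? {
        guard let id = id else {
            focus = nil                  // pipper drifted off; reset the timer
            return nil
        }
        if let current = focus, current.id == id {
            return time - current.since >= dwellTime ? id : nil
        }
        focus = (id, time)               // a new beacon under the pipper
        return nil
    }
}
```

Once a beacon unfolds into screen space, the caller would stop feeding the selector until the tap-anywhere dismissal, so the same beacon doesn’t immediately re-trigger.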
There’s also an opportunity here for integrating audio guides. With current audio guides, a visitor would need to queue up the appropriate audio track, and inform the device as to what they’re looking at. With artwork recognition features in AR, that could potentially be triggered automatically.
Comfort as a Priority
When talking to people about the potential for AR, a common refrain I hear is “AR is bad, because people don’t want to walk around holding up their phones”. I agree — holding up a phone in front of you for a sustained period of time is uncomfortable, both for physical reasons and social reasons. So we shouldn’t try and force users into that behaviour.
People will hold up their phones momentarily, usually to take a photo or a selfie, and then lower it to a comfortable position as soon as they’re done. So how can we take advantage of that behaviour within our museum experience?
What if the visitor holds up their phone, it recognises the artwork, and then, when they lower the phone, we virtually re-create the painting just below where it actually is, with the same annotations in place to interact with? The visitor gets the same immersive experience in a more comfortable, natural posture, and can still refer back to the real painting in front of them.
These are just some of the ideas and concepts we’ve explored at Dent Reality on our journey to develop a breakthrough AR navigation experience. As more people are introduced to AR, it’s important that those experiences provide real value and are built with a great user experience in mind.
We’re open-sourcing this project — it’s on GitHub now. We’ve built a museum simulator so you can try it out at home. If you’re able to visit the Maritime Museum in Greenwich, London, you can run the app there and try this out in person.
If you represent a shopping mall or other large venue (airports, hospitals, campuses, museums..) and would like to be the first with our Indoor AR navigation technology, you can apply for Early Access on our website.
You can follow me on Twitter, where I frequently tweet my thoughts on AR, and share AR demos of our work.
As I was in the process of writing this, I came across this fascinating post from Facebook Design, covering some of the same concepts, plus some other ones more specifically related to VR.
There’s also this recent talk from Google I/O on their design considerations when building Google Maps AR. We’ve certainly thought about many of the same problems in our own work.