Augmented Reality and User Experience
What comes to mind when you hear "Augmented Reality"?
Minority Report? I think Iron Man. My mind pictures Tony Stark plucking holograms from mid-air, throwing them around and doing all kinds of cool stuff with them. I bet every single person who saw that movie wished they could interact with technology like Mr. Stark.
It has been almost ten years since the movie came out, and we are now playing host to the real emergence of Augmented Reality. We are actually capable of doing much of what Tony Stark did, albeit with the help of small computers strapped to our eyes. But as with most emerging technologies, AR has many problems that prevent quick and wide adoption.
For the last nine months, I have been part of a team at Arfront Technologies Inc that is actively trying to solve many of these problems and make augmented reality more accessible. The majority of my time was spent programming the user interface and network modules for an AR application. So I figured I would pen down the philosophy behind designing for AR, the programming challenges we faced, and how we resolved them.
Linking the many realities
The line between Augmented Reality (AR) and Mixed Reality (MR) is very thin. More and more people are ditching the term "Mixed Reality" in favor of "Augmented Reality". I'm one of those people who cling hard to the fading terminology and look at the two from different perspectives.
For the uninitiated, Augmented Reality is a layer of virtual objects that are not tethered to the objects or scene behind them. An example would be Google Glass, which simply displays your Facebook feed or new text messages from your friends. The overlay usually has nothing to do with what you see in front of you; it is often a wall of text or a 2D user interface that provides the user with extra information. Pokémon Go is another great example.
Mixed Reality is almost like Augmented Reality, but the objects are tethered to real-world space coordinates. An example would be the Microsoft HoloLens, which allows you to place virtual objects in the real world and then walk around them, examining them from every possible angle.
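That tethering is the crux of the AR/MR distinction, and it can be sketched in a few lines: the object keeps fixed world-space coordinates, and each frame we re-project it into screen space from the current camera pose. The function below is a toy pinhole-camera projection of my own, not code from any real AR SDK, and it ignores camera rotation for brevity.

```python
def project_to_screen(world_point, camera_pos, focal_length=800.0):
    """Project a world-space point into 2D screen space for a camera
    at camera_pos looking down the +z axis (rotation omitted for brevity)."""
    x = world_point[0] - camera_pos[0]
    y = world_point[1] - camera_pos[1]
    z = world_point[2] - camera_pos[2]
    if z <= 0:
        return None  # behind the camera: nothing to draw
    return (focal_length * x / z, focal_length * y / z)

# The hologram keeps the same world coordinates every frame...
hologram = (0.5, 0.0, 2.0)

# ...but its screen position is recomputed as the user walks around it.
print(project_to_screen(hologram, camera_pos=(0.0, 0.0, 0.0)))  # (200.0, 0.0)
print(project_to_screen(hologram, camera_pos=(1.0, 0.0, 0.0)))  # (-200.0, 0.0)
```

Untethered AR skips this step entirely: the overlay is drawn at fixed screen coordinates regardless of where the camera points.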
Many of our applications now contain a mixture of AR and MR, and the two are collectively referred to as Augmented Reality.
Designing AR/MR for better UX
Designing for AR and MR is similar in many aspects, and yet different. When we imagine a future with holographic displays, we imagine multiple screens hanging in the air, filled with graphs and raw data buzzing through every nook and corner. Well… at least that is how video games and movies portray our future.
However, when it comes to real life, minimalism is the key. Look at the interface of your phone, your computer, or even the very website on which you are reading this article. They show only what the user wants to see and hide everything else until the user requests it. While these mediums use the "less is more" approach for cleaner aesthetics or a better user experience, minimalism becomes almost unavoidable in AR and MR.
Transparency is another big factor when designing for AR. The display should leave enough transparent space for users to actually see what is going on in front of them. The blue neon glow so familiar from the movies, if replicated in AR, would cause a roaring headache within fifteen minutes of using one of these devices. So dark gray or pitch-black backgrounds are mostly used, because they render with a high level of transparency on these devices. Testing the device under varying light intensities, as well as different sources of light, is highly recommended when designing for AR.
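One way to act on those light-testing results is to adapt the overlay's styling to an ambient light reading, so the HUD stays readable without blocking the real scene. The thresholds and names below are hypothetical values I made up for illustration, not figures from any shipping device.

```python
def hud_style(ambient_lux):
    """Pick an overlay background alpha and text brightness for a
    see-through display, given an ambient light reading in lux.
    Thresholds are illustrative, not tuned for real hardware."""
    if ambient_lux < 50:        # dim room: a faint panel is enough
        return {"bg_alpha": 0.15, "text_level": 0.7}
    elif ambient_lux < 10_000:  # typical indoor / overcast outdoor light
        return {"bg_alpha": 0.35, "text_level": 0.9}
    else:                       # direct sunlight: maximize contrast
        return {"bg_alpha": 0.60, "text_level": 1.0}
```

The point is not the specific numbers but the shape of the logic: every lighting condition you test in is another branch (or curve) your styling code has to cover.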
Although the difference between AR and MR might seem quite small, designing for MR takes a completely different approach. Mixed Reality is all about camouflaging virtual objects as real-world objects. This requires the objects to have a very high degree of photorealism, as well as lighting that matches the real-world lighting; otherwise, they stand out like pineapple on a pizza. This can be hard to accomplish. On a brighter note, check out Apple's ARKit or Google's ARCore; they have made some very impressive strides in that direction.
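To give a feel for the lighting-matching problem, here is a deliberately naive sketch: estimate the real scene's brightness from the camera frame's average luminance and scale the virtual light to match, so rendered objects don't glow in a dark room or look dull in sunlight. Production SDKs such as ARKit and ARCore do something far more sophisticated (directional estimates, color temperature, environmental HDR); this only shows the core idea, and all function names are mine.

```python
def estimate_intensity(gray_pixels):
    """Mean luminance of a camera frame (pixel values 0..255),
    normalized to the range 0..1."""
    return sum(gray_pixels) / (len(gray_pixels) * 255.0)

def virtual_light_strength(gray_pixels, base_strength=1.0):
    """Scale the virtual scene light by the estimated real-world intensity."""
    return base_strength * estimate_intensity(gray_pixels)

dark_frame = [20, 25, 30, 22]        # stand-in for a dim camera frame
bright_frame = [200, 220, 240, 210]  # stand-in for a sunlit one
print(virtual_light_strength(dark_frame))    # ≈ 0.095
print(virtual_light_strength(bright_frame))  # ≈ 0.853
```

A virtual lamp lit at full strength inside the dark frame would be an instant giveaway; dimming it to roughly a tenth keeps the illusion plausible.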
Programming for AR and MR
Microsoft's HoloLens and Apple's ARKit do a great job when it comes to Mixed Reality. This can be attributed to their closed, proprietary ecosystems: they can optimize the application, and the underlying image recognition and localization code, for their specific hardware.
Arfront, Vuforia, and Wikitude (to name a few off the top of my head) are moving in another direction. They are trying to bring AR and MR to the masses by extending support to a multitude of hardware and making their technology cross-platform. Sounds cool, but it is hard work.
At Arfront, we had even more challenges. We are developing a video conferencing application called Sensei that integrates Mixed Reality. This means we have to worry both about making the performance-hungry AR/MR code run on less powerful hardware and about providing a smooth user experience in terms of network connectivity and video quality.
When targeting augmented reality glasses, we had to make sure the frame rate stayed high enough not to cause any visual anomalies. If a low frame rate or poor network connectivity makes the virtual objects jitter too much, the result is visual discomfort or a headache for the end user. A high frame rate keeps virtual objects tethered to their world-space coordinates, maintaining the illusion of reality. Although I cannot divulge intrinsic details, we made good use of multithreading and struck a delicate balance between quality and performance to make sure the user experience stays positive.
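One widely used way to damp that kind of jitter (a generic technique, not necessarily what we shipped in Sensei) is to exponentially smooth the tracked anchor position each frame, trading a little latency for stability:

```python
def smooth(prev, measured, alpha=0.3):
    """Exponential moving average over 3D positions.
    alpha near 1 trusts new measurements; near 0 smooths harder."""
    return tuple(alpha * m + (1 - alpha) * p for p, m in zip(prev, measured))

# An anchor nominally sitting at (0, 0, 2), with noisy per-frame measurements.
pose = (0.0, 0.0, 2.0)
noisy_measurements = [(0.02, -0.01, 2.01), (-0.03, 0.02, 1.98), (0.01, 0.0, 2.02)]
for m in noisy_measurements:
    pose = smooth(pose, m)
# The smoothed pose drifts far less than the raw measurements jump around.
```

The cost is that the object reacts a frame or two late to real motion, which is why `alpha` (and fancier filters like one-euro or Kalman) must be tuned per device and per use case.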
I would say that Mixed Reality is still in its infancy. I often see the R&D team at Arfront brainstorming solutions for problems caused by the lackluster visual fidelity of cameras or the less-than-impressive hardware in our budget phones. Hats off to them for coming up with innovative solutions to make Mixed Reality accessible to us plebeians.
On a final note
As our hardware improves, our smartphones grow more powerful, and our algorithms get smarter, we will hopefully see Mixed Reality in all its glory. Combine it with haptic feedback devices, and we might see a veil of illusion that starts to merge with our boring everyday reality.
To a virtual future, I make a toast.