Taking the friction out of Mixed Reality (VR & AR)
In October last year I wrote a submission for the W3C VR Workshop entitled “How can we ensure WebVR is not an off-ramp for the web?”. My goal in that post was to call out our view that “spatial” or “immersive” computing (e.g. VR & AR) appeared to be developing into a silo or standalone experience. By contrast, we believe we should be thinking of the modern web as one integrated and multi-modal information space — not “legacy pages” over here and shiny new “immersive scenes” over there.
One great outcome from the W3C workshop was a consensus that WebVR should be thought of as an extension to Progressive Enhancement. Yet I was left feeling this didn’t go far enough. A lot of people expressed interest in the idea I raised of “one information space” — but a lot of people also said they found it difficult to clearly visualise how this might work in practice.
In this post I’d like to present our latest UX innovation which provides an elegant, yet tangible example of this “one information space” concept. In this post I’d also like to set this innovation in the broader context of “removing friction from Mixed Reality”. And it’s here that I’d like to start.
If you’re not familiar with Mixed Reality, this is simply the continuum upon which Virtual Reality and Augmented Reality sit. Here is a basic demo video showing Milgram’s Mixed Reality Continuum all running in a web browser using our open source awe.js library.
NOTE: The marker-based tracking presented here will soon be replaced with the Natural Feature Tracking demo we presented at Augmented World Expo recently — again, all running in a browser. This will become part of our commercial awe.media platform. Combine this with the new announcement from Apple and you can see that Mixed Reality on the web is now a “real reality”.
The promise of Mixed Reality is pure magic: immersive and pervasive information that shows us a better world. But when we translate that promise into today’s technology, the hurdles you must clear to reach these Mixed Reality experiences can be formidable.
In one mainstream “AR Browser” that shipped as a downloadable mobile app, we found there could be up to 27 individual steps from a user “finding out there’s an AR experience” they can try, through to actually “using that experience”. This is a long way from the magical and pervasive promise we all imagine for Mixed Reality.
Over the last seven years our focus has been on how Mixed Reality experiences can be created and delivered within standard web browsers, across a wide range of devices.
One of the key reasons we’ve pursued this strategy so doggedly is simply because the web is built upon the concept of the “link”. When Tim Berners-Lee first mapped out a hypertext system for CERN, it was clear how important links were in removing friction.
With our awe.media platform you can simply visit https://try.awe.media in your browser, set up a free account and get started creating right away. This is not just me plugging our service — it also demonstrates how we’ve removed the friction for “creating” Mixed Reality.
Then when you’re ready, you can share your published Mixed Reality experience with a single web link which can be opened by over 3 billion capable browsers. No going to an app store. No downloading or installing an app. No searching for the channel or layer of information you want. Just one direct web link that deep links to exactly the part of your experience you want to share. This is a critical part of how we removed the friction for “viewing” Mixed Reality, cutting those 27 steps down to 1.
So now you’ve got the ability to seamlessly share your creation with the world. But what will you actually put in your web-based Mixed Reality creation, and how will your end users experience it?
As we discussed above, Mixed Reality is a broad continuum. Our platform lets you create views that fill that Mixed Reality continuum, but for this example we’ll focus on Locative AR.
By “Locative AR” I mean an awe app that enhances your camera’s view of the world around you, by overlaying digital and interactive content. This content is laid out based on where you are in the world, creating an illusion that the content is mapped to the real world around you.
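To make the “laid out based on where you are” idea concrete, here is a minimal sketch of the geometry that underpins Locative AR: given the user’s geolocation and a content item’s coordinates, compute the distance and compass bearing used to place the overlay around the camera view. This is an illustrative sketch, not the awe.js internals — the function names here are my own.

```javascript
const EARTH_RADIUS_M = 6371000;

function toRadians(deg) { return deg * Math.PI / 180; }

// Haversine great-circle distance in metres between two lat/lon points.
function distanceMeters(lat1, lon1, lat2, lon2) {
  const dLat = toRadians(lat2 - lat1);
  const dLon = toRadians(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) *
    Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Initial bearing from the user to the content, in degrees from north.
function bearingDegrees(lat1, lon1, lat2, lon2) {
  const p1 = toRadians(lat1), p2 = toRadians(lat2);
  const dLon = toRadians(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(p2);
  const x = Math.cos(p1) * Math.sin(p2) -
            Math.sin(p1) * Math.cos(p2) * Math.cos(dLon);
  return (Math.atan2(y, x) * 180 / Math.PI + 360) % 360;
}

// In the browser you would feed this from the standard Geolocation API:
// navigator.geolocation.watchPosition(pos => { /* re-place overlays */ });
```

An overlay due east of the user comes back with a bearing of 90°, so the scene can render it 90° clockwise from north around the camera.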
Now this is where the “one information space” concept comes in. It’s reasonable for people to focus on the “spatial” presentation of information, as that’s the new and novel part here. But we believe this should be just one of at least 3 key modes:

1. An HTML view. This presents HTML-based information about the current view. This could be anything from a simple list of the digital content in the current view, through to a rich HTML5 web app that lets you capture and edit content, or more.

2. An immersive view. This presents a 3D VR or AR scene in either a single “mono” view (e.g. mobile, tablet, computer) or a “stereo” view (e.g. Cardboard, HMD, smartglasses), where the virtual camera’s pose is updated “live” from the device’s orientation sensor data (e.g. the scene updates as you look around) to create the illusion of immersion.

3. A map view. This presents a 2D map of the digital content in the current view.
These 3 modes allow you to explore the same information, presented using the mental model that best suits you in your current situation.
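For the immersive mode, the “camera pose updated live by orientation sensor data” part maps fairly directly onto the browser’s `deviceorientation` event. Here is a rough sketch (not the awe.js implementation) of how a sensor reading could drive a virtual camera; the browser reports alpha, beta and gamma in degrees, while 3D scenes typically want radians.

```javascript
function toRadians(deg) { return deg * Math.PI / 180; }

// Convert a deviceorientation reading into Euler angles for a camera.
function cameraPoseFromOrientation({ alpha, beta, gamma }) {
  return {
    yaw: toRadians(alpha),   // rotation about the vertical axis (compass)
    pitch: toRadians(beta),  // looking up / down (front-back tilt)
    roll: toRadians(gamma),  // tilting the device sideways
  };
}

// Browser wiring (skipped outside a browser environment):
if (typeof window !== 'undefined') {
  window.addEventListener('deviceorientation', (e) => {
    const pose = cameraPoseFromOrientation(e);
    // e.g. hand pose.yaw / pose.pitch / pose.roll to your 3D engine's camera
  });
}
```

As you look around, each sensor event re-poses the camera, which is what creates the illusion that the scene is anchored to the world around you.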
But best of all, we’ve gone a step further and removed the friction here too.
In the bottom left corner of every awe app you’ll see a small orange circle we call the “doodat”. If you select this on a mobile device you’ll see it opens up to show a menu. The bottom panel on the left of this menu has 3 options that control your view. You can choose the “live” mono view or the “stereo” view as discussed above. And the third option is labelled “switcher”.
If you select this, you turn on the new “motion switcher” and now you can seamlessly navigate through these 3 different modes, blending them into one information space as you see fit. All you need to do is change the orientation of your mobile device.
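The core of the idea can be sketched as a simple classifier over the device’s front-back tilt (the `beta` angle from `deviceorientation`, in degrees). The thresholds and the tilt-to-mode mapping below are my own illustrative guesses, not the values the awe.media platform actually uses.

```javascript
// Hypothetical motion-switcher logic: classify device tilt into a mode.
// beta ≈ 0   -> device held flat      -> top-down map view
// beta ≈ 45  -> tilted toward you     -> HTML view
// beta ≈ 90  -> held upright          -> live camera AR view
function modeForTilt(beta) {
  if (beta < 30) return 'map';
  if (beta < 60) return 'html';
  return 'live';
}

// Browser wiring (skipped outside a browser environment):
if (typeof window !== 'undefined') {
  window.addEventListener('deviceorientation', (e) => {
    const mode = modeForTilt(e.beta);
    // switch the app's active view to `mode` here
  });
}
```

In practice you would also want some hysteresis around the thresholds so the view doesn’t flicker between modes when the device is held near a boundary angle.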
Below is a simple infographic that shows you how the 3 modes and orientations work.
Of course, it’s one thing to see a concept like this laid out in a diagram. What you really want to see is how it works in practice. Below is a video that shows the “motion switcher” working on a standard Android device using a standard version of Chrome. No plugins, downloads or installs required. And you can even configure this feature to be on by default if you want.
Now you can see what Mixed Reality without the friction really looks like!
Tap on a link — BOOM!
Change your orientation — BOOM! BOOM! BOOM!
You can literally weave your own multi-modal Mixed Reality as you move about the world.
But don’t just look at the infographic and watch the YouTube video. Give it a go for yourself. Here’s a quick overview of how easy it is to create Locative AR for the web.
Now you can try out the “motion switcher” using your own content and ideas. It’s free to try too. We wouldn’t want any friction there either!
And of course, we’ll have updates coming soon about our new Natural Feature Tracking solution and more. Follow us on Twitter or like us on Facebook to stay tuned...