The Triad of Technologies for Location-Based Mixed Reality

Denis Hurley
5 min read · Jul 22, 2016

--

Much has been written in the weeks following Pokemon Go’s explosive success, largely focusing on its augmented reality feature. While AR is a key element of the application’s popularity and technological significance, the successful implementation of the other components, spatial-contextual awareness and situated media, is just as exciting. This suggests the possibility of broader adoption of exponential technologies and more seamless interactions between the simulated and the real.

Irrespective of the advanced technological components, Pokemon Go’s well-executed product development and release were essential groundwork for the inclusion of innovative, infrequently used technologies:

  1. Screen-based mixed reality
  2. Spatial-contextual awareness
  3. Situated media

1. Screen-Based Mixed Reality

Mixed reality includes both virtual reality and augmented reality. Google and Samsung have recently had success using smartphones to power affordable VR experiences. Screen-based AR, used in Pokemon Go, is the kind of see-through augmented reality that uses a tablet or smartphone rather than a headset like Glass or HoloLens. Screen-based VR is a new concept, whereas screen-based AR has been built into mainstream applications for seven years. In August 2009, Yelp “snuck” the Monocle feature into its iOS application. Layar, a pioneer in augmented reality, had access to Android cameras sooner and was built into applications even earlier.

Yelp’s Monocle Feature

Since then, screen-based AR has struggled to gain popularity beyond face-swapping applications and the features added to image-sharing applications after MSQRD’s success. Now that Pokemon Go has gone viral, many companies with a vested interest in screen-based AR’s continued popularity tend to overstate this feature’s value at the expense of all else. In the long run, this is a dangerous approach: the higher the peak of inflated expectations, the deeper the trough of disillusionment. Screen-based augmented reality can add tremendous value with the right situation, the right user, and the right content. Similarly, when combined with location-based technologies, screen-based VR can be greatly enhanced. The fact that both are screen-based is significant: they are (for now) much more affordable and portable than the alternatives.

2. Spatial-Contextual Awareness

Like screen-based augmented reality, this technology has appeared in a growing number of applications as it has become increasingly feasible. However, fear of who else might gain access to our location has tempered excitement about the quality-of-life improvements it could introduce.

Spatial-contextual awareness combines the user’s location, activity, and surrounding objects to deliver relevant information or experiences. An excellent example is the new Nike+ running application. The app tracks the user’s route, heart rate, and pace; it records the weather, elevation, and time; and it lets the user save the shoes worn, overall feeling, the type of route, and additional notes. The most significant improvement in this application, in my opinion, is the auto-pause feature. For a runner who prefers street routes, manually pausing and unpausing is a hassle. Nike+ is tracking my movement, so it knows when I’ve stopped at a light and when I’ve begun running again.
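To make the idea concrete, here is a minimal sketch of how an auto-pause feature like this might work. It is not Nike’s actual implementation; the thresholds and names are hypothetical, and it assumes a stream of speed samples derived from GPS or sensor fusion:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float          # timestamp in seconds
    speed_mps: float  # estimated speed in meters per second

# Hypothetical thresholds -- a real app would tune these empirically.
PAUSE_SPEED = 0.5    # below this speed, the runner is considered stopped
PAUSE_AFTER_S = 3.0  # must stay below threshold this long before pausing

def auto_pause_events(samples):
    """Yield ('pause', t) and ('resume', t) events from a speed stream."""
    paused = False
    slow_since = None
    for s in samples:
        if s.speed_mps < PAUSE_SPEED:
            if slow_since is None:
                slow_since = s.t
            # Require a sustained stop so a brief stumble doesn't pause the run.
            if not paused and s.t - slow_since >= PAUSE_AFTER_S:
                paused = True
                yield ("pause", s.t)
        else:
            slow_since = None
            if paused:
                paused = False
                yield ("resume", s.t)
```

The sustained-stop window is the key design choice: pausing only after a few seconds below the threshold keeps GPS jitter and momentary slowdowns from toggling the timer.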

3. Situated Media

“Situated media,” as defined by Corey Pressman, “is an experience in which content is delivered to users based on their specific location or other triggers such as time, weather, and user heart rate.” In other words, the delivery is based on the user’s context. This is important for location-based mixed reality because it enables content creators to fix virtual media to a physical location and make it discoverable in relevant situations. (Situated media is a descendant of geocaching, in which players use GPS to find physical caches in the real world.)

Cornbread mobile app, from Neologic

Neologic, run by Pressman and Jaime Gennaro, developed Cornbread, in which users can leave “crumbs” anywhere in the real world. Other users can unlock the media left behind, such as text, photos, and videos, once they are close enough.

A significant correlated technology is location-based triggering. For years, many companies have been trying to encourage users to obtain additional information or experiences with the use of marker-based triggers. For reasons outlined in my summary of the QR code experiment I ran in 2011, this is simply too much work for too little payoff. A location-based trigger alerts the user to situated media.
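A minimal sketch of how a location-based trigger for situated media might work: each “crumb” is pinned to coordinates with an unlock radius, and the app surfaces whichever crumbs contain the user’s current position. The coordinates, radii, and names here are illustrative, not Cornbread’s actual data or API:

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical crumbs: media pinned to a place, unlockable within a radius.
crumbs = [
    {"id": "mural-note", "lat": 45.5231, "lon": -122.6765, "radius_m": 50},
    {"id": "park-photo", "lat": 45.5100, "lon": -122.6500, "radius_m": 50},
]

def nearby_crumbs(user_lat, user_lon):
    """Return the crumbs whose unlock radius contains the user's position."""
    return [
        c for c in crumbs
        if haversine_m(user_lat, user_lon, c["lat"], c["lon"]) <= c["radius_m"]
    ]
```

In a real app this check would run against a spatial index rather than a flat list, and the operating system’s geofencing APIs would fire the notification instead of polling, but the core test is the same distance comparison.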

Bridging the Virtual & Physical Worlds

Like many exponential technologies, the common thread between screen-based mixed reality, spatial-contextual awareness, and situated media is the merging of the virtual and physical worlds. Individually, they have been available for years, but not until Pokemon Go has one application successfully demonstrated how each one of the three can significantly improve the experience of the other two.

Where To Next?

It is likely that applications which enable social sharing, such as Cornbread, will be the next to gain popularity. However, uses for education are immediately obvious. Take, for example, Field Trip, which was also developed by Niantic, creators of Pokemon Go. It was an application ahead of its time, and it appropriately released a version for a piece of hardware that was also ahead of its time: Google Glass. Field Trip for Glass used the device’s excellent notification system (it was not nearly as intrusive as you might imagine) to display information about places and things nearby. (Field Trip is still available for iOS and Android.)

Field Trip on Google Glass

While we wait for head-mounted AR headsets to become socially acceptable, notifications on smartwatches perform very well and would free the user from obsessing over a non-wearable smartphone. However, wearables have not yet reached the ubiquity of smartphones. Once they do — and they will — worlds will collide.

The real power of the triad is in how it connects not only the virtual and physical experiences, but also users with other connected users and devices. As more of us begin to create and share our own content, as more of our surroundings become part of the “Internet of Things,” and as our own bodies become more seamlessly connected to digital counterparts through biosyncing, our hybrid reality will become an increasingly rewarding and engaging environment to explore.

Like this? Please tap the little heart or share this with your friends & colleagues.

Don’t like it? Please leave a comment — I always welcome constructive feedback.


Denis Hurley

Equal parts virtual and physical. Perpetually in beta.