Spidey Sense will be the killer Augmented Reality app
We’ve all seen someone walking down the street, phone in hand, eyes glued to the screen, headphones in both ears, with no idea of what’s going on around them. Inevitably, this person bumps into something or someone on the sidewalk, or even worse, walks right into traffic. This person (we can keep pretending this hasn’t been each and every one of us at some embarrassing point) is completely unaware of their surroundings. Another term for this is lacking “situational awareness.”
I don’t think we can stop people from losing themselves in their mobile devices, so what if those devices could give you back your situational awareness when you need it and help keep you safe? What if they could sense the world around you, understand how you’re interacting with it, and give alerts? Even better than an alert, what if they could give recommendations on what to do next, and help you perform certain actions along the way? Imagine seeing that same person from before walk right into the street, then suddenly jump back before a car they weren’t even looking at comes by. That’s basically a Spidey Sense!
In that example, headphones could act not only as speakers, but as microphones that simultaneously listen to the world around the person. If they identified the sound of traffic getting louder and louder and determined the person was walking toward it, perhaps the music would fade out to let the real-world sounds in. Maybe a voice could give a warning (“You are approaching a busy street”), or even say what to do next (“Stop and take two steps back!”).
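To make that concrete, here is a minimal sketch in Python of what the trigger logic might look like. Everything in it is hypothetical: the fade_music and speak hooks stand in for whatever the headphone and phone software actually exposes, the thresholds are invented, and a real system would also need to classify what the sound is (traffic versus a loud café), not just how loud it is.

```python
import numpy as np
from collections import deque

# Hypothetical hooks -- stand-ins for whatever the headphone/phone stack exposes.
def fade_music(level: float) -> None:
    print(f"[music] fading to {level:.0%} volume")

def speak(message: str) -> None:
    print(f"[voice] {message}")

def rms_loudness(frame: np.ndarray) -> float:
    """Root-mean-square level of one audio frame (a crude loudness proxy)."""
    return float(np.sqrt(np.mean(frame ** 2)))

class TrafficWatcher:
    """Tracks ambient loudness over a sliding window and warns when the level
    rises steadily, i.e. the listener seems to be walking toward the noise.
    A real system would also debounce alerts and classify the sound source."""

    def __init__(self, window: int = 10, rise_ratio: float = 1.5, alert_level: float = 0.3):
        self.history = deque(maxlen=window)
        self.rise_ratio = rise_ratio      # how much louder "recent" must be vs. "older"
        self.alert_level = alert_level    # absolute floor before we ever warn

    def update(self, frame: np.ndarray) -> None:
        self.history.append(rms_loudness(frame))
        if len(self.history) < self.history.maxlen:
            return
        half = len(self.history) // 2
        older = np.mean(list(self.history)[:half])
        recent = np.mean(list(self.history)[half:])
        if recent > self.alert_level and recent > older * self.rise_ratio:
            fade_music(0.2)
            speak("You are approaching a busy street. Stop and take two steps back.")

# Simulated frames: quiet at first, then traffic noise growing steadily louder.
if __name__ == "__main__":
    watcher = TrafficWatcher()
    rng = np.random.default_rng(0)
    for step in range(20):
        amplitude = 0.05 + 0.04 * step   # noise gets louder as we "walk toward" it
        watcher.update(amplitude * rng.standard_normal(1024))
```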
The audio interface isn’t always the answer. Maybe some users would prefer a less intrusive warning, like a haptic buzz from a watch. Or maybe they would want the information visually displayed around them where they can see and read it. That one sounds like augmented reality, doesn’t it? The truth is that everything I am describing is AR. Conveying information to users natively across their senses is truly augmenting reality. It’s not just the display of notifications, recommendations, and content; the entire sensing system that understood the world and the user well enough to make those recommendations in the first place is also part of augmented reality.

Put another way, AR is not just the display of information or interactive content. It is about sensing the world, understanding it, and developing context to determine what information is relevant and in what ways, what recommendations and actions should be performed, and how to best convey it all to the user. In academic circles, this process of sensing and understanding the world (and the user) is known as context-aware computing, and if you think about AR this way, you realize a headset display is only one part of the system. Clay Bavor, Google’s VP of Virtual and Augmented Reality, has expressed a similar sentiment on Twitter.
Developing context allows technology to go beyond collecting data and information: it can generate insights and recommendations, and even perform certain actions.
Data → Information → Insights → Recommendations → Actions
Right now, humans are doing all the work and telling the technology what actions to perform. With context-aware augmented reality, the technology could perform all the steps necessary to take actions on its own: call 911 if it determined you had been struck by a vehicle (through a mix of sound and IMU measurements), broadcast the vitals being measured (by a smartwatch with a cellular connection) to an incoming ambulance, and talk to you to calm you down or collect important information. No headset required.
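As a rough illustration of that Data → Information → Insights → Recommendations → Actions chain, here is a minimal sketch of how the stages might map onto the struck-by-a-vehicle example. The sensor readings, thresholds, and the perform hook are all invented for illustration; a real system would fuse many more signals, and far more carefully.

```python
from dataclasses import dataclass

# --- Data: raw readings the devices already produce (values here are made up) ---
@dataclass
class SensorData:
    peak_sound_db: float     # from the headphone microphones
    peak_accel_g: float      # from the phone/watch IMU
    heart_rate_bpm: int      # from the smartwatch

# --- Information: raw numbers interpreted against simple thresholds ---
def to_information(data: SensorData) -> dict:
    return {
        "loud_impact_heard": data.peak_sound_db > 100,
        "violent_deceleration": data.peak_accel_g > 8,
        "heart_rate_elevated": data.heart_rate_bpm > 120,
    }

# --- Insight: combine pieces of information into a judgment about the situation ---
def to_insight(info: dict) -> str | None:
    if info["loud_impact_heard"] and info["violent_deceleration"]:
        return "user likely struck by a vehicle"
    return None

# --- Recommendations: what should happen next, given the insight ---
def to_recommendations(insight: str | None) -> list[str]:
    if insight == "user likely struck by a vehicle":
        return ["call emergency services",
                "broadcast vitals to responders",
                "talk to the user to calm them down"]
    return []

# --- Actions: hypothetical hook into the phone, watch, and voice assistant ---
def perform(recommendation: str) -> None:
    print(f"[action] {recommendation}")

if __name__ == "__main__":
    data = SensorData(peak_sound_db=112.0, peak_accel_g=11.4, heart_rate_bpm=140)
    for rec in to_recommendations(to_insight(to_information(data))):
        perform(rec)
```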
Sure, there are lots of issues with this: the invasion of privacy when headphones are always listening, the annoyance of an overly eager Siri interrupting your music every time you safely walk by a car at a crosswalk, the trust required for a human to act on alerts from a computer without verifying for themselves what to do or why, the battery drain of an always-on device, synchronization across devices, the on-demand cloud computing requirements, etc. But that is not the point. The point is that when you think of augmented reality as part of context-aware computing, it unlocks a whole new world of potential for the medium, and puts the emphasis back on user experience and product design instead of technology.
Since context-aware computing relies on synthesizing information from a suite of sensors and devices, there will be an added benefit to devices that can talk to each other and share information freely. Will an open standard emerge, or will the major tech companies create incentives to buy into their entire ecosystem for fully synthesized situational awareness and an optimal AR experience? In follow-up posts, I will go into this in more detail, giving examples and describing what I see as the emerging strategies among the major players.
—
Neil Gupta
Venture Partner, Indicator Ventures