Here at Niantic we are incredibly excited about the transformative power of augmented reality. Why? To populate the world with cute and cuddly 3D characters? Well, yeah, we think that’s a cool idea. But something even bigger than that is at stake — whether we build technology that serves us or increasingly become servants to it. Augmented reality (AR) and virtual reality (VR) are at two ends of the spectrum of how we are going to relate to technology in the future. Not to overstate the case, but there is a part of me that recoils at the ‘Matrix’-like aspect of VR as opposed to AR — a vision of humans as peripherals plugged into some vast digital matrix, emerging only to ingest a quick glass of Soylent before we dive back in.
AR offers the potential to enhance our basic functions as human beings as we lead our lives in the physical world. Yes, AR can transform the mundane into something more colorful and fun and can provide a useful nudge to go off and see new places and do new things. It can also enhance the everyday — walking through a complicated subway terminal (imagine a dotted line map leading the way), shopping (imagine a glance at an item showing you an image of you wearing the item with information about where it was made and its ecological impact), travel (picture heads-up translation and guides to history and art as you stroll through an historic site), and so much more. The real win will be lifting our heads from our phones without needing to completely cut the cord to the rich troves of online information that can enhance our lives in a million different ways.
So… did we just see the future today? Yes and no. AR on phones is a very important step on the path to full AR. But it’s a step that should be understood as one with limitations in its current form factor and level of development. Given the number of acronyms out there and the general confusion over this new technology, it’s probably worthwhile to define what we (at Niantic) mean by ‘augmented reality.’ Many will interpret AR to mean merely the visual effect that you experience with a device where a digital object or annotation is overlaid on the camera view on the screen.
But that’s really missing the point, in our opinion. What AR really means is connecting digital information, objects and experiences with the physical world in situ as you experience them. It’s the part about connecting information to the world that’s important. The way that information gets to you is secondary. What I mean is… AR can be something like what we see in the movie ‘Her.’ It could be a whisper in your ear telling you the path to walk or historical facts about the building in front of you. We actually experimented with this type of AR with Field Trip, our first app at Niantic. AR can also be information presented on your phone that is connected to something near you, even if it’s not using a fancy 3D AR camera view. Niantic’s first AR game, ‘Ingress,’ doesn’t have an AR camera view in it, and yet millions of players experience a shared alternate reality where mysterious ‘XM’ energy flows from statues, fountains and historical sites around the world and a global competition between the ‘Enlightened’ and the ‘Resistance’ has emerged (we’ll be launching a new version of Ingress later this year).
The point is that the AR camera view is a cool step forward, but it’s only part of what is going to make AR so important and powerful. The technology stack behind digital overlays on today’s phones is exactly the same one we will need to power the AR glasses of the future. It’s an important stepping stone, and that’s why Niantic is committed to fully exploiting that technology on today’s devices. When used correctly, it can be a powerful way to enhance your experience with the physical world. But apps that merely place a digital object on your kitchen table don’t really qualify as ‘AR’ in our view. Even when used out in the world in the ‘right way,’ AR suffers from a challenging form factor when accessed via a phone. Holding a phone in front of you to align an AR view is, honestly, a little awkward. Based on experiences with apps that are mostly focused on this visual aspect of AR, some will conclude that AR is a gimmick that lacks real utility. That’s a bummer, because it really is the first step to something that is going to transform the world as we know it.
Glasses are coming. They are hard, and it will take a while, but we will get them, and once we do, we won’t go back. Google’s Glass provided a glimpse, however flawed, of what that future might look like. Were there shortcomings? Many. It’s worth noting, though, that beyond issues with limited functionality and performance, some of the most pointed criticisms raised about Glass weren’t so much about the device itself but rather about the societal implications of pervasive cameras and of people constantly immersed in a screen. Those are challenges we already have with today’s phones, and ones that we’ll need to address for future AR devices. But in our view, the ultimate utility of being connected to all of the information about the world — while you are interacting with the world — will prevail.
When that happens, we will live in a world where everything we see can be interactive. Imagine buildings, offices, homes, cities and transportation with live, dynamic interfaces customized to you and what you want to do. The billions of dollars a year that we spend on physical signs, directories, schedules, and all of the other ‘UI’ that we need to navigate the physical world won’t be needed; they will be replaced with digital overlays with far greater functionality. And yes, colorful animated creatures can inhabit our backyards and parks, waiting to be discovered. Games beyond anything we can imagine today will play out. Not by humans wired into Matrix-style pods, but by human beings walking, running, exploring, talking and connecting in the real world.
So yeah, it was a big day and we are super excited about what it means for the future. There may be some potholes along the way, but stick with it. This is a ride you won’t want to miss.