Image courtesy of Flickr user phn

Citizens! I have returned… from the future

For the past 2.5 years, I have been in the future. I have interacted with entirely virtual objects, of my own creation, that felt almost as real as anything else in the room. I laughed uncontrollably as my mind and senses tried to resolve what was real. I experienced a sci-fi future of interacting with technology that didn’t require sitting at a keyboard or staring at a pane of glass. I have been to the future, where technology isn’t “computers” or “phones” but rather something that simply exists in our surroundings.

I have worked in a world where this futuristic technology is more practical and accessible than conceptual. That access has allowed me to think deeply about how it could work as an integral part of our daily lives. I am now returning to the present to help people understand this future and help developers understand the technology that enables it.

How did I get here? I’ve been developing commercial software since 1999, and it wasn’t long before I realized video game development was the perfect combination of highly technical and highly creative work for me. From there, I worked on numerous interactive 3D projects. Usually these projects were on the cutting edge of computing and sensing technology. In 2014, I jumped into the time travel portal of Magic Leap. It’s a tiny little startup you may have heard of before.

Virtual reality (VR) and augmented reality (AR) technologies are getting a lot of attention right now. Early adopters are starting to get a glimpse into the new world of immersive computing without screens. People are transporting to different worlds and interacting with virtual 3D objects while remaining in their real world environments. In the future, this technology will be as pervasive and common as your smartphone and as easy to use as your television.

A primer and history of AR

For clarity, allow me to define the fundamentals of Augmented Reality (AR). At a minimum, it involves streaming a digital camera feed through computer-vision software that looks for elements of a known pattern. If the pattern is found, the software then tracks its position and orientation relative to the camera. Using that information as a reference point, digital content (usually 3D graphics) is rendered in real time for the user.
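To make the tracking step concrete, here is a minimal, library-agnostic sketch in Python. Real AR systems solve a full 3D pose from marker corners (a perspective-n-point problem); this simplified 2D version shows the same idea: given the known layout of a fiducial marker and where a detector found its corners in the camera frame, recover the marker’s rotation, scale, and position relative to the camera. The function name and structure are my own illustration, not any particular SDK’s API.

```python
import math

def estimate_pose_2d(ref, obs):
    """Fit obs ~ scale * Rotate(theta) @ ref + t (a 2D similarity transform).

    ref: known marker corner positions in marker space.
    obs: where the detector found those corners in the camera frame.
    Returns (theta, scale, (tx, ty)) -- the marker's pose vs. the camera.
    """
    n = len(ref)
    # Centroids of both point sets.
    rcx = sum(x for x, _ in ref) / n; rcy = sum(y for _, y in ref) / n
    ocx = sum(x for x, _ in obs) / n; ocy = sum(y for _, y in obs) / n
    # Accumulate cross terms of the centered point sets (least-squares fit).
    a = b = var = 0.0
    for (rx, ry), (ox, oy) in zip(ref, obs):
        rx -= rcx; ry -= rcy; ox -= ocx; oy -= ocy
        a += rx * ox + ry * oy      # cosine component
        b += rx * oy - ry * ox      # sine component
        var += rx * rx + ry * ry
    theta = math.atan2(b, a)
    scale = math.hypot(a, b) / var
    # Translation maps the rotated/scaled reference centroid onto the observed one.
    tx = ocx - scale * (math.cos(theta) * rcx - math.sin(theta) * rcy)
    ty = ocy - scale * (math.sin(theta) * rcx + math.cos(theta) * rcy)
    return theta, scale, (tx, ty)

# A unit-square marker, "detected" rotated 90 degrees, doubled in size,
# and shifted -- the recovered transform is what the renderer would use
# to draw 3D content locked to the marker.
ref = [(0, 0), (1, 0), (1, 1), (0, 1)]
obs = [(5, 5), (5, 7), (3, 7), (3, 5)]
theta, scale, t = estimate_pose_2d(ref, obs)
```

Every camera frame, this estimate is refreshed, which is why the rendered content appears anchored to the physical marker as you move around it.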

Don’t quite get it yet? More explanation and visuals ahead.

I think it was around 2006 when I first saw desktop AR developed with Flash, Papervision3D, and FLARToolkit. Looking at yourself in a monitor while holding a fiducial marker made the interaction really awkward, and this was a problem that desktop AR never fully managed to solve.

While some people did do some cool interactive things with desktop AR, it was all pretty much like this. This is a handsome gray-haired AR developer, but it’s not me. I couldn’t find any more info about the developer to give attribution.

By 2008 or so, smartphones had the horsepower to run light computer-vision algorithms while rendering 3D graphics over a video stream. Most importantly, it was a portable form factor with a back-facing camera. This gave it the feel of using the device as a window into the augmented world. I became much more enthusiastic about the possibilities of mobile augmented reality, and I worked on a number of interesting projects with this technology stack.

This was state of the art 4 years ago when I was experimenting with using a base fiducial and coins to change the content.
My friend Bob Berkebile has always pushed AR tech to show its commercial potential. Notice that no fiducial markers are used in this demo, thanks to additional sensors used in combination with the device’s camera.

The past couple of years have brought significant advances in sensing and display technologies, which have made us less dependent on fiducial markers while rendering the digital content in such a way that it feels like it’s actually in your physical world. This difference is why many people are pushing the term Mixed Reality (MR) to differentiate it from traditional mobile AR. Be careful with this term, though: others use MR as a higher-level classification for all kinds of interesting stuff.

Many people would consider this projection mapping and hand tracking input as “Mixed Reality” (Hi again, Bob!)

What about VR?

I love modern-day virtual reality and all of the enthusiasm around it. If you follow the industry side of things, it’s inevitable you’ll hear the story of “I was super excited about VR in the early ’90s, but it was too early.” That was actually a little before my time as a developer (my silver hair can be deceiving!), and until a couple of years ago, I didn’t have much exposure to VR.

Just like AR, VR is also getting a boost from the improvements made in sensing and display technologies — specifically, the ability to accurately track the user’s head and hands at high speeds has been the game changer. There’s also more VR hardware available right now than AR hardware, which means VR is a little ahead of the curve in becoming an accessible technology.

I’m going to talk more about the experience differences between VR and AR in another post. I believe they are complementary technologies that may converge at some point, and many of the workflows and design considerations are the same.

I’m here to help

I know firsthand how powerful these new interfaces can be, and I’m incredibly excited that in the near future they will become accessible to most people. Because it’s inherently difficult to share or demonstrate VR and AR experiences without their associated hardware, both are particularly vulnerable to word-of-mouth endorsements right now. Bad experiences for users can lead to huge setbacks in adoption. The same is true of a bad developer experience. For this reason, content development tools were a big part of my focus at Magic Leap.

The current developer workflow for creating experiences on VR and AR platforms is very similar. Jumping into development for either platform is relatively straightforward for anyone who has worked with modern 3D game engines like Unity or Unreal. For some platforms, it’s the only workflow available. That’s good news for game developers, but for anyone else it means the learning curve can be steep.

The other big challenge for developers is designing for these new interaction models. This is the closest we have been to having technology interact with the physical world and human biology. There are lots of amazing, downright magical things you can do. I have seen people swear they felt a virtual object with their bare hands. The old tried-and-true screen-based rules rarely apply here.

With these challenges in mind, I want to help people create and develop the best possible experiences for these new platforms.

Time travel requires an adjustment period. In the short term, I’m going to continue engaging with the developer community and possibly helping some people with their projects. Next week, I will be attending the Augmented World Expo. Eventually, my primary focus will be developing new tools and experiences to help push this new paradigm forward. If you have an interest or an opportunity (not really looking for employment at this time), please reach out.

My name is Paul Reynolds. This is the first of many thoughts I plan to share. If you’d like to be notified of future posts, please follow me here on Medium and on Twitter. If you’d like to recommend and share this story, I would greatly appreciate it!