AR Experiments

Active Theory
Active Theory Case Studies
Jul 7, 2017

Since Google’s 2017 keynote, we’ve been thinking about Augmented Reality and how it fits into the future of the web. As a new paradigm, it is without a doubt going to change how people interact with technology, and, like VR, AR is fundamentally about interacting with 3D graphics, which is our primary passion. After trying the impressive Google Tango and seeing the announcement of ARKit, we decided to dive right in.

While we absolutely look forward to WebAR reaching mainstream browsers and believe that the web is the best platform for people to discover and interact with AR content, there is always a period of time before a new standard is designed and implemented.

To start developing Augmented Reality content, specifically for ARKit, we leaned on our core philosophy of taking the web outside the browser. We extended our native app platform by using JavaScriptCore and OpenGL to create a JavaScript runtime inside a native iOS app, where the WebGL specification is bound natively to OpenGL.

The main objective is to make ARKit another deployment target for our internal JavaScript framework, Hydra, which has evolved over the past few years into a toolset and workflow that enables us to create engaging and performant 3D experiences on the web.

With Hydra and Three.js, our 3D engine of choice, up and running in this environment, we held an internal hack day to experiment with AR and create content to explore the new medium.
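
For illustration, here is a minimal sketch of what bootstrapping Three.js against a natively bound WebGL context might look like. The `__nativeCanvas` and `__nativeGL` globals are hypothetical stand-ins for whatever the native layer actually injects; Hydra’s real API isn’t described here.

```javascript
import * as THREE from 'three';

// Hypothetical globals injected by the native runtime (names are illustrative):
// __nativeCanvas - a canvas-like object backed by the app's OpenGL view
// __nativeGL     - a context implementing the WebGL spec, bound natively to OpenGL
const renderer = new THREE.WebGLRenderer({
  canvas: __nativeCanvas,
  context: __nativeGL,
});
// Pass false so Three.js doesn't try to touch DOM styles that don't exist here.
renderer.setSize(__nativeCanvas.width, __nativeCanvas.height, false);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60,
  __nativeCanvas.width / __nativeCanvas.height,
  0.01,
  100
);

// Assumes the runtime also exposes requestAnimationFrame, as a browser would.
function frame() {
  renderer.render(scene, camera);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```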

Inspired by our newest piece of office bling, the very first test was to get a 3D model placed in space.
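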

Next, we began experimenting with using our effects pipeline in AR to blend graphics into the environment.
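
One generic way to blend rendered graphics into the environment (not necessarily how our pipeline does it) is to draw the camera feed as the scene background and composite additive geometry on top, so bright elements appear to glow into the room. `ARBridge.getCameraTexture()` below is a hypothetical accessor for the camera image as a `THREE.Texture`.

```javascript
import * as THREE from 'three';

// Camera feed as background, plus additively blended particles on top.
function setupBlendedScene(scene) {
  scene.background = ARBridge.getCameraTexture(); // hypothetical bridge call

  const geometry = new THREE.BufferGeometry();
  const positions = new Float32Array(1000 * 3).map(() => (Math.random() - 0.5) * 2);
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

  const material = new THREE.PointsMaterial({
    color: 0x88ccff,
    size: 0.01,
    transparent: true,
    opacity: 0.8,
    blending: THREE.AdditiveBlending, // bright points mix with the camera image
    depthWrite: false,
  });

  scene.add(new THREE.Points(geometry, material));
}
```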

Continuing with interaction, the next experiment lets the user spring virtual content into existence.
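
A simple way to get that “springing in” feel, sketched below, is to animate the object’s scale with a damped spring so it overshoots slightly before settling. The stiffness and damping values are illustrative, not taken from the demo.

```javascript
// Animate an object's scale from 0 to 1 with a damped spring.
// Assumes the runtime provides performance.now and requestAnimationFrame.
function springIn(object, { stiffness = 120, damping = 12 } = {}) {
  let scale = 0;     // current scale
  let velocity = 0;  // spring velocity
  const target = 1;

  object.scale.setScalar(0);
  object.visible = true;

  let last = performance.now();
  function step(now) {
    const dt = Math.min((now - last) / 1000, 0.033); // clamp dt for stability
    last = now;

    // Semi-implicit Euler integration of a damped spring toward the target.
    const accel = stiffness * (target - scale) - damping * velocity;
    velocity += accel * dt;
    scale += velocity * dt;
    object.scale.setScalar(scale);

    if (Math.abs(target - scale) > 0.001 || Math.abs(velocity) > 0.001) {
      requestAnimationFrame(step);
    } else {
      object.scale.setScalar(target);
    }
  }
  requestAnimationFrame(step);
}
```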

Moving into the abstract, it was fun to see things that we have previously confined to a browser exist in the real world.

Expanding on that concept, we started to look into how the camera could be used as a mechanism for revealing new layers.
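
One possible reading of that idea, sketched below, is to fade a hidden layer in as the device camera approaches an anchored point in space. The distance threshold is illustrative, and the Three.js camera is assumed to be driven by the ARKit pose each frame.

```javascript
import * as THREE from 'three';

// Fade a layer of content in as the camera gets close to an anchor point.
function updateReveal(layer, camera, anchorPosition, revealDistance = 0.5) {
  const distance = camera.position.distanceTo(anchorPosition);

  // 0 when far away, 1 once the camera is within revealDistance of the anchor.
  const reveal = THREE.MathUtils.clamp(1 - distance / revealDistance, 0, 1);

  layer.traverse((child) => {
    if (child.material) {
      child.material.transparent = true;
      child.material.opacity = reveal;
    }
  });
  layer.visible = reveal > 0;
}
```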

That carried over into using only the camera feed to create the illusion of shape and depth.
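
A common trick along these lines, shown as a sketch below, is to texture a 3D mesh with the live camera image sampled in screen space, offsetting the sample by the surface normal so the shape reads as a warped, glassy piece of the real environment. Again, `ARBridge.getCameraTexture()` is hypothetical.

```javascript
import * as THREE from 'three';

// A mesh whose surface is the camera feed, distorted by its normals.
const cameraFeedMaterial = new THREE.ShaderMaterial({
  uniforms: {
    uCameraFeed: { value: ARBridge.getCameraTexture() }, // hypothetical bridge call
    // Assumes window dimensions are available; otherwise use the drawing buffer size.
    uResolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) },
  },
  vertexShader: /* glsl */ `
    varying vec3 vNormal;
    void main() {
      vNormal = normalize(normalMatrix * normal);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D uCameraFeed;
    uniform vec2 uResolution;
    varying vec3 vNormal;
    void main() {
      // Sample the camera image at this fragment's screen position, nudged by
      // the view-space normal so the surface distorts the feed like glass.
      vec2 uv = gl_FragCoord.xy / uResolution + vNormal.xy * 0.05;
      gl_FragColor = texture2D(uCameraFeed, uv);
    }
  `,
});

const shape = new THREE.Mesh(
  new THREE.TorusKnotGeometry(0.1, 0.03, 128, 16),
  cameraFeedMaterial
);
```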

Still thinking about interaction with the camera, we created an illusion that takes advantage of the unique aspects of AR.

Bringing it back to tangible objects and the use of space, we toyed with manipulating the camera feed: keeping the user contextually in their environment while modifying it to create a specific mood.
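
A simple version of that kind of treatment, sketched below, is to draw the camera feed on a full-screen quad and desaturate and tint it before the virtual content is composited on top. The tint color and mix amounts are illustrative, and `ARBridge.getCameraTexture()` remains a hypothetical accessor.

```javascript
import * as THREE from 'three';

// Full-screen quad that redraws the camera feed desaturated and tinted.
const moodBackground = new THREE.Mesh(
  new THREE.PlaneGeometry(2, 2),
  new THREE.ShaderMaterial({
    depthTest: false,
    depthWrite: false,
    uniforms: {
      uCameraFeed: { value: ARBridge.getCameraTexture() },   // hypothetical bridge call
      uTint: { value: new THREE.Color(0.6, 0.7, 1.0) },      // cool, moody tint (illustrative)
    },
    vertexShader: /* glsl */ `
      varying vec2 vUv;
      void main() {
        vUv = uv;
        gl_Position = vec4(position.xy, 0.0, 1.0); // quad straight in clip space
      }
    `,
    fragmentShader: /* glsl */ `
      uniform sampler2D uCameraFeed;
      uniform vec3 uTint;
      varying vec2 vUv;
      void main() {
        vec3 color = texture2D(uCameraFeed, vUv).rgb;
        float grey = dot(color, vec3(0.299, 0.587, 0.114)); // luminance
        vec3 moody = mix(vec3(grey), color, 0.3) * uTint;    // mostly desaturated, then tinted
        gl_FragColor = vec4(moody, 1.0);
      }
    `,
  })
);
moodBackground.frustumCulled = false; // always drawn, regardless of camera pose
moodBackground.renderOrder = -1;      // drawn before the virtual content
```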

After this exercise, we’re all in on AR. We’ve developed the same architecture on Android so these experiments can be used with Tango/Daydream as is. We believe AR will be a huge part of the web and are ready to embrace the future.
