Despite the release of VR hardware like the Oculus Rift in March or the HTC Vive in April, the single most important development in the year since our previous experiments has been the officiation, support, and adoption of the draft WebVR specification pioneered by Mozilla (now at version 1.1).
The primary intent of WebVR is twofold: to let end users consume VR content without extra platforms, and to let developers create VR content and experiences with existing web technologies like HTML and JavaScript, without significant re-tooling or retraining.
The spec so far supports most of the VR hardware connectivity on the market, including headsets, motion-tracked controllers, full-body trackers, and other n-degrees-of-freedom input devices. Microsoft recently announced full support for WebVR in its Edge browser, becoming the latest in a line of browsers that already offer full or partial support.
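Because the spec is still a draft, a page should feature-detect the API rather than assume it exists. A minimal sketch of the idea (the `hasWebVR` helper name is my own, not part of any spec):

```javascript
// Sketch: feature-detect the draft WebVR 1.1 API before relying on it.
// `hasWebVR` is a hypothetical helper name; the real entry point it
// checks for is navigator.getVRDisplays() from the WebVR 1.1 draft.
function hasWebVR(nav) {
  return !!(nav && typeof nav.getVRDisplays === 'function');
}

// In the browser you would then do something like:
// if (hasWebVR(navigator)) {
//   navigator.getVRDisplays().then(function (displays) { /* ... */ });
// }
```

Guarding like this also keeps the page usable on browsers that only have partial (or no) support yet.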
As with any draft API, the consequence of coding against something that might change is that your creations may stop working in the future. One way to insulate yourself from that churn is to work a level above the raw API. AFrame is a powerful entity-component-system (ECS) framework that takes care of much of the boilerplate involved in working with the lower layers of WebVR. Essentially, it provides a structured facade that insulates content developers from the mechanics of graphics programming, building on WebVR polyfills and THREE.js.
While Mozilla was the first to put out a cohesive and useful solution in this space, it is not alone. Most recently, the announcement of similar VR tooling for React further validates the benefit of a higher-level, ECS-style framework focused on WebVR content creation. AFrame enjoys strong community adoption, frequent contributions, and vast pools of user-contributed components. It is built expressly for web developers, using web paradigms (source code open to the world) and web technologies (DOM APIs).
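To make the ECS idea concrete, here is a sketch of a component in the AFrame style. The `spin` component and its tick logic are my own invention, but `schema` and `tick` are the real extension points AFrame exposes:

```javascript
// Sketch of an ECS component in the AFrame style: a hypothetical 'spin'
// component that rotates its entity around the Y axis every frame.
var spinComponent = {
  // Declarative schema: AFrame parses the attribute string into this.data.
  schema: {
    speed: { type: 'number', default: 90 } // degrees per second
  },
  // tick() runs once per frame with the scene time and frame delta in ms.
  tick: function (time, delta) {
    var rotation = this.el.getAttribute('rotation');
    rotation.y += this.data.speed * (delta / 1000);
    this.el.setAttribute('rotation', rotation);
  }
};

// In the browser, registration and use look like:
//   AFRAME.registerComponent('spin', spinComponent);
//   <a-box spin="speed: 45"></a-box>
```

The appeal is that all the THREE.js render-loop plumbing stays hidden; you only describe per-frame behavior and declarative data.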
Playing around with it to gauge its potential has been exceptionally easy so far. It’s been a dream, especially with the scene inspector added in the latest version (0.3.0).
As with all things fresh that seem neat and tidy, there are a few caveats I’ve encountered. Here are some of the lower level bits which leaked out of the AFrame abstraction model:
- THREE.js won’t load models with faces that aren’t triangulated
If you’re importing a 3D model that contains quads or n-gons, triangulate its faces (most modeling tools offer a triangulate modifier or export option) before loading it.
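Triangulating is usually a one-click operation in a modeling tool, but the underlying idea is simple fan triangulation: split each convex n-gon into triangles that all share the face’s first vertex. A sketch (the `triangulateFace` helper is hypothetical):

```javascript
// Sketch: fan-triangulate one convex polygonal face.
// `face` is an array of vertex indices, e.g. a quad [0, 1, 2, 3].
// This only works for convex faces; concave n-gons need a real
// algorithm (e.g. ear clipping), which is why it's best to let the
// modeling tool handle it.
function triangulateFace(face) {
  var triangles = [];
  for (var i = 1; i < face.length - 1; i++) {
    // Fan out from the first vertex: (v0, vi, vi+1).
    triangles.push([face[0], face[i], face[i + 1]]);
  }
  return triangles;
}

triangulateFace([0, 1, 2, 3]); // a quad becomes two triangles
```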
- Including the AFrame <script> tag after the <body> tag produces undefined behavior
Sometimes the functionality inside your AFrame scene will keep working, sometimes it won’t. Always include the AFrame <script> tag inside <head> to avoid the undefined behavior.
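For reference, a minimal page skeleton that avoids the problem looks something like this (the script URL follows AFrame’s release pattern for 0.3.0; check the docs for the current one, and the sphere is just filler content):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- AFrame must load here, before the body is parsed -->
    <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
    </a-scene>
  </body>
</html>
```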
- Cursor component does not filter scene elements like the raycaster component on which it is built
If you try to filter which objects in your scene are interactive through the raycaster component on a cursor entity, you will find that the option is ignored. Unfortunately, the workaround is to use a raycaster component directly, build your own cursor component, or use one written by someone else.
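Concretely, the filter in question is the raycaster component’s `objects` selector. On a plain raycaster entity it behaves as documented; behind a cursor (at the time of writing, 0.3.0) it does not:

```html
<!-- Works: the raycaster only tests elements matching the selector -->
<a-entity raycaster="objects: .clickable"></a-entity>

<!-- Ignored: the cursor's underlying raycaster still tests everything -->
<a-entity cursor="fuse: true" raycaster="objects: .clickable"></a-entity>
```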
- Some DOM API methods don’t work
One of several key paradigms that AFrame tries to establish is that the scene graph maps 1-to-1 onto a mutable hierarchy of DOM elements rooted inside the <a-scene/> tag. That mapping loosens when certain DOM APIs don’t function as expected. So far I’ve run into two such examples.
/* 1. appendChild() doesn't work as expected */
var parentElement = document.querySelector('a-scene'); // any element in the scene
var anotherEntityElement = document.createElement('a-entity');
anotherEntityElement.className = 'red';
parentElement.appendChild(anotherEntityElement); // nope

var entityHTML = '<a-entity ...></a-entity>';
parentElement.insertAdjacentHTML('beforeend', entityHTML); // yep

/* 2. remove() doesn't work as expected */
var sphereElement = document.querySelector('a-sphere');
sphereElement.remove(); // nope
sphereElement.parentNode.removeChild(sphereElement); // yep
Experimentation with a poor man’s combination of inputs on less-than-ideal devices (e.g. a second, older phone) reveals a host of other challenges to overcome. For instance, the accelerometer and gyroscope inputs tend to “drift”: the sensors slowly lose their zero (initial reference points).
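One crude way to fight slow drift is to treat it as a low-frequency bias: estimate the bias with a slow exponential moving average and subtract it from each reading. This is only a sketch of the idea, real orientation fusion is considerably more involved, and `makeDriftCorrector` is a name I made up:

```javascript
// Sketch: cancel slow sensor drift by subtracting a running bias estimate.
// A small alpha makes the bias estimate track the signal slowly, so quick
// head movements pass through while a slowly wandering zero point is
// gradually removed.
function makeDriftCorrector(alpha) {
  var bias = 0;
  return function correct(reading) {
    bias = alpha * reading + (1 - alpha) * bias;
    return reading - bias;
  };
}

// A constant offset (pure drift) is driven toward zero over time,
// while the very first readings pass through almost untouched.
var correct = makeDriftCorrector(0.05);
```

The obvious trade-off: genuinely holding your head still for a long time also decays toward zero, which is why real systems fuse multiple sensors instead.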
With any actively growing technology, many of the finer details of its use are subject to change rapidly. The three most helpful resources have been the official AFrame Slack channel and documentation, followed closely by StackOverflow questions that have been tagged with ‘aframe’. Of course, it never hurts to dig into the code that developers more familiar with AFrame have written. I’ve learned countless tricks and new approaches to structuring things in AFrame that would have taken ages to trial-and-error by myself.
My current goal is to continue a deep dive into WebVR, testing the limits of what I can build with the AFrame framework. For all of its growing pains, it is leaps and bounds ahead of equivalent WebVR frameworks. Once I have a thorough understanding of how its underlying parts work, core contributions to the technology will become more realistic and immediate.
Meanwhile, my focus will be the issue of progressive enhancement for WebVR experiences, following a similar intent to the WebVR boilerplate project. Once the medium fully blows open the barriers to VR content distribution and participation, the next biggest issue is supporting a variety of devices while maintaining the vision that designers aim for.
In upcoming posts, I’ll be covering some demos and tips on how to build a foundation for VR experiences that retain their core mechanics given the vast oceans of hardware and software fragmentation. Hold onto your cardboard headsets!