VR positional tracking experiments with AR.js + A-Frame (I). Single marker scene.
In this series of articles I will describe my attempts to build a reliable movement-tracking system using AR.js together with A-Frame for a “cardboard headset”. The main goal is to get a realistic sense of depth (near/far) that follows the user's movement around the physical environment, using the cheapest VR headset available. Of course, it is also compatible with the more “pro” headsets.
A-Frame is, according to aframe.io, “a web framework for building virtual reality (VR) experiences. A-Frame is based on top of HTML, making it simple to get started. But A-Frame is not just a 3D scene graph or a markup language; the core is a powerful entity-component framework that provides a declarative, extensible, and composable structure to three.js”.
On the other hand, AR.js is “a lightweight library for Augmented Reality on the Web, coming with features like Image Tracking, Location based AR and Marker tracking”.
The combination of both JavaScript libraries is simply magic: it makes it easy to create VR/AR experiences with a few lines of code, it runs on a huge range of devices (most mobile phones, for a cardboard headset), and there is nothing to install. You just open a website.
Single marker-based tracking in a glTF scene
AR.js provides an easy way to track markers and “mount” all kinds of A-Frame entities linked to their position, including glTF models. I asked myself: why not use an entire scene (created, for example, in Blender) with A-Frame for a fully immersive experience (not only head tracking), instead of a single model? I can tell you in advance that it is not that simple, but let's try.
This is my first proof of concept, based on the AR.js examples, replacing the tree model with a very simple room with objects, exported to glTF format. In this example we use the kanji marker image.
The shaky effect is caused by a not-so-good calibration of the smoothing in the code, but this is a problem that we will solve later with a trick. As you can see, I can move around the small scene and get closer to the different entities. The glTF scene is smaller in this test because of a small problem we will look at later.
Source code
<a-scene>
  <a-assets>
    <a-asset-item id="habitacion" src="blender-scene.gltf"></a-asset-item>
  </a-assets>
  <a-marker type="pattern" url="kanji.patt">
    <a-entity position="0 0 0" rotation="0 90 90" scale="0.6 0.6 0.6"
              smooth="true" smoothCount="2" smoothTolerance=".9" smoothThreshold="12"
              gltf-model="#habitacion"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
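For completeness, here is a minimal sketch of the full page around that snippet. The script URLs and versions are assumptions (any recent A-Frame release plus the AR.js A-Frame build should work), and note that the AR.js documentation puts the smoothing attributes (smooth, smoothCount, smoothTolerance, smoothThreshold) on the a-marker primitive rather than on the child entity, so this sketch does it that way:

<!DOCTYPE html>
<html>
  <head>
    <!-- Illustrative builds; use the A-Frame / AR.js versions you prefer -->
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
    <a-scene embedded arjs>
      <a-assets>
        <a-asset-item id="habitacion" src="blender-scene.gltf"></a-asset-item>
      </a-assets>
      <!-- Smoothing attributes on the marker, as in the AR.js docs -->
      <a-marker type="pattern" url="kanji.patt"
                smooth="true" smoothCount="2" smoothTolerance=".9" smoothThreshold="12">
        <a-entity position="0 0 0" rotation="0 90 90" scale="0.6 0.6 0.6"
                  gltf-model="#habitacion"></a-entity>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>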
Conclusion
Simple, huh? The first thing I learnt was very obvious: when the camera loses the kanji marker, the entire scene disappears.
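AR.js emits markerFound and markerLost events on the marker entity, so you can at least detect when this happens (for example, to show a warning, or later to apply a trick). A minimal sketch; the component name and log messages are my own:

<script>
  // Hypothetical component: attach it to the <a-marker> element
  AFRAME.registerComponent('marker-watcher', {
    init: function () {
      this.el.addEventListener('markerFound', function () {
        console.log('Marker found: the scene is anchored again');
      });
      this.el.addEventListener('markerLost', function () {
        console.log('Marker lost: the whole scene will disappear');
      });
    }
  });
</script>

<a-marker type="pattern" url="kanji.patt" marker-watcher>
  ...
</a-marker>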
If the scene is too large, the marker will be fully eclipsed by the AR scene, and we will easily lose it while trying to “run” around the virtual scene. That is the reason why I used a "0.6 0.6 0.6" scale for the scene in the code.
In an outdoor environment it is very easy to set up virtual walls in a fixed place using the GPS features of AR.js (we will talk about them in later chapters), but I think that in small indoor spaces the marker solution is fast, cheap and functional. And there is a solution for the lost-marker problem that I will try.
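For reference, location-based AR.js works roughly like this: a minimal sketch using the gps-camera and gps-entity-place components from the AR.js location-based build, with made-up coordinates:

<!-- Requires the location-based build of AR.js -->
<a-scene embedded arjs>
  <!-- A box "pinned" to a real-world GPS coordinate (coordinates are invented) -->
  <a-box material="color: tomato"
         gps-entity-place="latitude: 40.4168; longitude: -3.7038"></a-box>
  <a-camera gps-camera rotation-reader></a-camera>
</a-scene>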
I do not think this problem is an absolute obstacle, because it is possible to place several markers on the surfaces of a room, at strategic distances, so that the part of the scene the camera is looking at always appears, which can even make the experience more realistic. A rough sketch of the idea follows.
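Just to illustrate the idea: several pattern markers, each one mounting a different part of the scene. The pattern files and model IDs here are hypothetical:

<a-scene embedded arjs>
  <a-assets>
    <!-- Hypothetical assets: the room split into separate glTF pieces -->
    <a-asset-item id="pared-norte" src="pared-norte.gltf"></a-asset-item>
    <a-asset-item id="pared-sur" src="pared-sur.gltf"></a-asset-item>
  </a-assets>
  <!-- One marker per wall; whichever marker the camera sees anchors its part of the scene -->
  <a-marker type="pattern" url="marker-norte.patt">
    <a-entity gltf-model="#pared-norte" scale="0.6 0.6 0.6"></a-entity>
  </a-marker>
  <a-marker type="pattern" url="marker-sur.patt">
    <a-entity gltf-model="#pared-sur" scale="0.6 0.6 0.6"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>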
That is what I will show in the next chapter. See you later!