How to make a simple webVR experience for kids with A-Frame and Blender — The making of halloVReen.

Jorge Fuentes
21 min read · Nov 6, 2018

This article assumes you have a basic understanding of 3D graphics, JavaScript, and how A-Frame and Blender work. Also keep in mind that webVR is a continually evolving technology, so this article may become obsolete in the near future.

There were only ten days left until Halloween and I thought, why not make a webVR experience with A-Frame and Blender to celebrate? I’d need at least a couple of days to spread the word and promote it as well, which left me with a week maximum. Was it possible without going full-on Rockstar-style crunch mode? Turns out it was, somehow, if you know how to pick your battles. Here’s how I did it, in the hope that it’s useful for anyone getting started in developing VR experiences for the browser. I’ll focus on the “craft” side of it rather than the technical one, though, since I think it’s more interesting and the experience really couldn’t be simpler in technical terms.

Before we begin…

Have you tried it? Open it in the VR headset of your choice and click the cardboard icon in the bottom right corner of the viewport.

First things first

In order to make an interactive webVR experience you obviously need to think of an idea first… and be mindful of its scope and your own skills. These elements depend on each other and sometimes even drive the creativity, as limitations usually do. Also, be ready to adapt and cut stuff out if need be when knee-deep in production.

I’m an art director and multimedia designer by trade, which means I’m a jack of all trades and master of none. I’m not a professional 3D artist nor an accomplished programmer, and my experience with A-Frame is limited, so I set a couple of limitations from the get-go to maximize my production time:

  • Most, if not all, of the base models and music/sounds would be Creative Commons sourced. Optimizations and touch-ups would be required, but I’d go much faster starting from “good enough”, already-created meshes and sounds. Luckily it’s 2018 and the open source community is enormous.
  • No complex interactions; keep it simple. Let’s go with gaze-based interactions only: you look at a model, said model does something, that’s it. This has a nice side effect (it’ll be compatible with every VR headset, even Google Cardboard) and a nasty one: get prepared to optimize. Given its fast adoption and good sales numbers, I chose the Oculus Go headset as my target device (thanks to old friend Diego Nieto for lending me one for testing!).
  • I can’t do this all on my own if I want a decent outcome in such short time. Enter another good old friend, artist Julio Iglesias (not the singer!), who was kind enough to lend a hand with graphic needs. He created pretty much every hand-painted texture in the project, optimized models, and animated the werewolf.

After some ideas that didn’t fit, I decided to make a kids-oriented experience with a light, tongue-in-cheek attitude: some kind of pop-up/interactive book in VR, where kids can immerse themselves in a scene where things are happening. This allows a non-realistic texturing style (or even simple flat shading), fits a low-poly aesthetic well, and is fun to make. Plus, a bit of humor and goofiness helps hide the not-so-advanced technical side of the project.

Find appropriate models and sounds

It’s just amazing how much stuff is shared on Sketchfab as Creative Commons downloads. And what about Freesound? Such an invaluable resource for this kind of project.

The first and most important thing was finding a fitting environment. Environments take time, even in low-poly styles where you can often get away with a scaled and skewed cube, because you need a believable theme gluing it all together. No proper environment = no project. Luckily, Sketchfab user “jakekieferwaddington” had made just what I needed. Nice! It’s clear now: this will be “Creepville”, where its “creepyzens” are scattered around doing their creepy thing.

Check “Downloadable” in the filters and off you go.

From that point on, it was just a matter of brainstorming for a little while: which characters should be there, what event each would perform when gazed upon, and with what kind of sound. Since it’s Halloween, the classic stuff works (pumpkin, skeleton, vampire, witch…), so this was resolved within a couple of hours of searching. I really can’t stress this enough: it’s mind-blowing how many people are contributing their fine work for the public to build upon. I found 90% of everything we’d need at this stage, which eliminates a lot of stress later on. You don’t want to be deep in production and realize you need to model and texture a new mesh, simple as it may be.


This is a small, one-week project, so… no code repository, no Node.js, no webpack: I simply got A-Frame, created a new HTML file with my editor (I use VSCode), fired Blender up to start checking my assets out… and started coding. You’ll also need a local server running, since you’ll be loading external resources and browsers enforce a same-origin security policy. I use my already-configured XAMPP install for my general webdev work. If you don’t have this and don’t want to spend time configuring it, Mongoose works fine too for localhost work, but in my tests I wasn’t able to access my dev machine via IP, which is essential for testing on other devices.

Layout, rigging and animation

Start by testing if your scene works in VR without textures.


With so little time available, it’s crucial to get your priorities right. In this case, that meant getting the bulk of the experience done ASAP: reach a close-to-final scene layout as fast as possible, see if it works, and only then enter main production work (texturing, for example). This has many advantages: you won’t be optimizing early on (i.e. deleting polygons you might need later) but at a point where you know exactly what you don’t need; you’ll see if what you have in mind has potential and will really work; and you’ll be able to focus on the bigger picture instead of obsessing over details like style. Simply put all your models in the scene without textures and start iterating, or at the very minimum just your environment, since it will be the centerpiece that glues everything else together… and the one with the most geometry, which will need the most aggressive optimization later.


Basic animation is not the easiest/most practical thing to do in webVR due to the lack of tools, and character animation is out of the question. Luckily, we can simply rig and animate in Blender and then use Don McCurdy’s great animation-mixer component.

Shake those bones!

Projects usually require you to be organized about how you name your objects, bones, etc. I didn’t have time to waste here, and once I finished animating a model it was pretty much final and never revisited, so I didn’t obsess over this. Don’t do this in bigger projects where you work with a bigger team! Same with weight painting: do try your bones and see if they move your mesh as expected, but don’t despair if it’s not perfect. No one will notice.

As for the rigging process, I kept it as simple as needed, and in some cases I avoided using bones at all if I could just animate the mesh as a whole. The experience was meant to be cartoony, and that gave me a lot of freedom in how I approached animation. Case in point: the vampire, one of my favorite “events” in the experience, only uses a simple rotation of the whole mesh. Super effective, fun, and quick. For the skeleton and zombie I used Mixamo, which lets you very easily attach a rig and pre-made motion-capture animations to a human-like mesh. It was a bit finicky at times, but using either its auto-rig capability or my Blender-created rigs it ended up working just fine.

Hello darkness, my old friend…


Exporting animation from Blender for use in webVR is easy enough if you know what you’re doing and bake complex stuff out (constraints, inverse kinematics…), but even then it can backfire quickly. When you start adding keyframes to an armature in Blender, it creates a new “Action”, which is like a group of keyframes tied to arbitrary elements (armatures, meshes, etc.), and automatically links this action to whatever was selected when you started keyframing. This means you can actually link both a mesh and its armature to an action, and weird stuff can begin happening. So, if you’re animating an armature, make sure the mesh is free of any action.

Also make sure that in the action editor (Dope Sheet mode) you:

  1. Name your action in a sensible way (you’ll be accessing it through this name in code).
  2. Click the “F” button next to the name so you create a fake user for the animation. This will make sure you don’t lose it in case you unlink it and close the file.
  3. (optional) click on “Stash” a bit further to the right. This will effectively stash the animation for a later use. You really don’t need this, but it’s good practice to stash after you’re finished with an animation and will be useful if for whatever reason you later want to use the NLA editor.

This all sounds fine and dandy, but animation was actually one of the biggest pain points when exporting from Blender and a good number of hours were frustratingly wasted dealing with it. The best example of this is that you need to bake everything that makes use of constraints, and this can lead to errors. Let’s take the flying witch as an example: she follows a spline path in her “animation” action while oscillating up and down. This means you have to both animate the witch’s mesh and make it follow a curve.

Hello darkness, my old friend... part 2: The Action Baking!

In order to bake this, you need to select the object in the 3D View, choose Object -> Animation -> Bake Action in the menu bar, and then check every checkbox (barring the first, in case more objects take part in your action). Make sure you keep a copy of the unbaked animated objects in case something goes wrong. For some reason though, the export after baking the witch's animation produced very weird results. After many, many tests, I replaced the bézier path with a NURBS one and it finally worked. This fix didn't work for the bat flying out of the tree, however, which after much wasted time I ended up animating by hand in a simpler manner. I'm not a pro animator and don't deeply know how Blender works in this regard, so very possibly I did something wrong there. The bottom line, though: if you're short on time and something doesn't work as expected, don't waste time trying to make it work if it isn't essential. Just do it in a different way that works. No one knows what your original vision was, and they probably wouldn't notice the difference anyway.

Other tips regarding animation are the usual ones: if you’re aiming for cartoonish movement, make your animations expressive, short and snappy. Check your keyframe types and easing: natural movements don’t usually follow linear interpolation (in Blender you can set this in the Graph Editor: select some keyframes and hit T for interpolation type, CTRL + E for easing type).

Your mileage WILL vary, but these are good starting points.

Don't obsess over little details if the object is little and/or far away and you're in a hurry: keep your scope in mind at all times!


Dennis Muren (ILM visual effects supervisor for Terminator 2) said something in an interview that really resonated with me in terms of approaching lighting. It was something along the lines of “Nowadays people just bring every single light from an on-location HDR session into a scene and then begin removing whatever they don’t need. I find it’s better to begin with no lights and add only what you need”. I guess it depends on the scene at hand, but in general I fully agree with him. It works wonders for webVR experiences as well, since you want as few lights as possible to keep performance reasonable.

Sometimes you just need one light.

It was super easy to light halloVReen. It’s just a single point light in the middle of the scene with a mid purple tint and a carefully tuned distance falloff. The tint gives us a proper Halloween ambiance, and the falloff helps us obscure the upper parts of the scene, giving it a more mysterious look.
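In A-Frame terms, that whole setup is one entity with the light component. The values below are illustrative rather than the exact ones used:

```html
<!-- A single purple-tinted point light; the distance falloff
     leaves the upper parts of the scene in the dark. -->
<a-entity light="type: point; color: #b07cd8; intensity: 1.3; distance: 20; decay: 2"
          position="0 4 0"></a-entity>
```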

Materials and textures

For this project we’re using glTF as the delivery format for the models. It’s fully supported by three.js, the underlying engine of A-Frame, and supports everything we need. Materials in glTF are PBR by default, following the metallic-roughness workflow. In Blender, you’ll need Khronos Group’s Blender glTF 2.0 exporter to generate your files in this format. You also need their PBR node, which you append to your Blender file and use as the starting point for all your materials. Note that the node is made for Cycles, and that even if you switch to Blender Render and set up your materials there, the exporter will convert them to PBR materials!

Make sure you append Khronos Group’s PBR node to your .blend file.

glTF is an awesome, modern format that supports many features. When it comes to materials you need to be careful though, since PBR materials are more costly to render than your basic flat or lambert ones and can easily bring performance down on their own. It’s easy to get carried away and add normal and metallic/roughness maps for every model, but remember the target device here is an Oculus Go, which is basically a mobile device with a 2016 SoC. Skinned meshes with animation also need their rendering time and those are essential to the experience, more so than materials. So, first step to keep performance under control: only use the base color node input (for texture maps) or the base color factor color picker (for plain colors). We won’t be missing normal mapping since we’re after a cartoony look anyway, and will use a 100% roughness factor.
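In the exported glTF, a material trimmed down this way ends up as just a base color plus a fully rough, non-metallic surface; something along these lines (the name and texture index are illustrative):

```json
{
  "name": "character_material",
  "pbrMetallicRoughness": {
    "baseColorTexture": { "index": 0 },
    "metallicFactor": 0.0,
    "roughnessFactor": 1.0
  }
}
```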

While I’d usually settle on the art direction / texture style first and only then move to rigging / animation, for this project the opposite worked better. Again, there was little time to get this done, so I needed as much flexibility as possible. Leaving the texturing work for the end lets you tweak performance on the go and gain precious time to think about how the experience should look while you block out the layout. No less important: when only the texturing is left, you know precisely how much time remains, so you don’t waste it on unneeded refining.

I didn’t want to add any UI element letting the user know which objects were interactive, since that would break immersion a bit, and anyway I wanted this to be a little bit of a hide-and-seek game (it’s not like the objects are too hidden, but hey, it’s for kids!). A better way to do this is to separate the environment from the characters visually. This works well and brings a nice side effect: no need to texture the environment! Leaving it untextured helps separate both elements further and saves a lot of time. Interactive objects, on the other hand, need a good texture set. For the kind of look I was after, nothing beats a hand-painted texture. Julio put his expertise to work and created 99% of the textures from scratch. They were pretty much perfect from the beginning, so one less thing to worry about!

Vampire in a (512x512) box.

The “only one point light with a short distance falloff” lighting technique also allowed us to simply apply a black material to the werewolf and witch, since they were up there in the dark and wouldn’t be visible anyway. Two fewer texture sets to worry about! And finally, I tweaked the emissive factor for the interactive objects by plugging the base color texture into the “Emissive” channel and changing the “Emissive Factor”, so I could further separate them from the background and effectively remove a bit of the lighting influence on them.


The code was written in parallel to all the graphic stuff. It’s really super simple and I’m pretty sure it could be way more elegant but hey, I’m not a professional programmer and time was of the essence! Each interactive object only does one of two things:

  • Fire an animation when gazed upon (actually every object barring the pumpkin), or
  • Swap itself with another object when gazed upon (yep… the pumpkin)

This is handled by two separate components, animation-control and swap-object.

Here are the relevant parts of animation-control:
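In sketch form it looks something like this (simplified here, with schema and property names approximated; the full source with the exact names is in the GitHub repo — and a minimal AFRAME stub stands in for the framework so the snippet is self-contained):

```javascript
// Simplified sketch of animation-control. Names are approximations;
// see the GitHub repo for the real source. The stub below only mimics
// AFRAME.registerComponent so the sketch runs outside the browser.
const AFRAME = globalThis.AFRAME || {
  components: {},
  registerComponent(name, definition) { this.components[name] = definition; }
};

AFRAME.registerComponent('animation-control', {
  schema: {
    target: { type: 'selector' },  // the entity that actually animates
    clip: { type: 'string' }       // Action name we set in Blender
  },

  init: function () {
    const target = this.data.target;

    // 1. Store the initial animation-mixer setup so we can reinstate it.
    this.oldAnimationData = Object.assign({}, target.getAttribute('animation-mixer'));
    this.isPlaying = false;

    // 2. A click on the hotspot fires the event animation on the target...
    this.el.addEventListener('click', () => {
      // 4. ...unless it's already playing, in which case just return.
      if (this.isPlaying) { return; }
      this.isPlaying = true;
      target.setAttribute('animation-mixer', { clip: this.data.clip, loop: 'once' });
    });

    // 3. When animation-mixer emits animation-finished, restore things.
    target.addEventListener('animation-finished', () => {
      target.setAttribute('animation-mixer', this.oldAnimationData);
      this.isPlaying = false;
    });
  }
});
```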

I told you it was simple! This component reads the animation-mixer attribute set in the HTML, which defines the general properties of our interactive objects' animations: which clip plays on load (this is the name we set on the Action in Blender earlier, remember?), whether it loops and how many times, and the crossfade duration between animations (super cool feature). Remember, you need to get the animation-mixer component for this to work.
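The markup for an interactive entity looks something like this (model IDs and clip names are illustrative):

```html
<a-entity gltf-model="#witch-model"
          animation-mixer="clip: idle; loop: repeat; crossFadeDuration: 0.3">
</a-entity>
```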

So back to my component, this is what it does:

  1. Store the initial animation setup in oldAnimationData so we can reinstate it later.
  2. Add an event listener for click in the current element that will play the animation.
  3. Add an event listener for animation-finished (this is an event that gets emitted by the animation-mixer component) that will leave things as they were before.
  4. On click, check if the animation is currently playing and just return if it is.

You can see that this component gets a target (type: selector) as a property, which points to the element that will be animated. I do this instead of simply targeting the element the component is attached to so that we can put an arbitrary hotspot (a big cube, for example) anywhere in the scene, have that receive the click event, and then fire the animation on any other element. This is very important for usability with smaller or far-away objects on which it'd be difficult to place the cursor precisely.

Here are the relevant parts of swap-object:
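Again in sketch form (names approximated; the exact source is in the GitHub repo, and the same minimal AFRAME stub keeps it self-contained):

```javascript
// Simplified sketch of swap-object: hide the original entity and show
// the swap entity on click, then swap back when the swap entity's
// animation finishes. Names are approximations of the real source.
const AFRAME = globalThis.AFRAME || {
  components: {},
  registerComponent(name, definition) { this.components[name] = definition; }
};

AFRAME.registerComponent('swap-object', {
  schema: {
    swap: { type: 'selector' }  // the entity we swap in
  },

  init: function () {
    const swap = this.data.swap;

    this.el.addEventListener('click', () => {
      // The swap object loads hidden below the ground so its first render
      // is pre-warmed; move it to the origin only when actually needed.
      swap.setAttribute('position', { x: 0, y: 0, z: 0 });
      this.el.setAttribute('visible', false);
      swap.setAttribute('visible', true);
    });

    // When the swap object's animation ends, bring the original back.
    swap.addEventListener('animation-finished', () => {
      swap.setAttribute('visible', false);
      this.el.setAttribute('visible', true);
    });
  }
});
```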

Not much to explain here: I simply switch the visible property on a click event and listen for the animation-finished event on the swap object to swap back to the original object. Just a little trick here: to help minimize the initial render time of the swap object (which would pause the whole scene for some milliseconds), I put it in the DOM on the initial load like every other object, just below the ground. Then, when it’s required, I simply reset its position to (0 0 0).

For sound management, I used Howler together with Audiosprite. This allows a neat way of packing every sound together in an audio file and access any of them through a JSON index, with cross-browser compatibility. Here’s how it looks:
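In sketch form (the sprite names, offsets and file paths below are illustrative; audiosprite generates the real index for you, and `Howl` is howler.js’s constructor, assumed loaded via a script tag):

```javascript
// Audiosprite packs every sound into one file and emits an index of
// { name: [offsetMs, durationMs] }, which Howler's `sprite` option
// consumes directly. All names and numbers here are made up.
function createSounds(Howl) {
  return new Howl({
    src: ['sounds/halloween.ogg', 'sounds/halloween.mp3'],
    sprite: {
      vampire:  [0, 1800],
      skeleton: [2000, 1100],
      witch:    [3200, 2500]
    }
  });
}
// Usage: const sounds = createSounds(Howl); sounds.play('witch');
```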


Problems and workarounds

Look: I love A-Frame, webVR and its possibilities, but developing for it is a pain. You’re on your own for many things, since there really is no proper tooling and the community is still small. Things will break and behave unexpectedly, especially when you don’t have your pipeline and workflow down yet. We need better tools if we want webVR to be more widely adopted by artists. We need authoring tools, and we need faster, more convenient preview tools (A-Frame’s built-in inspector is a great start, and the glTF extension for VSCode is a great help as well). The future is starting to look better, with Mozilla funding development of a better, Blender 2.8-compatible glTF exporter. But there’s still a long way to go compared to native VR.

Anyway, here are some problems and their workarounds:

Hello darkness, my old friend… part 3: WHY ME?!?!

My .OBJ is rendered as wireframe

At some point, I tried to export the environment as a simple .obj so I could use basic materials (remember, glTF only does PBR), hoping to further improve the framerate. But the model rendered as wireframe. After some tests, it looked like some loose point(s) in the geometry were throwing the rest of the mesh off. Blender’s “select similar” and “select all by trait” didn’t help. And so began the very tedious process of deleting most of the geometry, re-exporting, testing, undoing some of the deleting, re-exporting, etc., so I could nail down where the offending vertices were. This is VERY time consuming; it’s a mistake to go down this road unless you really, really need it and have the time. My workaround, after more than an hour lost, was to simply revert to glTF, which had rendered as intended from the beginning. D’oh!

My model either stopped rendering or is super glitched after I re-exported it

For some reason, browser caching for webVR app assets seems more aggressive than for your regular images and HTML files. This is a very hard-to-find bug until you realize what’s causing it, and it can cost you many headaches and much lost time. So: remember to clear your cache every time you re-export your assets. The Clear Cache add-on for Firefox works well. Luckily, the Oculus Browser allows you to clear browsing data (the option is in its right bar). Firefox Reality doesn’t have this capability at the moment, sadly. I also tried implementing the typical cache-busting technique of generating a random number in PHP and loading assets with a query parameter that included this number, but oddly it didn’t work.

I animated a character with IK but glTF is ignoring it

You need to check the “Bake skinning constraints” option in the glTF exporter. For some reason this is unchecked by default and it’s not entirely obvious what it refers to.

Hello dark… OH COME ON!

My alphas render weird, making objects behind them transparent in the areas overlapped by my transparent texture

This is a pain to get right. There’s no easy way to correct it, and it depends on your scene. It’s caused by the drawing order in webGL. I don’t know the specifics of how three.js handles this, but I luckily found an (elaborate) way to force the drawing order as I needed it, more or less. You basically need to separate your transparent objects into groups, from the farthest from the camera to the closest. In halloVReen’s case, all of the trees that surround the environment are billboards with transparency, so they were arranged in three distinct groups or rings: outer, middle, inner. Then, you create your entities for A-Frame in the order you want them to paint:
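Something along these lines (IDs and model references are illustrative):

```html
<!-- Farthest ring first, closest ring last. -->
<a-entity id="trees-outer"  gltf-model="#trees-outer-model"></a-entity>
<a-entity id="trees-middle" gltf-model="#trees-middle-model"></a-entity>
<a-entity id="trees-inner"  gltf-model="#trees-inner-model"></a-entity>
```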

This is half the solution, since there will still be planes intersecting each other within each group. You could separate the offending objects into different glTF files and force their position in HTML as above, but this is impractical for many reasons. You need to force the internal creation order of objects (connected geometry) directly in Blender, so that the glTF export respects this order when writing every vertex position. Do note that the whole ring of billboarded trees is just one object; this is needed so the draw calls don’t go through the roof, murdering framerate on the way. Here’s the workaround that worked for me for the most part. There are still some errors here and there that thankfully aren’t that noticeable:

  1. Enter edit mode, select the geometry that should be painted in front of the rest, separate it (hit p, choose selection).
  2. Exit edit mode. Back in object mode, select this newly separated object and duplicate it with shift + d. Move it a little bit so you can select the original object and delete it.
  3. Select the object that contains the rest of the geometry (ring of trees), then this newly separated and duplicated object (lone tree billboard that should paint in front of the rest) so that the latter is the active object, and join them back into an object with ctrl + j.

It’s elaborate, but it seems to work. If you have a better workaround, I’m all ears!


So now that everything’s in its place, you need to optimize, especially if, like me, you’re targeting a mobile device. Luckily we blocked out the layout very early in production, so Julio could optimize the environment early. This let us have a rough idea of how performance would be in the final product and keep a nice balance. Keep in mind we’re developing for A-Frame, which is an abstraction built on top of three.js, which is built on JavaScript, which talks to webGL, which runs inside a browser. This overhead keeps a chunk of power out of reach, so we need to make the most of what we have. Luckily, A-Frame/three.js performs a pretty aggressive form of frustum culling that helps quite a bit here.

I wonder why this slows down to a crawl in Oculus Go?

Here are some tips:

  • Reduce your polygon count. An Oculus Rift can deal with an A-Frame scene of 100k+ triangles; try to keep this number under 50k for the Oculus Go and similar headsets. Delete every polygon you won’t be able to see in your experience, optimize your assets, remove non-essential geometry. In halloVReen we went from 100k triangles in the environment village to 20k in less than a couple of hours of aggressive optimization.
  • Keep your texture count down and your materials simple. Fewer textures take up less video RAM. Keep normal and metallic/roughness maps out. If you absolutely need them, limit them to the most prominent object in your scene and downscale them if necessary. Normal maps and big textures can tank your framerate on a whim.
  • Keep your draw calls as low as possible. Join as many objects as possible into one in Blender (ctrl + j). Fewer objects = fewer draw calls = more performance. I was wondering why performance was so bad in my first tests until I noticed I had 150+ draw calls because every object was kept separate in my Blender file!
  • Billboard as much as you can. These tips might sound super obvious to proper 3D artists, especially this one. But I can’t stress enough how much you can optimize with simple billboards without breaking immersion, especially in a cartoony setting like this: your brain won’t care that the trees don’t have any depth. So billboard as much as you can!

Finishing touches

On the night of the sixth day I showed the work in progress to Diego F. Goberna, another good old friend, who said “Man, you need a score counter”. I fully agreed, but I was in full crunch mode. No time. He was kind enough to put one together super quickly, which I implemented the next day. Nice! This was a great touch that brought the experience together and gave it purpose.

I also wanted the experience to be as atmospheric as possible. I mean, it’s Halloween… but it’s difficult to create atmosphere with the technical limitations of webVR. I wanted fog of some kind, preferably the kind that moves, and glowing pumpkin eyes, and lit torches with fire particles coming out of them, and… then… I realized seven days had already passed. I had somehow managed to avoid the 12+ hour daily crunch for five days, but the last two were inescapable, and exhaustion was beginning to set in; I’m not a teenager anymore. So for atmosphere, I simply used A-Frame’s built-in fog component. It was good enough for Silent Hill, right? The puzzle was finally complete.
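The fog component is a single attribute on the scene; the color and density below are illustrative, not the exact values used:

```html
<a-scene fog="type: exponential; color: #0d0521; density: 0.06">
  <!-- ...scene contents... -->
</a-scene>
```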

All’s well that ends well.


So there you go, that’s how I made halloVReen with a little help from my friends. It never ceases to surprise me how much time and effort such a simple experience can take to complete. Working in 3D, everything takes so. much. longer. I had to cut some stuff in order to meet my self-imposed deadline. For example, the central tower would have had its door and windows flap and laugh maniacally when looked upon, as an homage to the Evil Dead movies. And the bat flying off the tree was going to be a colony of 3–4 bats with a more intricate path, but the component didn’t take multiple targets into account and I was at the end of the crunch, so there was no time to even think about implementing it. Also, let’s not talk about the graphic design aspect (my area of expertise!). What was a placeholder became final, as usually happens with these things. Man, those logo and text layouts... ugh! Oh well! Next time will be better, right?

Anyway, it was received well, and was even featured in Supermedium, Firefox Reality and the Oculus browser. Right on!

What will my next foray into webVR be? I’m already thinking about it… stay tuned!

You can check out the source code, credits and thanks for halloVReen on GitHub. Do what you want with it!

Have a cool project in mind? I’m currently looking for work. Check more stuff at my website.