A virtual journey through sound, powered by A-Frame

David Robinson
Published in Mozilla Tech · Jun 7, 2017

Here at TheTin we run monthly Tinnovation sessions: a chance to explore and discuss new trends, tools and technologies. They often lead to internal projects, which act as a practical way to work with something new, outside the normal constraints of deadlines, and to take more risks than we normally would on client work. Our latest Tinnovation project, Band Explorer VR, is up and running — but how did we do it?

This is an abridged version of an original article, which you can find here.

Project inception

Following on from a Tinnovation session focusing on VR, we knew we wanted to have a go at building something ourselves, but there had to be a reason for doing so. The underlying goal was not VR for its own sake, but to produce something where the medium itself allowed for a solution with benefits over and above a more traditional 2D experience. The end product should let the user do something faster, more easily or in a more engaging way.

We ended up going back to an idea we had first looked at several years ago: exploring the connections between artists, a bit like the BBC’s Comedy Connections, but for music.

Tech considerations

First thing, data. We love Spotify and use it every day — it has a robust API, and already has related artists mapped. It also has the 30-second samples of music we would need — a music exploration tool without music wouldn’t be much use!

They even had a visual representation of this in the form of an Artist Explorer, a nested tree diagram, in their developer showcase. It was built by one of Spotify’s own developers to help showcase the API, and all the code was available in a repo for forking.

So we had our API, and a codebase to start, but how were we going to build our VR experience?

Initially, we thought about using Unity: a mature platform with lots of built-in VR capabilities and cross-platform support from (almost) one code base. It’s possible to code in C# or UnityScript (based on JavaScript), skills we have in-house, but it’s not a tool we have extensive experience with, and it’s geared towards richer experiences than we had the resources to invest in.

It took only five minutes of looking at A-Frame to realise we had found our solution. A-Frame is built on top of three.js, but instead of coding every object by hand it has lots of primitive shapes built in. You don’t even need to create a scene, camera or renderer, as these are set up automatically. It’s built around the traditional building blocks of a web page, DOM elements, extended with JavaScript, which makes it easier to keep the code and the layout separate. It incorporates an entity-component-system architecture that allows for easy extensibility, and it has a great community of developers actively working together to build an ever expanding wealth of components and tools.
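
To give a flavour of how little boilerplate is involved, here is a minimal scene (not code from our project) with a hand-rolled spin component attached to a primitive:

```html
<html>
  <head>
    <script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>
    <script>
      // The entity-component-system in action: a reusable "spin"
      // behaviour that attaches to any entity via an HTML attribute.
      AFRAME.registerComponent('spin', {
        tick: function (time, delta) {
          // rotate the entity a little on each frame
          this.el.object3D.rotation.y += delta / 1000;
        }
      });
    </script>
  </head>
  <body>
    <!-- No manual scene, camera or renderer setup; A-Frame provides them. -->
    <a-scene>
      <a-box spin position="-1 0.5 -3" color="#4CC3D2"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
    </a-scene>
  </body>
</html>
```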

Starting to code

The first step was to pull down the repo from the Spotify Developer Showcase. At first I tried swapping the 2D elements in the original project for basic objects in a VR space. But it quickly became apparent we wanted to represent the data in a different way.

After a few days of trying to bend the original project to fit, I realised I would also need my own helper files to load, filter, sort and group the artists for our needs, so I set about creating an artist controller.
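
Its core job reduces to something like the following sketch. The function names here are hypothetical rather than our actual code, and you would need a valid Spotify access token:

```js
// Hypothetical sketch of the artist controller's core task: pull an
// artist's related artists from the Spotify Web API, then filter and
// sort them ready for layout in the scene.
function loadRelatedArtists(artistId, accessToken) {
  var url = 'https://api.spotify.com/v1/artists/' + artistId + '/related-artists';
  return fetch(url, { headers: { Authorization: 'Bearer ' + accessToken } })
    .then(function (response) { return response.json(); })
    .then(function (data) {
      return data.artists
        // drop any artist without an image to show in the scene
        .filter(function (artist) { return artist.images.length > 0; })
        // most popular first, so the biggest names sit nearest the user
        .sort(function (a, b) { return b.popularity - a.popularity; });
    });
}
```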

Once I had the artists logging nicely to the console, I set about getting something to actually appear in a VR scene. Thanks to the ease of working in A-Frame, it didn’t take long at all before I had artists appearing in rows, automatically arranging themselves based on the sorting within the artist controller. It was immensely satisfying to have something working, and it was time for our design function to really get involved.
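
Because A-Frame entities are just DOM elements, the layout step amounts to creating and positioning nodes, roughly like this hypothetical sketch:

```js
// Hypothetical layout sketch: one <a-image> entity per artist,
// spaced evenly along a row in front of the camera.
function layoutArtists(artists) {
  var scene = document.querySelector('a-scene');
  artists.forEach(function (artist, i) {
    var el = document.createElement('a-image');
    el.setAttribute('src', artist.images[0].url);
    // centre the row on the camera: 1.5m between artists, 4m away
    var x = (i - (artists.length - 1) / 2) * 1.5;
    el.setAttribute('position', x + ' 1.6 -4');
    scene.appendChild(el);
  });
}
```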

Design research

The first thing that needed to be done was some fairly extensive research into the topic — and thankfully, there’s plenty of material out there for the newbie to get started with. As we discussed in a recent Tinnovation session on design trends, aesthetic standards for VR are still being figured out, but there are a few established rules to follow. Around the same time that Cardboard was launched, Google released a handy set of guidelines that addressed the most basic principles — always maintain head tracking, keep the user at a constant velocity when they’re moving in the app, don’t make too many brightness changes, and anchor the user to their environment were some of the key takeaways.

UX

Once we had a better understanding of what we were dealing with, it was time to start designing. But before we could leap headfirst into the design world of tomorrow, we had to think about the basic UX of the thing. Who would be using it? What did we want our main features to be? What would be the user’s primary journey? It turns out that even when designing for cutting-edge tech, it’s always helpful to start with something familiar — good old pen and paper. We drew up some user flows and some initial layout sketches, including an ideal interface based on familiar objects like vinyl records and their sleeves, which would be simplified down the line.

Eventually, we came up with a wireframe of sorts for our experience — we could only achieve so much of our vision with pen and paper, though. There was little point in creating an extensive mockup in Photoshop or Illustrator in the limited timeframe we had, especially when we weren’t sure what would work style-wise in the VR environment — so it was time to strap on the headsets and start testing.

Trial and error…and error…

It was through a pretty long process of trial and error that we figured out what would work in terms of the general look and feel of our experience. We kept in mind all the basics from our research: users don’t like spaces that are too bright, so we kept the colour palette dark; floating text was a no, so we made sure to align any copy with objects in the scene. We also wanted to make use of the depth of the VR space while keeping the objects further out visible, which was a tricky problem to solve.

There were some limitations with A-Frame — some animated transitions had to be left out so that the experience could run smoothly. We faced a few challenges in getting the audio to play in a way that made sense too — we didn’t want the user to be turning their head and triggering sounds every second — so we made the decision to have the user click to play the audio to avoid any sound clashes. We added a simple media player UI too, allowing the user to skip between tracks, pause the track if they wanted to, and generate a playlist that they could listen to in Spotify later on.
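
In A-Frame this is a small change: the standard sound component can defer playback until a named event fires. A rough sketch, assuming a gaze cursor on the camera so entities receive click events (the file name and id are illustrative):

```html
<!-- A gaze cursor on the camera turns a gaze-and-click into DOM click events. -->
<a-camera>
  <a-cursor></a-cursor>
</a-camera>

<!-- The sound component's "on" property waits for the named event, so the
     preview plays on click rather than whenever the user looks around. -->
<a-image src="#artist-photo" position="0 1.6 -3"
         sound="src: url(preview.mp3); on: click"></a-image>
```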

We were close to achieving the vision set out in our sketches, but it needed something more than just floating heads in a black space. We needed to make the environment a little more ‘real’ — adding a simple horizon with a recognisable landscape made a huge difference. A sky, a ground and a subtle gradient gave the space some character. The user was no longer staring into an empty black void and was instead in an environment that felt at least a little familiar. Spotify’s own collection of artist images completed the interface.
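
That environment amounts to only a couple of extra entities; something along these lines (the colours are placeholders rather than our final palette):

```html
<!-- A dark sky plus a wide ground plane anchor the user in the space. -->
<a-sky color="#0b0b15"></a-sky>
<a-plane rotation="-90 0 0" width="100" height="100" color="#15151f"></a-plane>
```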

Back to development

Thanks to the simple way in which A-Frame handles assets, it didn’t take long at all to incorporate all of our assets into the project, giving us a functional interface. A-Frame also provides a wonderful visual inspector, which can be used to tweak the positions of all the elements in real time, cutting out a lot of trial and error.
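
Assets are declared once in an <a-assets> block, preloaded before the scene renders, and then referenced by id. For example (file names and ids here are illustrative):

```html
<a-scene>
  <a-assets>
    <!-- Declared assets are preloaded before the scene renders. -->
    <img id="artist-photo" src="images/artist.jpg">
    <audio id="preview" src="audio/preview.mp3"></audio>
  </a-assets>

  <!-- Entities reference the preloaded assets by selector. -->
  <a-image src="#artist-photo" position="0 1.6 -3"
           sound="src: #preview; on: click"></a-image>
</a-scene>
```

The inspector, meanwhile, opens in any A-Frame scene with ctrl + alt + i.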

So we had our layout and our interface. Audio was playing, playlist saving was functional, and it all worked rather well in a desktop browser. But this was a VR project, and whilst it ran fine on powerful desktop hardware, we always wanted this to be a mobile experience too.

The complications of the bleeding edge

The amazing team working on A-Frame are exceedingly active, and each new version brings performance enhancements across the board, but browsers are updated less frequently. We were developing with beta versions of browsers, and the WebVR spec (which allows integration with VR headsets and enables hardware acceleration) had yet to be implemented in stable releases. As such, performance was well below where we wanted it to be.

And then, in the first week of February, Chrome 56 for Android came out of beta. The impact was huge. Performance increased dramatically. The lag was greatly reduced, the image was far less pixelated, and everything just worked.

So how did we do?

We are delighted with where we have got to: a stable build, running on publicly accessible browsers, which anyone can use. There are still issues to overcome, but we have learned so much.

But does it solve our original goal — to produce something in VR which works better than its 2D counterpart?

We believe it does.

We did have to drop some features. We wanted to enable voice control, but due to a “feature” of Chrome this would always have to be preceded by a button press, as it isn’t possible to have the microphone constantly listening for commands. But overall we achieved pretty much everything we set out to do.

Look out for more developments in the future — but for now, please enjoy Band Explorer VR.

As your brand and technology partner we’ll help you discover what’s possible.

We’ll make sure that the way we work is the right fit for your business, and we’ll ask the right questions to make sure you’re set up for success.

Together, we’ll create a unique advantage for your brand.

We can help build your brand through technology, email info@thetin.net
