Rendering a cube-mapped 360 video in Three.js

Facebook published an article describing how they use a cube map for 360 video rather than an equirectangular projection. They claim that using a cube map results in “25% fewer pixels per frame”. Even if you’re not working at “Facebook’s scale”, this can mean higher-quality 360 video for the same bandwidth! We want that here at Digital Domain.

Unfortunately, they don’t talk about how to actually render their cube maps.

Our HTML5 player is based on Three.js. For normal equirectangular videos, we use the built-in SphereGeometry to map the video onto the inside of a sphere, then place a camera in the middle. Cubes follow the same principle, except you are inside of a cube. But the default BoxGeometry in Three.js doesn’t work quite right.
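For context, the sphere version looks roughly like this. It's a sketch, not our exact player code: `THREE` and `videoEl` are assumed to exist, and the helper name and sphere parameters are illustrative.

```javascript
// Sketch of the equirectangular setup: a video texture on the inside of a
// sphere, with the camera sitting at the center. THREE and videoEl are
// passed in; names and parameters here are illustrative.
function buildVideoSphere(THREE, videoEl) {
  const texture = new THREE.VideoTexture(videoEl);
  texture.minFilter = THREE.LinearFilter; // video frames have no mipmaps

  const geometry = new THREE.SphereGeometry(500, 60, 40);
  const material = new THREE.MeshBasicMaterial({ map: texture });
  const sphere = new THREE.Mesh(geometry, material);

  // Flip the sphere inside out so the texture is visible from within.
  sphere.scale.x = -1;
  return sphere;
}
```

The camera then just sits at the origin, e.g. `camera.position.set(0, 0, 0)`.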

It maps the entire video onto each face of the cube. The UVs for the cube just aren’t set up for Facebook’s cube map layout of Front-Back-Top above Bottom-Left-Right.
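Concretely, each face occupies a fixed third-by-half rectangle of the texture. A small sketch of the layout math (the function name is mine; the offsets follow the Front-Back-Top over Bottom-Left-Right arrangement described above):

```javascript
// UV offsets for the 3x2 cube map layout:
//   top row:    Front, Back, Top
//   bottom row: Bottom, Left, Right
// Returns the [uMin, vMin, uMax, vMax] rectangle for a face name.
function faceUVRect(face) {
  const layout = {
    front:  [0, 0], back: [1, 0], top:   [2, 0],  // row 0 = top half
    bottom: [0, 1], left: [1, 1], right: [2, 1],  // row 1 = bottom half
  };
  const [col, row] = layout[face];
  const u0 = col / 3;
  const v0 = 1 - (row + 1) / 2; // in UV space, v = 1 is the top of the texture
  return [u0, v0, u0 + 1 / 3, v0 + 1 / 2];
}
```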

I started by examining the Three.js source code with the idea of modifying the UVs in code. This turned out to be annoying to work with, and there was no documentation on the order of faces in Three.js, so I turned to another idea: making a cube model and UV mapping it in Blender.

I made a simple image with precise dimensions, each square 1000x1000 px, which showed each face’s position, to map onto the cube:

Then began the process of mapping each face of the cube onto this UV map.

I also took the opportunity to flip the normals of the cube. For spheres, we just set their scale to -1 in JavaScript, which worked fine. Doing this in the model itself eliminates that step.
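Flipping the normals amounts to reversing the winding order of each triangle so the faces point inward. As a sketch of what that means for an index buffer (a hypothetical helper to illustrate the idea, not part of the Blender workflow):

```javascript
// Reverse the winding order of each triangle in an index buffer.
// Swapping two vertices of every triple makes each face point the
// other way (inward instead of outward).
function flipWinding(indices) {
  const out = indices.slice();
  for (let i = 0; i < out.length; i += 3) {
    const tmp = out[i + 1];
    out[i + 1] = out[i + 2];
    out[i + 2] = tmp;
  }
  return out;
}
```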

Exporting this to a Three.js JSON file results in a nicely formatted file. I can tell that the UVs are mostly correct because they are all either 1, 0, 1/3, 2/3, or 1/2.

With that done, it was time to import this file into Three.js. The basics of this can be seen at

As I quickly found out, the Blender exporter for Three.js can export either a Geometry or a BufferGeometry. BufferGeometry is preferred because it stores vertex data in typed arrays that can be handed straight to the GPU; not that it matters for something as simple as a cube, but everything counts.

However, the JSONLoader in Three.js doesn’t seem to support BufferGeometry, simply erroring when the JSON data is passed in. So after some fiddling I was able to write this quick and dirty “importer” (note that I am using ES6 and transpiling; it also relies on typed arrays such as Float32Array being in the window scope):
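The idea is simple: read each attribute out of the JSON and hand it to a BufferGeometry. A minimal sketch, assuming the exporter writes the standard BufferGeometry JSON layout (`data.attributes.*` with `itemSize`/`type`/`array` fields); the helper name is mine and the API calls match the Three.js of the era:

```javascript
// Quick-and-dirty loader for Three.js BufferGeometry JSON.
// Assumes { data: { attributes: { position: { itemSize, type, array }, ... },
//                   index: { type, array } } }.
const TYPED = { Float32Array, Uint16Array, Uint32Array };

function parseBufferGeometryJSON(json, THREE) {
  const geometry = new THREE.BufferGeometry();
  const attrs = json.data.attributes;
  for (const name of Object.keys(attrs)) {
    const { itemSize, type, array } = attrs[name];
    // Rebuild the typed array named in the JSON, e.g. "Float32Array".
    geometry.addAttribute(name, new THREE.BufferAttribute(new TYPED[type](array), itemSize));
  }
  if (json.data.index) {
    const { type, array } = json.data.index;
    geometry.setIndex(new THREE.BufferAttribute(new TYPED[type](array), 1));
  }
  return geometry;
}
```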

Loading it in our player worked! (ish)

There were a couple problems:

  • The rotation and flip of the UVs were wrong, but that was expected.
  • There are seams at the edges of the cube, because the edges of neighboring faces are bleeding into the wrong faces.

The orientation issue is easy: just rotate and flip the UVs in Blender. To make that easier, I decided to take a single frame of the video and map it in Blender. I used the following ffmpeg command:

ffmpeg -i WSL_Tahiti-MASTER_FBcubemap_10MbpsVBR.mp4 -vf "select=gte(n\,1000)" -vframes 1 cubemap_frame.png

After flipping and rotating the UVs around in Blender and fighting with UVs that sat precisely on the seams, we get a valid cube projection!

Trying it out in the player, we have a mostly-working cubemapped 360 video:

Still two issues:

  • The seams are pretty obvious and break the immersion
  • Straight lines have a kink in them, as you can see in the surf line in the above picture

The seams appear because of a combination of two things: video compression of the cubemapped video, which can essentially “smear” the edges of each face onto neighboring faces, and floating-point inaccuracy when rasterizing the image, which may take a texture sample from a neighboring face.

A great post talking about UV map seams:

Ideally, we could separate the faces in the cubemap slightly and bleed their edges out until they meet. This would solve the problem on the encoding side, at a slight loss of resolution for each face.

Alternatively, it might be possible to break the texture up into 6 separate textures which are each individually mapped onto the cube. I am unsure whether this would improve the seam problem or not. It would be worth a try, because then each texture could be given clamp-to-edge wrapping (THREE.ClampToEdgeWrapping).

As a quick, easy fix, I turned back to the UV map in Blender. Simply by shrinking the UVs for each face ever so slightly, we can ensure that when a sample does land on the wrong side of an edge, it still takes data from the correct face.

The resulting JSON cube:

It could probably use some tweaking to find the maximum size of the UV squares without seams.
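The shrink itself is just a tiny inset of each face's UV rectangle toward its center. A sketch of the math (the epsilon here is exactly the value that would need tweaking):

```javascript
// Inset a UV rectangle [uMin, vMin, uMax, vMax] by eps on every side,
// so texture samples near an edge stay inside the correct face.
function insetUVRect(rect, eps) {
  const [u0, v0, u1, v1] = rect;
  return [u0 + eps, v0 + eps, u1 - eps, v1 - eps];
}
```

Larger eps means safer seams but throws away more of each face's resolution, so you want the smallest value that still hides them.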

As for the distortion, it turned out to be a bug in our 360 player: the camera was raised 10 units in Three.js. That wasn’t noticeable in a spherical projection, but it is horrible for a cube projection. Make sure your camera is in the exact center of the cube!

And then we have a fully functional 360 viewer using a cube map!

It’s not perfect. I can still tell where the seams of the cube are: roughly a vertical line a third of the way across the image. It’s not overly noticeable when playing a video, though. You can check out the final result here.

We’ll likely be rolling out cubemapped videos wherever we can, because they essentially offer free video quality.

Also on our radar for the long term is what Facebook calls Pyramid Encoding, where most of the image quality (and thus bandwidth) goes to where the user is looking. This requires a separate stream for each direction the user may be looking, combined with algorithms to choose between them, so it is not a short-term project.