How to Create VR Video in Blender
Making 360° 3D video for fun and profit, with free software!
Blender is an amazing 3D creation suite, but there’s a lot to it. My own efforts to produce VR video in Blender involved some trial and error, but the results were really motivating. Based on that experience, I’ve decided to write up a guide to help others have similar success while skipping all of the trial and error.
If you want to turn your own 3D scene into a VR video and share it on YouTube, you’re in the right place. Follow this guide, and you’ll end up with an immersive 3D video your viewers will want to reach out and touch.
Note that when I wrote this article, Blender 2.79 was the newest version — that’s no longer the case! If you’re using a newer version, you’ll likely need to adjust these instructions. Hopefully I’ll be able to spend the time on an updated article soon.
The Short Version
For those who are already experienced with Blender and have a scene ready to go, here’s what you need to know. There’s a lot more detail below, so if you’re confused, keep reading.
- Use the Cycles rendering engine.
- Set your render resolution to be landscape orientation with a width that’s twice as large as the height. Both width and height must be powers of two.
- Optional but highly recommended: Set your camera’s rotation to 90 degrees in X and 0 degrees in Y.
- Enable the Views section of the Render Layers properties. Within that section, select Stereo 3D.
- In the Lens section of your Camera properties, choose Panoramic. Set Type to Equirectangular. In the Stereoscopy section choose Off-Axis and check the Spherical Stereo box. Scale your Convergence Plane Distance and Interocular Distance to match your scene’s scale (e.g. if your scene is 1/8th scale, divide the default distance values by 8).
- In the Output section of your Render properties, set the Views Format to Stereo 3D and the Stereo Mode to Top-Bottom.
- Use Spatial Media’s metadata injector (available at https://github.com/google/spatial-media/releases/) to mark your rendered video as 360° 3D. Check the boxes for “My video is spherical (360)” and “My video is stereoscopic 3D (top/bottom layout).”
- Play the video in your favorite viewer.
The Long Version
Choose a Blender Scene
In these instructions I use a public domain scene from Blend Swap called “medieval kind of seaport”. You can either create a free Blend Swap account and download that scene for yourself to follow along, or else use any other scene you want. Your scene and materials should be configured for the Cycles rendering engine, which brings us to the next topic…
Use the Cycles Render Engine
Blender comes with two main render engines: Blender Render and Cycles. Of the two, only Cycles can handle the spherical video tricks used in this guide, so whatever scene you choose must be set up for Cycles. If you’re using the seaport scene — or many other scenes designed for photorealism — you’re all set. Scenes originally intended for the Blender Render engine, though, might require some effort to convert. The render engine selector is at the top of the UI near the logo & version number like so:
Animate the Camera
Once you have a scene that’s textured and lit for Cycles, you’ll want to add some motion to make the video interesting. For example, this seaport has an arch, so why not take our viewers on a trip through it?
To do that, we’ll need to create two “key frames” in Blender’s animation timeline: one key frame for the start location and another for the end location. Once those keys are set, Blender handles smoothly moving the camera between them over time.
In a nutshell, the steps are:
- Go to the first frame of the animation. The keyboard shortcut is Shift+Left.
- Select the camera by right-clicking it in the 3D Viewport, or by left-clicking it in the Outliner.
- In the Numbers panel — which you can toggle with N — set the camera’s X rotation to 90 and its Y rotation to 0. Using other values may disorient your viewers, so think carefully before changing them.
- Move the camera to a good starting location using either G and then the mouse, or else the Numbers panel. For reference, I started my camera out at (-0.5, -0.3, -1.5).
- Add a location key frame. With the camera selected, hover the mouse over the location editor in the Numbers panel and then press I. If the location editor turns yellow, then the location key frame is set.
- Go to the last frame of the animation with Shift+Right.
- Move the camera to the ending location (as in step 4). For reference, I put my camera’s ending location at (0.6, 4.0, -1.5).
- Add another key frame (as in step 5).
Check your work by peeking through the camera (press 0 on the numpad, or browse the 3D Viewport menu: View > Cameras > Active Camera), then watching the animation (Alt+A) and tweaking as necessary. You can add more key frames if you wish, but keep things as subtle and smooth as possible. Your viewers will feel uncomfortable or sick if the camera motion changes sharply in VR.
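Under the hood, Blender fills in the camera’s location for every frame between your two keys by interpolating. Here’s a rough sketch of the idea in plain Python (not Blender code — and note that Blender’s default is a smoother Bezier ease-in/ease-out rather than the straight-line blend shown here):

```python
# Keyframe interpolation, sketched with simple linear blending. Each frame's
# camera location is a mix of the two keyed locations, controlled by t.

def lerp(a, b, t):
    """Blend two (x, y, z) locations; t runs from 0.0 (start key) to 1.0 (end key)."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

start = (-0.5, -0.3, -1.5)  # keyed location at the first frame (from the text)
end = (0.6, 4.0, -1.5)      # keyed location at the last frame

for t in (0.0, 0.5, 1.0):
    print(t, lerp(start, end, t))
```

With only two keys, the camera glides in a straight line through the arch; extra keys in the timeline simply add more waypoints to blend between.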
Configure the Render Engine
Since you’re not doing a typical flat image render, you’ll need to dive into some nooks and crannies of Blender’s settings where you’ve probably never had a reason to wander before. Our first stop is the Render Layers properties, where you’ll need to enable the Views section and make sure Stereo 3D is selected.
Next up is Render properties. In the Dimensions section, select a render resolution that is twice as wide as it is tall and a power of 2 in each dimension. For example, 2048 by 1024 would work, since 2048 = 2¹¹ and 1024 = 2¹⁰ (and half of 2048). YouTube will accept up to 8192 by 4096, but I’d suggest starting out small and quick for testing unless you’ve got either a free render farm or superhuman patience. For simplicity’s sake, make sure the percentage resolution scale slider is set to 100%, 50%, or 25%, since other values will violate the power-of-two rule. As an example, for my test render I set a base resolution of 2048 by 1024 and set the resolution scale slider to 25%, for a final resolution of 512 pixels wide by 256 pixels tall. This is tiny but fast, perfect for a test render.
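If you want to double-check a candidate resolution, the rules above boil down to a few lines of Python (a standalone sanity check, separate from Blender):

```python
# Check the resolution rules: landscape 2:1 aspect, with power-of-two sides.

def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

def valid_vr_resolution(width, height):
    return width == 2 * height and is_power_of_two(width) and is_power_of_two(height)

print(valid_vr_resolution(2048, 1024))  # base test resolution: True
print(valid_vr_resolution(512, 256))    # after the 25% scale slider: True
print(valid_vr_resolution(1920, 1080))  # ordinary HD: wrong aspect, not powers of two
```

This is also why the scale slider must stay at 100%, 50%, or 25% — halving a power of two gives another power of two, but other percentages do not.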
In the Output section of the Render properties, you have a choice to make: have Blender produce a single video file, or have it produce a folder full of individual frame images. Having Blender build the video file right away is simplest, but risky — you’ll lose the whole render if something goes wrong halfway through. With individual image files, even the worst crash will only ever cost you a single frame, but the trade-off is that you’ll have to do slightly more legwork to assemble the images into a video file. Video is easier, images are safer.
No matter which option you go with, set the Views Format to Stereo 3D, set the Stereo Mode to Top-Bottom, and choose a destination folder you’ll be able to find easily later.
Encoding Option A: The Easy Route
For the easier route, choose output type “FFmpeg video.” Then in the Encoding section’s Presets dropdown, choose “h264 in MP4.” For your final render you’ll probably want to increase the quality settings, but for our low-resolution test render there’s no need.
Encoding Option B: The Safe Route
If you want the safer route, choose output type “PNG.”
Later, when Blender’s done rendering all of the individual frames as .png images, you’ll need to run a separate program to turn the images into a video. In my case, I just ran FFmpeg myself instead of having Blender run it for me. The terminal command I used looked like this (it should be a single line when you run it):
$ ffmpeg -r 30 -i blender_output/%04d.png -c:v libx264 -vf fps=30 -pix_fmt yuv420p output.mp4
Configure the Camera
In the Camera properties, expand the Lens section and choose Panoramic, then in the Type drop-down, choose Equirectangular. If you don’t see Equirectangular in the list of types, double check that your rendering engine is Cycles and not Blender Internal. In the Stereoscopy section, choose Off-Axis, and check the box for Spherical Stereo.
This next part is going to require us to take a break from tedious box-checking and do some arithmetic. To make the 3D effect in our final video both comfortable and realistic, we need to supply Blender with correct values for Convergence Plane Distance and Interocular Distance. If we set the numbers too large, the depth will be over-exaggerated. Too small, and it will be too subtle, making our scene feel flat.
Blender’s internal measurements default to meters (unless you’ve fiddled with the settings), so any default numbers assume that 1 Blender unit in your scene corresponds to 1 meter in real life. For example, if our scene is built at half of real-life scale, then we’ll need to divide Blender’s defaults by 2 to match.
So how do we determine our scene’s scale? In the seaport scene, we have 3D models of buildings with doors, and we can compare the height of these doors with the height of a real door, which should be around 80 inches (2.032 meters). Selecting a door model in the scene, I see that it’s 0.323 meters (12 inches!) tall, meaning our scene is roughly scaled to 0.323 / 2.032 or 1 / 6.291, and that we should divide Blender’s values by 6.291 to make them realistic. Blender’s defaults were 1.95 meters for Convergence Plane Distance, and 0.065 meters for Interocular Distance, so after dividing by 6.291 they’re 0.310 and 0.010 respectively.
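Here’s that same arithmetic as a small Python sketch, using the door measurements from the text (a different scene would use its own reference object):

```python
# Estimate the scene's scale factor from a known real-world reference object,
# then divide Blender's default stereo distances by that factor.

door_in_scene = 0.323  # door model height, in Blender units
door_real = 2.032      # typical real door height, in meters (80 inches)

scale = door_real / door_in_scene    # ~6.291, i.e. the scene is ~1/6.3 scale

convergence_plane = 1.95 / scale     # Blender's default 1.95 m -> ~0.310
interocular = 0.065 / scale          # Blender's default 0.065 m -> ~0.010

print(round(scale, 3), round(convergence_plane, 3), round(interocular, 3))
```

The key point is that both stereo distances shrink by the same factor: the virtual "eyes" must be as small, relative to the scene, as your viewer’s real eyes would be if they were shrunk down to stand inside it.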
Render a Low-Resolution Test
The keyboard shortcut to render a single frame is F12. Don’t be alarmed when it starts to render in a dim red color — that’s Blender’s way of indicating that you’re only seeing one eye’s perspective (the other eye will be blue, à la old-school anaglyph 3D glasses). If your settings are correct, you should end up seeing something that looks like this within the Blender UI:
If so, then it’s time to render your low-resolution test video. Click the Animation button at the top of the Render properties panel, then sit back and wait. Depending on your computer hardware, the Blender settings you chose, and the complexity of your scene, this could take anywhere from minutes to weeks.
When the rendering and encoding are all finished, you should have an odd-looking square video with one eye’s perspective positioned above the other. It should look something like this:
If you uploaded it to YouTube at this point, it would play back exactly like you see above: warped, square-shaped, and split across the middle. To fix this, you’ll add metadata into the video file that will tell YouTube to give it the special 360° 3D treatment we want. Fortunately, adding the metadata is easy using Spatial Media’s Metadata Injector, which you can grab from Github here: https://github.com/google/spatial-media/releases/
Launch the Metadata Injector, then click Open and select your video. Check the boxes for “My video is spherical (360)” and “My video is stereoscopic 3D (top/bottom layout),” and then click “Inject metadata” to choose where the modified video will be saved.
Upload to YouTube
Now for the big moment! Your low-resolution test video is ready for prime time, so upload it to YouTube just like any other video. When processing is complete, you’ll be able to view it in 360° like this:
Besides the low resolution, it looks good! If you view it on a mobile device from within the YouTube app, you’ll see a Google Cardboard icon.
Tap that icon, and you’ll get immersive 3D, like this:
Take a test drive, and make sure everything looks as expected. If so, it’s time to crank up the resolution in Blender (YouTube accepts a max resolution of 8192 by 4096, if you want to go all-out) and then repeat the last few steps. Specifically, you’ll want to increase the video encoding quality settings, render & encode a new video, inject the metadata, and upload.
That’s it — you now have everything you need to create beautiful video experiences in immersive 3D! Let your creativity run wild, and share your hard work with the world. My own full-resolution render is at the top of this page as an example. Have fun!
Wait! Help Others?
Thanks for making it this far. If you found this useful or interesting, please give the clap icon a few clicks below. Not only does that help other readers find this more easily, but also it tells me that I was able to help someone out, which motivates me to write more guides like this one. Thank you!