Motion Blur for mobile devices in Unity

Michael Short · Published in spaceapetech · Sep 6, 2018 · 5 min read

Wikipedia defines motion blur as:

Motion blur is the apparent streaking of moving objects in a photograph or a sequence of frames, such as a film or animation. It results when the image being recorded changes during the recording of a single exposure, due to rapid movement or long exposure.

When we capture an image with a camera, the shutter opens, the image is captured by the sensor and then the shutter closes again. The longer the shutter is open, the more light the sensor can capture. However, leaving the shutter open for longer also means that the image being captured can change.

Imagine we are trying to capture an image of a car speeding along a race track. If the shutter stays open for a whole second, the car will have shot past the camera and the entire image will be a blur. Now, if we were to open the shutter for a fraction of that time, say 1/500th of a second, chances are we will capture the image with no blurring at all.

Motion blur is a side effect of leaving the shutter open for a long period of time. In games it can be desirable to simulate this effect, as it adds a sense of speed and motion to our scenes. Depending on the genre, it can bring a whole other level of realism. Genres that may benefit from this effect include racing, first person shooters and third person shooters, to name a few.

Pipeline Overview

We wanted to develop a motion blur effect for one of our games, a racing game. There are a number of different implementations currently available.

Frame Blur

The simplest method of simulating motion blur is to take the previous frame’s render target and interpolate between that and the current frame’s render target. When programmable shaders first came about, this is how it was done. It’s really simple to implement and doesn’t require any changes to the existing render pipeline. However, it isn’t very realistic, and you don’t have the ability to blur different objects in the scene at different scales.
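
To make that concrete, here is a minimal sketch of a frame blur pass (not the method we shipped), assuming a standard full-screen post effect where the previous frame has been copied into a texture we’ve called _PrevFrameTex:

    // Blend the current frame with a copy of the previous frame.
    sampler2D _MainTex;      // current frame
    sampler2D _PrevFrameTex; // previous frame, copied after each present
    float _BlendFactor;      // 0 = no blur, higher = longer trails

    fixed4 frag (v2f_img i) : SV_Target
    {
        fixed4 current  = tex2D(_MainTex, i.uv);
        fixed4 previous = tex2D(_PrevFrameTex, i.uv);
        return lerp(current, previous, _BlendFactor);
    }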

Position Reconstruction

A step up from frame blurring is position reconstruction. In this method we render the scene as we normally would. Then we sample the depth buffer for each pixel in the render target and reconstruct the screen space position. Using the previous frame’s transformation matrices we then calculate the previous screen space position of that pixel. We can then calculate the direction and distance, in screen space, and blur that pixel along them. This method assumes that everything in the scene is static: it expects that the world space position of each pixel in the frame buffer does not change. It is therefore great for simulating motion from the camera, but it’s not ideal if you want to simulate finer-grained motion from dynamic objects in your scene.

Velocity Buffer

If you really need to handle dynamic objects, then this is the solution for you. It’s also the most expensive of the three. Here we need to render each object in the scene twice: once to output the normal scene render target, and again to create a velocity buffer (usually an R16G16 render target). You could circumvent the second draw call by binding multiple render targets if you wish.

When we create our velocity buffer, we transform each object we render from object space by both the current and the previous world-view-projection matrices. Doing this we are able to take world space changes into account as well. We then calculate the change in screen space and store this vector in the velocity buffer.
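
As a rough sketch, the velocity pass might look something like this (we’re assuming the previous frame’s model-view-projection matrix is uploaded from C# under a name of our choosing, _PrevMVP):

    // Transform each vertex by the current and previous MVP matrices,
    // then write the screen space delta into an R16G16 target.
    float4x4 _PrevMVP;

    struct v2f
    {
        float4 pos     : SV_POSITION;
        float4 curPos  : TEXCOORD0;
        float4 prevPos : TEXCOORD1;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos     = UnityObjectToClipPos(v.vertex);
        o.curPos  = o.pos;
        o.prevPos = mul(_PrevMVP, v.vertex);
        return o;
    }

    half4 frag (v2f i) : SV_Target
    {
        // Perspective divide, then remap both positions to [0, 1].
        float2 cur  = (i.curPos.xy  / i.curPos.w)  * 0.5 + 0.5;
        float2 prev = (i.prevPos.xy / i.prevPos.w) * 0.5 + 0.5;
        return half4(cur - prev, 0, 0);
    }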

Implementation

We decided to implement the Position Reconstruction method.

  • Frame blurring wasn’t an option — this method was too old school and didn’t offer enough realism.
  • The camera in our game follows the player’s vehicle, which is constantly moving, so even though we can’t simulate world space transformations we should still get a convincing effect.
  • We didn’t want to incur the additional draw call cost of populating the velocity buffer.
  • We didn’t want to incur the additional bandwidth overhead of populating the velocity buffer.
  • We didn’t want to consume the additional memory required to store the velocity buffer.

Code

We start by rendering our scene as we usually would. As a post processing step we then read the depth of each pixel in the scene in our shader:
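
The lookup itself is a one-liner, assuming the camera has been told to render a depth texture (Unity exposes it as _CameraDepthTexture):

    sampler2D_float _CameraDepthTexture;

    // Raw, non-linear depth for this pixel, as written by the depth pass.
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);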

Then we will reconstruct the screen space position from the depth:
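
Something along these lines; note that depth range conventions vary per platform (OpenGL vs Direct3D, reversed-Z), so treat this as the general idea rather than the exact shipped code:

    // Rebuild a clip space position from the pixel's UV and depth.
    // UVs are in [0, 1], clip space xy is in [-1, 1].
    float4 screenPos = float4(i.uv * 2.0 - 1.0, depth, 1.0);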

In C# we pass a transformation matrix into our shader. This matrix will transform the current screen space position as follows:

  1. Camera Space
  2. World Space
  3. Previous frame’s Camera Space
  4. Previous frame’s Screen Space

This is all done in a simple multiplication:
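
Roughly like so, with _CurrentToPrevious being our own name for the matrix set from C#:

    float4x4 _CurrentToPrevious; // folds all four steps into one matrix

    float4 prevPos = mul(_CurrentToPrevious, screenPos);
    prevPos.xy /= prevPos.w; // perspective divide back to [-1, 1]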

To calculate this transformation matrix we do the following in C#:
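
A sketch of that construction (field and method names are ours, and GL.GetGPUProjectionMatrix is used so the matrix matches what the GPU actually sees):

    Matrix4x4 _previousViewProj = Matrix4x4.identity;

    // Called once per frame before the blur pass, e.g. from OnRenderImage.
    void UpdateBlurMatrix(Camera cam, Material blurMaterial)
    {
        Matrix4x4 view = cam.worldToCameraMatrix;
        Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
        Matrix4x4 viewProj = proj * view;

        // inverse(viewProj): current screen space -> world space.
        // _previousViewProj: world space -> previous frame's screen space.
        blurMaterial.SetMatrix("_CurrentToPrevious", _previousViewProj * viewProj.inverse);

        // Store for next frame.
        _previousViewProj = viewProj;
    }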

We can now calculate the direction and distance between the two screen space vectors. We then use the distance as a scale and sample the render target along the direction vector.
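
In shader terms that ends up looking something like this; the tap count and _BlurScale are illustrative tuning values, not our shipped numbers:

    #define SAMPLE_COUNT 8
    float _BlurScale;

    // Direction and distance between this frame and the last, in UV space.
    float2 previousUV = prevPos.xy * 0.5 + 0.5;
    float2 velocity   = (i.uv - previousUV) * _BlurScale;

    // Average a handful of taps along the velocity vector.
    half4 colour = 0;
    for (int t = 0; t < SAMPLE_COUNT; t++)
    {
        float2 offset = velocity * (t / (float)(SAMPLE_COUNT - 1) - 0.5);
        colour += tex2D(_MainTex, i.uv + offset);
    }
    return colour / SAMPLE_COUNT;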

Controlling the Motion Blur

Once we got all this on-screen, we quickly decided that there was just too much blurring going on. We wanted most of the scene to be blurred, but the artists wanted vehicles and drivers to be crisp. In order to achieve this, with as little pipeline impact as possible, we decided to use the alpha channel to mask out areas of the scene that we didn’t want to blur. We then multiplied this mask by the blur vector, effectively making the blur vector [0, 0] in masked areas.
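
In the shader this is just a multiply before the sampling loop, assuming the scene shaders write the mask into the alpha channel:

    // Alpha of the scene render target: 1 = blur this pixel, 0 = keep crisp.
    half mask = tex2D(_MainTex, i.uv).a;
    velocity *= mask; // masked pixels get a [0, 0] blur vector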

To add to this, we also found that objects in the distance shouldn’t blur as much as those in the foreground. To achieve this we simply scaled the blur vector by the linear eye (view space) depth, calculated from the depth buffer. LinearEyeDepth is a helper function inside the Unity cginc headers that does this conversion for us.
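
The exact falloff curve is a tuning choice; one way to express it, with a made-up _MaxBlurDepth parameter, is:

    float _MaxBlurDepth; // view space distance at which blur fades to zero

    // LinearEyeDepth (UnityCG.cginc) converts the raw depth sample into
    // view space units; fade the blur out towards _MaxBlurDepth.
    float eyeDepth = LinearEyeDepth(depth);
    velocity *= saturate(1.0 - eyeDepth / _MaxBlurDepth);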

Conclusion

Out of the box, Unity supports motion blur by generating a velocity buffer for you, but for our requirements this was overkill. We always need to keep in mind that we are a mobile studio, so we need to take performance into account every step of the way. The method we implemented has its tradeoffs; we had to add distance-based scaling to prevent objects in the distance blurring too much. However, it gave us a convincing effect, because our camera is constantly moving. If you have any questions or feedback, feel free to drop me a message on Twitter or leave a comment below.
