WebGL Masking & Composition

This month, as part of the wonderful codevember initiative, I’ve been working on a series that combines cinemagraphs with WebGL content. I love cinemagraphs, and using them to contextualize the WebGL animations I enjoy making has yielded some interesting results.

The pieces themselves have two main components: a WebGL particle animation system powered by THREE.BAS, my THREE.js extension for shader-based animation, and a masking/compositing approach. The repo for THREE.BAS has some documentation, examples, and a brief explanation of the thinking behind it. I may elaborate on it further in a future post, but today I will focus on masking and compositing.

Alpha Masking

An alpha mask is a grayscale image that is used to mask out content with degrees of transparency. Below, you can see the original image and the alpha mask that I used for the WebGL content.

When the mask is applied, pixels under the white area are rendered fully opaque, and pixels under the black area are rendered fully transparent. The grays in between create an alpha gradient for smoother blending.

You can see the final composition below. I simply place a WebGL canvas over the cinemagraph, and apply the alpha mask to it.

There are a number of ways we can use alpha masks on the web, both inside WebGL/THREE.js and through CSS. Because support for CSS image masks is still iffy, I decided to add the alpha mask as a post processing step in THREE.js.

Digging into how shaders/post processing works goes well beyond the scope of this post, but this tutorial gives a good introduction to the concepts behind it.

This is the code for the alpha mask post processing step:

var maskPass = new THREE.ShaderPass({
  uniforms: {
    // the underlying image
    "tDiffuse": { value: null },
    // the alpha mask
    "tMask": { value: null }
  },
  vertexShader: [
    "varying vec2 vUv;",

    "void main() {",
    "  vUv = uv;",
    "  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);",
    "}"
  ].join("\n"),
  fragmentShader: [
    "uniform sampler2D tDiffuse;",
    "uniform sampler2D tMask;",
    "varying vec2 vUv;",

    "void main() {",
    // get the current pixel color
    "  vec4 texel = texture2D(tDiffuse, vUv);",
    // get alpha based on the color of the mask image
    "  float alpha = texture2D(tMask, vUv).r;",
    // apply the alpha to the current pixel
    "  gl_FragColor = texel * alpha;",
    "}"
  ].join("\n")
});

It’s based on the THREE.CopyShader, which simply passes through the contents of the screen to a subsequent post processing step. Instead of passing the pixel values directly, the mask pass reads an additional texture (the alpha mask) and applies an alpha based on the red channel of the mask at the same UV coordinates.

// get alpha based on the color of the mask image
"float alpha = texture2D(tMask, vUv).r;",

This works because the mask image is grayscale, so the RGB values are all equal. In shaders, color channels are interpreted as numbers between 0.0 and 1.0. Multiplying a pixel color by 0.0 will make it fully transparent, while multiplying it by 1.0 will not change it at all.
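To actually run the pass, it needs to be added to an EffectComposer chain, with a RenderPass feeding it the rendered scene (the composer fills in tDiffuse automatically). A minimal wiring sketch, where the mask file name is just a placeholder:

```javascript
// set up a post processing chain: render the scene, then apply the mask
var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

// load the grayscale mask image into the pass defined above
// ("mask.png" is a placeholder, not an actual asset from this project)
maskPass.uniforms.tMask.value = new THREE.TextureLoader().load("mask.png");
maskPass.renderToScreen = true;
composer.addPass(maskPass);

// in the animation loop, render through the composer instead of the renderer
function animate() {
  requestAnimationFrame(animate);
  composer.render();
}
animate();
```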

Making a scene

The alpha mask approach creates some interesting possibilities, but it has limitations if you want to treat the cinemagraph as an actual three-dimensional space, like below.

It should be possible to have the 3D objects pass both behind and in front of the statue using masks, but this makes things needlessly complicated, and may involve rendering the scene several times. Instead, I simply added a 2D cut-out of the statue in the image to the THREE.js scene, and made the particles swirl around it.

Below is the same composition without the underlying image, and the mouse camera controls enabled.

The only tricky part here is determining where to place the 2D cut-out image in the 3D scene, so that it has the same dimensions as the image in the DOM. Basically, we need to calculate the size of a plane at a given distance from the camera, such that the plane is exactly the same size as our viewport (which is the same size as the image behind it).

Fortunately, this is pretty easy as long as the camera is looking straight at the scene:

var cameraZ = camera.position.z;
var planeZ = 5;
var distance = cameraZ - planeZ;
var aspect = viewWidth / viewHeight;
var vFov = camera.fov * Math.PI / 180;
var planeHeightAtDistance = 2 * Math.tan(vFov / 2) * distance;
var planeWidthAtDistance = planeHeightAtDistance * aspect;

This formula uses the camera field of view (fov) and the aspect ratio of the viewport (width / height) to calculate the desired size for the plane on which the cut-out image is rendered as a texture.
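Wrapped up as a small reusable helper (the function name is my own, not part of THREE.js), the calculation looks like this. With a 90 degree field of view, tan(45°) is 1, so the visible height at any distance is simply twice that distance, which makes the math easy to sanity-check:

```javascript
// compute the size a plane at planeZ must have to exactly fill the
// viewport of a perspective camera looking straight down the z axis
// (helper name is hypothetical, not a THREE.js API)
function planeFitSize(fovDegrees, cameraZ, planeZ, aspect) {
  var distance = cameraZ - planeZ;
  var vFov = fovDegrees * Math.PI / 180;
  var height = 2 * Math.tan(vFov / 2) * distance;
  var width = height * aspect;
  return { width: width, height: height };
}

// 90 degree fov, camera at z=15, plane at z=5: distance is 10,
// so the plane must be 20 units tall to fill the viewport
var size = planeFitSize(90, 15, 5, 16 / 9);
console.log(size.height); // 20
```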

All that’s left to do is make the 3D particles rotate around the same Z coordinate as the plane, and you’ve got yourself a scene.

Depth masking

Further iterating on this, I wanted to apply the same concept, but with an added layer of depth. This resulted in the composition below.

Here the 3D particle swarm starts behind the back building, swirls behind the front building, then moves in front of the back building, before going off screen in front of the front building. This effect is achieved by creating a little mock scene over the original image, matching (somewhat) in perspective and size. Once that is in place, coordinating the motion and layering becomes much easier.

Below is a debug version of the same composition, with the mouse camera controls enabled again.

The two white planes that represent the buildings in the 3D scene are not rendered in the final composition. I could have textured them using the same approach as the Bruce Lee statue, but this would have been harder because of the camera position.

Since the actual buildings are already in the image, the only things we care about are their dimensions and position in the 3D scene. Because of this, we can discard the color buffer after the buildings are rendered, as long as we keep the depth buffer intact.

// clear color and depth
renderer.clear(true, true);
// render the two buildings
renderer.render(buildingScene, camera);
// clear color (but not depth)
renderer.clear(true, false);
// render the particles
renderer.render(particleScene, camera);

With this approach the planes that represent the buildings essentially become a depth mask for the particles; invisible walls for them to move behind.
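For the manual clear calls above to have any effect, the renderer's automatic clearing has to be disabled, and the buildings and particles have to live in separate scenes so they can be rendered independently. A minimal setup sketch:

```javascript
// prevent renderer.render() from clearing the buffers on its own, so the
// manual clear calls in the render loop decide what is kept between passes
renderer.autoClear = false;

// keep the depth-mask geometry and the particles in separate scenes,
// so each can be rendered (and cleared) in its own pass
var buildingScene = new THREE.Scene();
var particleScene = new THREE.Scene();
```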

What’s next

I’m pretty excited to have stumbled upon this format. Codevember is far from over, and I intend to make at least a couple more contributions. One area I want to explore is timing the WebGL content to the cinemagraph. Unfortunately, this is pretty much impossible with GIFs, but I can use a video or a PNG sequence for both the cinemagraph and the mask to achieve some cool effects.