MountainDew x TitanFall Technical Review

Morgan Villedieu Rota
11 min read · Nov 17, 2016

--

Mountain Dew and Doritos partnered with TitanFall 2, one of the most awaited games of 2016, and asked Firstborn to create a digital experience to drive sales and reward redemption. We developed a site that lets users redeem codes found on specially-marked packs of DEW and Doritos products for exclusive in-game rewards.

We partnered with EA to create an infrastructure that seamlessly integrates user flow, code validation and rewards implementation into the existing game API. We also created a custom backend system that allows DEW and Doritos’ global teams to retrieve data points and codes.

The website used the DOM rendered within WebGL, which gave us the power of WebGL without having to worry about a separate fallback system. It also employed fragment shaders within that setup, which gave us full control of the user’s GPU. The GPU’s extremely powerful and fast parallel architecture let us add many visual effects to the site, all rendered in real time.

Smoke, lightning, noise, glitches, graphic bending and displacement all worked together to create the illusion of 3D effects without any geometry other than pixel evaluation. Because the effects were procedurally generated, they were able to react in real-time when moused over/clicked/keyed — an impossible task for pre-rendered or video-based assets. The site loads quickly because everything was generated through code with a small amount of 2D assets — no video or 3D files that would normally slow down loading time and functionality.

What follows is a closer look at the technical side of the project: how we made it and how it works.

Instantaneous loading

DOM -> WebGL Texture.

Rendering DOM elements inside WebGL:

When we started thinking about the concept of the site, we knew we couldn’t go for a WebGL-heavy website. Even though WebGL is widely supported, we didn’t want to exclude people without it, or those on a slow connection or a slow computer.

That’s why we came up with the idea of rendering the DOM elements into a texture: it lets us push them through the browser’s graphics acceleration and create unique visuals with fragment shaders, even on the DOM elements themselves.

Live edit the DOM to see the influence on the WebGL

Why we didn’t use HTML2Canvas:

When we were researching how to get the DOM rendered in WebGL, the first thing we did was a Google search, which pointed us to a neat little project called HTML-GL. After further investigation, it turned out to rely heavily on both Pixi and HTML2Canvas. The actual magic was happening in HTML2Canvas, a tool that rasterizes the DOM so it can be rendered in a canvas. It’s a really impressive tool that basically loops through the DOM and redraws each element with the Canvas Drawing API. Unfortunately, because HTML2Canvas does not render actual CSS, it does not support all CSS properties; it can only rasterize the properties it was built to support. In our case, since we wanted an exact replica of our DOM, that was a deal breaker. That’s when we decided to create our own custom solution.

Our Final Approach:

https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Drawing_DOM_objects_into_a_canvas

First we created an SVG with a foreignObject containing our complete HTML and CSS. We then converted that SVG to a base64-encoded image, which could be injected into a texture we were able to use inside WebGL. Every time we resized or modified the DOM, the base64 image was regenerated and our WebGL texture automatically updated. And since the newly generated SVG used the same styles and media queries, it ended up being an exact duplicate of our DOM, which gave us the freedom to apply effects to what the user sees as a regular webpage. Here are the steps broken out:

1 - We create an SVG with a foreignObject containing our markup and styles.

Example of an SVG with a foreignObject containing markup and styles:

var data = '<svg xmlns="http://www.w3.org/2000/svg" width="500" height="200">' +
'<rect x="0" y="0" width="500" height="200" fill="orange"/>' +
'<foreignObject x="0" y="0" width="500" height="200">' +
'<div xmlns="http://www.w3.org/1999/xhtml">' +
'<style>' +
'.wrapper { '+
'display: table;' +
'font-size: 60px;' +
'width: 500px;' +
'height: 200px;' +
'}' +
'p {' +
'display: table-cell;' +
'text-align: center;' +
'vertical-align: middle;' +
'}' +
'</style>' +
'<div class="wrapper">' +
'<p id="text">Your words here</p>' +
'</div>' +
'</div>' +
'</foreignObject>' +
'</svg>';

2 - Convert the SVG data to a Blob:

var svgBlob = new Blob([data], {type: 'image/svg+xml'});

3 - Convert our Blob to a base64 data URL:

var reader = new FileReader();
reader.onload = function () { var dataBase64 = reader.result; }; // readAsDataURL is asynchronous, the result is only available once the read completes
reader.readAsDataURL(svgBlob);

4 - Using this base64 data URL we generate an image:

var img = new Image();
img.src = dataBase64; // wait for img.onload before using the image

5 - Apply the image to a THREE texture:

Simply create a texture using the generated image.

var myTexture = new THREE.Texture(img);
myTexture.needsUpdate = true; // re-upload the pixels whenever the image changes

The only semi-tricky part here is creating the image out of an SVG, but all you need to do is create a string containing the SVG data and construct a Blob with the following parts.

  1. The MIME media type of the Blob should be “image/svg+xml”.
  2. The <svg> element.
  3. Inside that, the <foreignObject> element.
  4. The (well-formed) HTML itself, nested inside the <foreignObject>.

By using a data URL as described above, we can inline our HTML instead of having to load it from an external source.
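Tying the steps together: both the FileReader read and the image load are asynchronous, so the texture can only be (re)built inside their callbacks. Here is a minimal sketch of the full pipeline (the helper name and variables are ours, not the production code):

var domTexture; // the latest snapshot of the DOM, ready to be used as a uniform

function domToTexture(svgString, onReady) {
  // step 2: SVG string -> Blob
  var svgBlob = new Blob([svgString], { type: 'image/svg+xml' });

  // step 3: Blob -> base64 data URL (asynchronous)
  var reader = new FileReader();
  reader.onload = function () {
    // step 4: data URL -> Image (also asynchronous)
    var img = new Image();
    img.onload = function () {
      // step 5: Image -> THREE texture
      var texture = new THREE.Texture(img);
      texture.needsUpdate = true; // upload the pixels to the GPU
      onReady(texture);
    };
    img.src = reader.result;
  };
  reader.readAsDataURL(svgBlob);
}

// Rebuild the texture whenever the DOM snapshot ("data" from step 1) changes.
domToTexture(data, function (texture) {
  domTexture = texture;
});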

Issues:

No external sources are allowed inside the foreignObject, so all CSS, images and fonts need to be inlined. For CSS it’s as easy as creating a <style> tag; images and fonts need to be base64 encoded. This can obviously increase your page weight a lot, so try to limit the use of imagery and custom fonts.
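For images, one way to get a data URL to inline is to load the asset once and convert it with a FileReader. The helper below is only an illustration of that idea (the asset path is hypothetical), not the project’s actual asset pipeline:

// Illustration only: convert an asset to a base64 data URL at runtime so it can be
// inlined in the markup or CSS that goes into the foreignObject.
function toDataURL(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'blob';
  xhr.onload = function () {
    var reader = new FileReader();
    reader.onload = function () {
      callback(reader.result); // e.g. "data:image/png;base64,...."
    };
    reader.readAsDataURL(xhr.response);
  };
  xhr.send();
}

// 'logo.png' is a hypothetical asset path.
toDataURL('logo.png', function (dataUrl) {
  // use it as '<img src="' + dataUrl + '"/>' or 'background-image: url(' + dataUrl + ')'
});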

Another issue is that the content of a foreignObject has to be well-formed XML, so certain HTML elements and characters are not supported. For instance, an unclosed <br> or <input>, or a stray “&” entity, will break the SVG Blob creation. To work around this we wrote a parser that converts all unsupported elements into divs with custom CSS classes to keep the proper styling.
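Below is a heavily simplified sketch of that kind of parser: it swaps the problem elements for styled divs before serializing and fixes the most common entity issue. The class names and substitutions are ours; the production parser handled more cases.

// Replace every element matching `selector` inside `scope` with a styled <div>.
function replaceWithDiv(scope, selector, className, copyValue) {
  var nodes = scope.querySelectorAll(selector);
  for (var i = 0; i < nodes.length; i++) {
    var div = document.createElement('div');
    div.className = className;
    if (copyValue) div.textContent = nodes[i].value || '';
    nodes[i].parentNode.replaceChild(div, nodes[i]);
  }
}

function sanitizeForForeignObject(root) {
  var clone = root.cloneNode(true);
  replaceWithDiv(clone, 'br', 'fo-br', false);      // styled with display: block in the CSS
  replaceWithDiv(clone, 'input', 'fo-input', true); // carries the input's current value as text
  // &nbsp; is not one of XML's predefined entities, so swap it for its numeric form
  return clone.outerHTML.replace(/&nbsp;/g, '&#160;');
}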

Finally, browser support was a big issue. At the time of writing only Chrome and Firefox fully supported this technique. Internet Explorer doesn’t support the foreignObject, and Safari threw a security error when we tried to draw the SVG Blob into the canvas. Luckily our fallback was automatically in place: the actual HTML and CSS still render normally whenever the effect is not applied.
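A rough sketch of how such a gate can look (our illustration, not the production check): make sure a WebGL context can be created, then try to draw a tiny SVG containing a foreignObject into a 2D canvas and read it back; browsers that taint the canvas will throw at getImageData.

function supportsDomTexture(callback) {
  // 1. can we get a WebGL context at all?
  var gl = null;
  try {
    var test = document.createElement('canvas');
    gl = test.getContext('webgl') || test.getContext('experimental-webgl');
  } catch (e) { /* ignore */ }
  if (!gl) { callback(false); return; }

  // 2. can we rasterize an SVG with a foreignObject without tainting the canvas?
  var svg =
    '<svg xmlns="http://www.w3.org/2000/svg" width="2" height="2">' +
    '<foreignObject width="2" height="2">' +
    '<div xmlns="http://www.w3.org/1999/xhtml">x</div>' +
    '</foreignObject></svg>';

  var img = new Image();
  img.onload = function () {
    try {
      var canvas = document.createElement('canvas');
      canvas.width = canvas.height = 2;
      var ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      ctx.getImageData(0, 0, 1, 1); // throws a security error if the canvas got tainted
      callback(true);
    } catch (e) {
      callback(false);
    }
  };
  img.onerror = function () { callback(false); };
  img.src = 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg);
}

supportsDomTexture(function (ok) {
  if (ok) {
    // initialize the WebGL layer; otherwise the plain HTML/CSS simply keeps rendering
  }
});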

DOM elements are part of the WebGL during the transitions

About the Graphics:

Why WebGL

The creative team and the developers at Firstborn are always trying to push boundaries, which is why this year we decided to create a unique interactive experience for the end user, despite having only a few image assets from the game makers to work with.

Browsers keep pushing the boundary between web technologies and graphics engineering: WebGL 1.0 is supported almost everywhere, with WebGL 2.0 on its way. So we decided to build a full experiment on top of the browser’s graphics acceleration. Using the method above, the fallback version was also taken care of automatically and easily.

The library:

Pixi seemed like the perfect library, since it’s designed to work with 2D assets using graphics acceleration. But when we layered a bunch of custom filters on our sprites, we ran into some issues. Our UVs got messed up, which left us trying to resize all our imagery and assets within the shaders. After a while we were passing so many custom offsets and positioning variables to our vertex shaders that we started to overwrite Pixi’s native positioning and resizing. That was the point where we realized Pixi might not be the right choice for us anymore: Pixi’s strength lies in moving and resizing multiple (nested) sprites with a couple of simple effects.

We started to think about creating our own WebGL sprites and position manager. What if we made simple plane meshes and applied our shader programs as materials to those? We thought that by using an orthographic camera we could still easily match our window and mesh size. Simple, right?

That is when we knew three.js would be a better fit for us. With all of its built-in positioning tools, vectors, etc., we were able to quickly rebuild our setup. Once we had it all up and running, we only had to focus on the core of our shaders.

Why real-time computing vs. pre-rendered assets.

For this campaign a fast load time was the top priority, since past sites have seen millions of code redemptions. Usually when you try to reduce load time, the user experience suffers. That’s why we wanted to keep load times to a minimum while still giving users an attractive and interesting experience. We wanted to avoid preloaders, so complex 3D models were out of the question.

Full-screen video was also dismissed pretty early on. Besides the big file sizes, the glitchy art direction made us want to avoid obvious loops in the interface. If we were going to have glitches, they needed to feel real and random.

We wanted to create an environment where the user has subtle interactions (mouse movement, clicks or key presses) with what seems to be a static background. To achieve this we applied GPU effects to static 2D background imagery. These effects are triggered randomly or by user input (mouse, keyboard, etc.). With this technique we only had to load a couple of images and some additional JavaScript for the effects. The result is that the site loads insanely fast and the environment looks and feels like an interactive video. Nothing ever feels like it’s looping, since everything is actually processed and rendered in real time.

How we created our 2D screen-based scene using THREE.

Using an orthographic camera we got a perfect 2D projection, without the image suffering any deformation from perspective applied to the camera. The orthographic camera guarantees that the four borders of the window keep their ratio, and it lets us match the size of each mesh to the size of the browser window.

Here is the difference between an orthographic and a perspective camera with FOV.

With our first ‘problem’ solved, we just needed to take care of positioning within the screen so everything fits the window perfectly, and also lines up with the DOM elements positioned using CSS. That is what lets us transition seamlessly between the two (when we grab the DOM and inject it into a texture using the method above).
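Here is a minimal sketch of that screen-space setup (variable names are ours): one world unit equals one CSS pixel, so meshes can be positioned with the same values the CSS layout uses.

var width = window.innerWidth;
var height = window.innerHeight;

var scene = new THREE.Scene();

// Orthographic frustum centered on the screen, in pixel units: no perspective deformation.
var camera = new THREE.OrthographicCamera(
  -width / 2, width / 2,   // left, right
  height / 2, -height / 2, // top, bottom
  -1000, 1000              // near, far
);

var renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);

// Keep the projection in sync with the window, exactly like the CSS layout.
window.addEventListener('resize', function () {
  width = window.innerWidth;
  height = window.innerHeight;
  camera.left = -width / 2;
  camera.right = width / 2;
  camera.top = height / 2;
  camera.bottom = -height / 2;
  camera.updateProjectionMatrix();
  renderer.setSize(width, height);
});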

Methodology:

After creating this normalized ‘2D world’, we created a few plane geometries (basic quads of two triangles) and used them as sprites. To each of these sprites we applied a custom shader material (shader program = vertex shader + fragment shader): a basic quad vertex shader, and a fragment shader that takes care of all the effects we want to apply by evaluating every pixel value. This lets us fully use the GPU’s parallel processing power to generate proper real-time effects.
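Continuing the sketch above, a ‘sprite’ is just a plane sized in the same pixel units, carrying a ShaderMaterial. The pass-through vertex shader and the trivial fragment shader below are placeholders for the effect code described in the next sections.

var vertexShader = [
  'varying vec2 vUv;',
  'void main() {',
  '  vUv = uv;',
  '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
  '}'
].join('\n');

var fragmentShader = [
  'uniform sampler2D uTexture;',
  'uniform float uTime;',
  'varying vec2 vUv;',
  'void main() {',
  '  // the per-pixel work (smoke, glitches, bending, ...) happens here',
  '  gl_FragColor = texture2D(uTexture, vUv);',
  '}'
].join('\n');

var material = new THREE.ShaderMaterial({
  uniforms: {
    uTexture: { value: domTexture }, // e.g. the DOM texture built earlier
    uTime: { value: 0 }
  },
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  transparent: true
});

// A full-screen quad, in the same pixel units as the orthographic camera.
var sprite = new THREE.Mesh(new THREE.PlaneGeometry(width, height), material);
scene.add(sprite);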

WebGL pipeline is based on the OpenGL ES 2.0 specification

The vertex shader operates on each vertex. The fragment shader takes the interpolated output of the vertex shader and computes the color (and depth value) of the pixel at each position within the render target.

On top of that, we applied some global effects to the frame buffer using a few post-processing passes.
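We won’t reproduce the custom passes here, but as one concrete way to chain passes in three.js, the example EffectComposer can be used, with the stock RGBShiftShader standing in for the custom RGB shift pass listed below:

// Requires the EffectComposer, RenderPass, ShaderPass and RGBShiftShader scripts
// from the three.js examples to be included on the page.
var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

var rgbShift = new THREE.ShaderPass(THREE.RGBShiftShader);
rgbShift.uniforms.amount.value = 0.0015; // strength of the channel separation
rgbShift.renderToScreen = true;          // the last pass draws to the screen
composer.addPass(rgbShift);

// In the render loop, replace renderer.render(scene, camera) with:
composer.render();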

Shader effects:

Mesh shader material:

Below is a quick list of some of the effects used within our project: bending effect, wavy effect, generated smoke (fractal noise), random glitches, light flicker, 3D anaglyph effect, and random squares (which we also used for the transition between the locked and unlocked versions of the image).

Post-processing passes:

Generative grid, text glitch, random part displacement, RGB shift, global noise, global glow, fault effect, old-screen dithering…

Composition example

As explained before, all the effects are dynamically generated in the fragment shader. That means that, using simple textures as a base, we can generate ‘layers’ of effects that compose the end result.

Process example for effects compositing

In the above gif you can see some of the effects being composited. Two textures are passed in as uniforms: first the mask texture (a painted depth map for the image’s anaglyph 3D effect on the red channel, lights on the green channel), which we use to define where to composite and apply effects, and second the actual background image.
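To make that concrete, here is an illustrative fragment shader (not the production one; uniform names and constants are ours) where the red channel of the mask offsets the red sample for the anaglyph depth effect and the green channel gates a light flicker:

var compositeFragment = [
  'uniform sampler2D uBackground; // the actual background image',
  'uniform sampler2D uMask;       // r = painted depth map, g = light areas',
  'uniform float uTime;',
  'varying vec2 vUv;',
  'void main() {',
  '  vec4 mask = texture2D(uMask, vUv);',
  '  vec2 shift = vec2(mask.r * 0.01, 0.0);            // bigger offset where "closer"',
  '  float r = texture2D(uBackground, vUv + shift).r;  // only the red channel is shifted',
  '  vec4 base = texture2D(uBackground, vUv);',
  '  float flicker = 0.5 + 0.5 * sin(uTime * 30.0 + vUv.y * 10.0);',
  '  vec3 color = vec3(r, base.g, base.b) + mask.g * flicker * 0.3;',
  '  gl_FragColor = vec4(color, 1.0);',
  '}'
].join('\n');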

Miscellaneous notes about fragment shaders:

We could write about each effect separately, since they are all dynamically generated, but that would take too long and could easily be the subject of another article. We used a lot of tricks to maintain a steady frame rate. Keep in mind that the vertex shader runs on every vertex, while the fragment shader runs on every pixel. You need to be careful with what you are doing, or you can easily drop the frame rate and make the website laggy and unpleasant to use.

Here is a quick overview of some of the tricks we used.

  1. Avoid too many texture reads in the blur for glow effects, and use fast noise as much as possible.
  2. Combine effects in the same shader. With post-processing, each “effect” normally gets its own shader and render pass, which quickly becomes overkill. We merged as many of our effects as possible into a single render pass to avoid too many draw calls.
  3. Lower the resolution of your fragment shader when it isn’t affecting the visual quality of your output (see the sketch below).
  4. Sometimes you can easily lower the precision of your computations without degrading the final result.
Generated depth bending effect using FS only
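
For tip 3, two simple levers in three.js are capping the device pixel ratio and rendering expensive passes into a smaller offscreen target. These are general techniques, not the exact production values:

// Never render more physical pixels than the effect actually needs.
renderer.setPixelRatio(Math.min(window.devicePixelRatio, 1.5));

// Render a heavy pass (smoke, glow blur, ...) into a half-resolution target,
// then sample halfRes.texture full screen; linear filtering hides most of the loss.
var halfRes = new THREE.WebGLRenderTarget(
  Math.floor(window.innerWidth / 2),
  Math.floor(window.innerHeight / 2)
);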

Conclusion:

With the right knowledge of what is and isn’t possible with WebGL and the DOM, we can all create projects that go beyond the full-screen video or 3D-rendered experience. We’re not saying those are bad; we actually love a well-crafted experience. We’re just saying there’s more to WebGL that can broaden our horizons and open us up to even better experiences. We hope you enjoyed reading this, and don’t hesitate to reach out if you have any questions or suggestions.

Also, thanks to @rickvanmook, @hector_arellano and @maxime_blondeau for helping me write this.
