The website contains some nice parts, and I thought it would be interesting to share some tech insights.
DOM to WebGL
I don’t know if I would define it as a trend, but more and more websites are mixing WebGL with classic DOM/CSS to gain more control over the graphics pipeline. This allows more freedom and creative opportunities that would otherwise be impossible, or would result in a less performant experience.
We used threejs for the WebGL part, although we’re mainly using it as an abstraction layer. We also used other utilities like bidello. You can check the GitHub repo github.com/luruke/antipasto for more information.
The approach used is fairly simple:
- Code the page as you normally would (progressive enhancement FTW)
- Add a full-screen fixed canvas as background
- Track the position of the DOM elements you want to port into the WebGL world
- Init the meshes/shaders and, once they’re ready, hide the original DOM elements
- When scrolling, keep the position of DOM elements and WebGL in sync
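The position-tracking step in the list above boils down to a bit of coordinate math. Here is a minimal sketch, assuming the camera is configured so that one world unit equals one CSS pixel (a common simplification; the helper name is illustrative, not from the actual codebase):

```javascript
// Map a DOM rect (CSS pixels, viewport-relative) to a WebGL position for a
// camera where 1 world unit == 1 CSS pixel and (0, 0) is the centre of the
// screen, with y pointing up as in threejs.
function rectToWorld(rect, viewportWidth, viewportHeight) {
  return {
    x: rect.left + rect.width / 2 - viewportWidth / 2,
    y: -(rect.top + rect.height / 2) + viewportHeight / 2,
  };
}

// getBoundingClientRect() already returns viewport-relative coordinates, so
// re-running rectToWorld each frame keeps DOM and GL in sync while scrolling.
const pos = rectToWorld({ left: 100, top: 50, width: 200, height: 100 }, 1000, 800);
// pos.x === -300, pos.y === 300
```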
Here are some simplified code snippets:
We then create a kapla component. Kapla is a little library we use internally to bridge the gap between DOM and JS; in this case we’re mainly using it for its component lifecycle. We then use a little class to register/unregister the dom->gl bindings; note how the data-type attribute determines which class gets loaded.
We also have our Button class, which extends dom3D, which in turn extends a threejs class.
(Notice how the material and the geometry live outside the instance, so we can leverage some optimisations and reduce WebGL state changes.)
Finally, there’s the dom3D class, the parent class used by all the elements. The component() mixin comes from bidello; it basically enhances the class, automatically calling methods such as onRaf when necessary.
The main magic happens inside the updateSize, updatePosition and onRaf functions. Those methods make sure the WebGL element has exactly the same size and position as the DOM element.
calculateUnitSize on the PerspectiveCamera computes the width and height (in world units) that an element at vec3(0, 0, 0) needs in order to completely fill the camera.
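The math behind a helper like calculateUnitSize is standard perspective-camera trigonometry; a sketch of the idea (function name and shape are assumptions, not the actual API):

```javascript
// Visible size (in world units) of the view frustum at `distance` from a
// PerspectiveCamera with vertical field of view `fov` in degrees:
//   height = 2 * tan(fov / 2) * distance,  width = height * aspect
function calculateUnitSize(fov, aspect, distance) {
  const height = 2 * Math.tan((fov * Math.PI) / 360) * distance;
  return { width: height * aspect, height };
}

// With a 90° fov at distance 10, visible height is 2 * tan(45°) * 10 = 20.
const size = calculateUnitSize(90, 16 / 9, 10);
// Dividing a DOM element's pixel size by the viewport size and multiplying
// by these unit sizes gives the mesh scale that fills the same screen area.
```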
We also built some utilities for loading and caching textures, plus things like a background-size: cover implementation in GLSL.
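The background-size: cover math is easy to mirror on the CPU. A sketch of the UV remap (the GLSL version would be the same arithmetic on vec2s; the function name is illustrative):

```javascript
// Remap a UV coordinate so a texture of aspect `imageAspect` covers a plane
// of aspect `planeAspect`, cropping the overflow — CSS background-size: cover
// behaviour. Aspect = width / height.
function coverUv(u, v, planeAspect, imageAspect) {
  let su = 1, sv = 1;
  if (planeAspect > imageAspect) {
    sv = imageAspect / planeAspect; // plane is wider: crop top/bottom
  } else {
    su = planeAspect / imageAspect; // plane is taller: crop left/right
  }
  return [(u - 0.5) * su + 0.5, (v - 0.5) * sv + 0.5];
}

// Square image on a 2:1 plane: the full u range is kept, while v samples
// only the central half of the texture.
coverUv(0, 0, 2, 1); // → [0, 0.25]
```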
The overall technique seems to work quite well; the built-in threejs frustum culling does its job, and performance is OK.
One problem though: if you hit frame drops while scrolling, you’ll notice the DOM elements keep scrolling smoothly while the WebGL ones don’t. That’s because, on a frame drop, the browser tends to prioritise its own UI over JS execution. We “solved” this with a “virtual scroll”, so we can make sure the two worlds always stay in sync.
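A sketch of the virtual-scroll idea: intercept wheel input, keep your own scroll value, and ease toward it each frame so both the DOM transform and the WebGL scene read the same number (the easing factor here is an assumption):

```javascript
// Instead of reading window.scrollY (which the browser keeps updating even
// when our rAF loop drops frames), we own the scroll value ourselves.
let target = 0;  // updated from wheel / touch events
let current = 0; // eased value, applied to BOTH dom and webgl

function onWheel(deltaY) {
  target += deltaY;
}

function onFrame() {
  current += (target - current) * 0.1; // simple easing
  // document.body.style.transform = `translateY(${-current}px)`;
  // scene.position.y = current; // same value -> always in sync
}

// Simulate: one wheel event, then enough frames to converge.
onWheel(500);
for (let i = 0; i < 200; i++) onFrame();
// current has converged to ~500
```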
Even though this technique has its own limitations and accessibility issues, it opens up a whole universe of creative possibilities. We might even build a more robust and reusable solution in the future.
How cool would it be if browsers exposed more low-level APIs over their internal graphics pipeline?
The whole brand identity of the event features a lot of wave/glitch/datamoshing effects. We wanted to animate those lines as a background instead of just using a static image.
Here’s the solution we’ve ended up using:
In Photoshop, Filter -> Stylize -> Wind -> Blast produces a very similar effect, so starting from a linear UV sampling we can apply it and end up with this texture:
Then we use this UV texture to look up a static texture.
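The indirection is easiest to see on the CPU: the UV texture stores *where* to sample, so distorting the UV texture distorts the final image. A 1D sketch of the idea (names are illustrative):

```javascript
// `uvMap` plays the role of the UV texture: each entry says which texel of
// `source` to read. A linear map reproduces the source unchanged; a
// distorted map (a Wind/Blast-style smear) drags pixels sideways.
function lookup(source, uvMap) {
  return uvMap.map((u) => source[Math.round(u * (source.length - 1))]);
}

const source = [10, 20, 30, 40, 50];
const linear = [0, 0.25, 0.5, 0.75, 1]; // identity sampling
const smeared = [0, 0, 0, 0.5, 1];      // first texel dragged to the right

lookup(source, linear);  // → [10, 20, 30, 40, 50]
lookup(source, smeared); // → [10, 10, 10, 30, 50]
```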
To animate it, instead of using this static UV texture, we use a framebuffer object. Since this interactive background can appear in multiple parts of a page, the FBO (frame buffer object) is shared among them.
You can check the source code for the FBO helper here.
And here’s the fragment shader.
The technique is very similar to gpu/pixel sorting.
We create 256 points at random positions and store them in a 256x1 framebuffer, where they are then animated. The rgb values represent the 3D position of each particle and are stored in a floatType texture. The y position (the g channel) is incremented every frame; when it exceeds 5.0 it wraps back to -5.0, and so on.
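The update step can be sketched on the CPU (in the real thing it runs in a fragment shader over the 256x1 float texture; the speed value here is an assumption):

```javascript
// Each particle is an [x, y, z] position — the texture's rgb channels.
// Every frame the y (g channel) is incremented and wrapped into [-5, 5].
function updateParticles(particles, speed) {
  for (const p of particles) {
    p[1] += speed;
    if (p[1] > 5.0) p[1] -= 10.0; // past 5.0 → wrap back to -5.0
  }
}

const particles = [[0, 4.9, 0], [1, -2.0, 3]];
updateParticles(particles, 0.2);
// → y values: ~-4.9 (wrapped around) and ~-1.8
```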
Then we draw the actual points with Points (which is really just a drawType). Instead of drawing directly to the screen, we draw them into an FBO.
We have the points moving, but now we need the trails. We create another FBO that acts as an accumulation buffer: each frame it adds the new frame on top while reducing the opacity of the previous one.
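The trail buffer is a feedback loop: every frame, the previous buffer is composited back with slightly reduced opacity, then the fresh points are drawn on top. A CPU sketch (the decay factor is an assumption):

```javascript
// `trail` persists across frames; `frame` holds only the current points.
// previous * decay + current gives trails that fade out over time.
function accumulate(trail, frame, decay) {
  for (let i = 0; i < trail.length; i++) {
    trail[i] = Math.min(1, trail[i] * decay + frame[i]);
  }
}

const trail = [0, 0, 0, 0];
accumulate(trail, [0, 1, 0, 0], 0.9); // a point lights up pixel 1
accumulate(trail, [0, 0, 1, 0], 0.9); // the point moved to pixel 2
// trail ≈ [0, 0.9, 1, 0] — pixel 1 is fading, pixel 2 is fresh
```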
Finally we can display our FBO. Note how the texture is just 512x512, but since we’re drawing it full screen with LinearFilter, the result still looks good.
Before rendering the WebGL scene on the screen, a postprocessing pass is applied.
Here are the few effects used:
- Glitch effect (initially based on this codepen)
- A curve displacement based on scroll speed
- Chromatic aberration based on mouse trail
- Film grain
The mouse trail is used for the chromatic aberration and displacement in postprocessing, but it also enhances the background waves we saw earlier.
Here’s what the code looks like for the mouse trail:
A circle is drawn at the mouse position, with an intensity based on the mouse speed; the trail then fades out over time.
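A sketch of the trail update: each frame the whole buffer fades, then a radial spot is stamped at the mouse position with a strength derived from the pointer speed (the falloff and fade rates are assumptions):

```javascript
// 1D strip standing in for the trail FBO. Each frame: fade everything,
// then stamp a spot whose strength depends on how fast the mouse moved.
function updateTrail(buffer, mouseX, speed, radius, fade) {
  for (let i = 0; i < buffer.length; i++) {
    buffer[i] *= fade; // the trail fades over time
    const d = Math.abs(i - mouseX);
    if (d < radius) {
      const falloff = 1 - d / radius;       // soft circular edge
      const intensity = Math.min(1, speed); // faster mouse = stronger mark
      buffer[i] = Math.max(buffer[i], intensity * falloff);
    }
  }
}

const buffer = new Array(10).fill(0);
updateTrail(buffer, 5, 1.0, 3, 0.95);
// buffer[5] === 1 (centre), buffer[4] ≈ 0.667 (edge), buffer[8] === 0
```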
Here’s how the FBO is used for the RGB offset:
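In essence it’s three reads of the scene, with the red and blue samples shifted by the trail value. A 1D CPU sketch of what the fragment shader does (the offset scale is an assumption):

```javascript
// Per pixel: read the trail, then sample the scene three times with the
// red and blue channels pushed in opposite directions by the trail amount.
function rgbShift(scene, trail, scale) {
  const clamp = (i) => Math.max(0, Math.min(scene.length - 1, i));
  return scene.map((_, i) => {
    const offset = Math.round(trail[i] * scale);
    return {
      r: scene[clamp(i - offset)],
      g: scene[i],
      b: scene[clamp(i + offset)],
    };
  });
}

const scene = [0, 10, 20, 30, 40];
const trail = [0, 0, 1, 0, 0]; // the trail only covers pixel 2
const out = rgbShift(scene, trail, 1);
// out[2] → { r: 10, g: 20, b: 30 }; pixels with no trail are untouched
```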
For the page transitions we used barba.js, with a transition that combines CSS clipping and the WebGL glitch used in postprocessing.