Post-processing with WebGL
Post-processing consists of adding effects on top of the final rendering: blur, color correction, bloom, and so on.
Traditionally, rendering is drawn directly to the canvas. Post-processing instead lets us capture the rendering first, apply transformations to it, and only then display the transformed result.
Here’s a representation of a traditional scene where rendering is carried out directly in the canvas:
Basic principle
To do this, we create a Frame Buffer Object (FBO): a render target to which we can attach buffers (also called attachments) that hold rendering data, such as textures. In other words, our FBOs render to a non-visible (off-screen) environment.
This is the schematic with a single modification step, which we call a pass. We can have several passes for our final render.
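To make this concrete, here is a minimal raw-WebGL sketch of a single pass. It only illustrates the principle (OGL will handle all of this for us later); width, height, drawScene and drawFullscreenQuad are hypothetical placeholders standing in for your own sizes and drawing code.
// 1. Create a texture that will receive the scene rendering
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// 2. Create the FBO and attach the texture as its color buffer
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
// 3. Render the scene off-screen, into the FBO
drawScene();
// 4. Switch back to the canvas and draw a fullscreen quad
// whose shader samples the texture and applies the effect
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawFullscreenQuad(texture);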
Having several passes allows us to break our post-processing down: for example, one pass for color processing, one pass for ambient occlusion, etc. Using several rendering passes requires two FBOs, as we can’t read from and write to the same texture at the same time. To get around this problem, we use a technique called “ping-pong” (sketched in code after the list below).
- First pass: render into the first FBO (say FBO A). This first pass applies the first post-processing effect to the initially captured image or to the result of a previous rendering chain.
- Swap: the roles of the FBOs are exchanged for the next pass. The FBO that was the render target (FBO A) now becomes the source, and the second FBO (FBO B) becomes the new render target.
- Next pass: apply the next pass using the image stored in FBO A (now the source) as input, and write the result to FBO B. And so on for each new pass.
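Here is a minimal sketch of that ping-pong loop, assuming two FBOs and textures created as in the earlier sketch, a hypothetical passes array of shader programs, and a hypothetical drawPass helper that draws a fullscreen quad with a given shader and input texture:
// Assume the scene has already been rendered into fboA,
// so textureA holds the initial capture.
let read = { fbo: fboA, texture: textureA };
let write = { fbo: fboB, texture: textureB };
for (let i = 0; i < passes.length - 1; i++) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, write.fbo);
  drawPass(passes[i], read.texture); // read from one FBO, write to the other
  [read, write] = [write, read]; // swap roles for the next pass
}
// The last pass renders to the canvas instead of an FBO
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawPass(passes[passes.length - 1], read.texture);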
Creating our project
Moving on to the demonstration: I’m going to use Vite and the OGL library for the code, but it’s perfectly possible to follow along with other libraries such as Three.js or Babylon.js.
npm create vite
>>> choose Vanilla + JavaScript
>>> cd your_project
>>> npm i
>>> npm i ogl
>>> npm run dev
Standard Scene
In this first part, we’ll create a basic 3D scene, containing a single mesh. To simplify reading, all our code will be in a single class.
src/main.js
import {Renderer, Camera, Program, Mesh, Box, Transform} from 'ogl';
import "./style.css";
import baseVertex from "./shaders/base/vertex.glsl?raw";
import baseFragment from "./shaders/base/fragment.glsl?raw";
class GL {
constructor() {
this.createGL();
this.createMesh();
window.addEventListener('resize', this.resize.bind(this), false);
this.resize();
this.update = this.update.bind(this);
requestAnimationFrame(this.update);
}
createGL() {
// RENDERER
this.renderer = new Renderer({dpr: 1, antialias: true});
this.gl = this.renderer.gl;
document.body.appendChild(this.gl.canvas);
this.gl.clearColor(0.0, 0.0, 0.1, 1);
// CAMERA
this.camera = new Camera(this.gl, {fov: 35});
this.camera.position.set(0, 1, 5);
this.camera.lookAt([0, 0, 0]);
// SCENE
this.scene = new Transform();
}
createMesh() {
// CREATE OUR MESH
const geometry = new Box(this.gl);
const program = new Program(this.gl, {
vertex: baseVertex,
fragment: baseFragment,
});
this.mesh = new Mesh(this.gl, {geometry, program});
this.mesh.setParent(this.scene);
}
update() {
requestAnimationFrame(this.update);
// RENDER SCENE
this.renderer.render({scene: this.scene, camera: this.camera});
}
resize() {
// SET SIZE AND UPDATE ASPECT RATIO
this.renderer.setSize(window.innerWidth, window.innerHeight);
this.camera.perspective({
aspect: this.gl.canvas.width / this.gl.canvas.height
});
}
}
new GL();
shaders/base/vertex.glsl
attribute vec3 position;
attribute vec2 uv;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.);
}
shaders/base/fragment.glsl
precision highp float;
varying vec2 vUv;
void main() {
gl_FragColor = vec4(vUv, 1.0, 1.0);
}
style.css
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html, body {
height: 100%;
width: 100%;
overflow: hidden;
}
1. Environment initialization:
- Renderer creation
- Camera configuration
- Scene preparation
2. Mesh creation: we define a simple cube as our 3D object, using basic shaders.
3. Render and resize management: a render loop is set up, and the canvas is scaled to the size of the window.
You should get this:
Creating and Configuring Post-Processing Passes
Let’s move on to the most interesting part of this article: creating the post-processing. OGL simplifies the job for us by providing a Post class that manages the entire post-processing system.
constructor(){
...
this.createGL();
this.createPost();
...
}
createPost() {
this.post = new Post(this.gl); // Remember to add Post to the imports from 'ogl'
console.log(this.post);
}
OGL makes it very easy to add a pass with the addPass method:
import passFragment from "./shaders/postprocessing/fragment.glsl?raw";
createPost() {
...
this.mainPass = this.post.addPass({
fragment: passFragment,
uniforms: {
uMouse: {value: [0, 0]},
uTime: {value: 0},
}
})
}
passFragment — to check that our code is working properly, we boost the green channel of the rendering.
precision highp float;
varying vec2 vUv;
uniform sampler2D tMap;
void main() {
vec4 original = texture2D(tMap, vUv);
original.g += 0.25;
gl_FragColor = original;
}
Note that OGL already provides a default vertex shader that passes vUv along to the fragment shader. The captured scene texture is also automatically exposed to our fragment shader as the tMap sampler.
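For reference, the built-in pass vertex shader looks roughly like this (fullscreen geometry expressed directly in clip space, so no matrices are needed; check the OGL source for the exact version):
attribute vec2 uv;
attribute vec2 position;
varying vec2 vUv;
void main() {
    vUv = uv;
    gl_Position = vec4(position, 0, 1);
}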
Now that we’ve configured our post-processing pass, we’ll modify our update method to pass rendering through the post-processing system.
update() {
...
// RENDER SCENE
// this.renderer.render({scene: this.scene, camera: this.camera});
// RENDER POST
this.post.render({
scene: this.scene,
camera: this.camera,
})
}
You should get something like this:
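One detail worth adding while we’re here: the post-processing render targets should follow the canvas size. OGL’s Post class exposes a resize method for this (it defaults to the current canvas size), so we can extend our resize handler:
resize() {
...
// KEEP THE POST-PROCESSING RENDER TARGETS IN SYNC WITH THE CANVAS
if (this.post) this.post.resize();
}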
As you can see from the introductory video, our aim is to create a tracking effect driven by the mouse. We therefore need to retrieve the mouse coordinates and pass them to our fragment shader. For a smoother result, we apply a lerp (linear interpolation) to the mouse coordinates.
const Lerp = (a, b, t) => (1 - t) * a + t * b; // Linear interpolation, used to smooth the mouse movement
...
constructor() {
...
this.createPost();
this.mouse = { // Add mouseCoordinate
x: 0,
y: 0,
lerpX: 0,
lerpY: 0,
}
window.addEventListener('mousemove', this.mouseMove.bind(this), false);
...
}
mouseMove(e) {
// NORMALIZE X & Y COORDINATES TO [-1, 1] (Y POINTING UP)
this.mouse.x = (e.clientX / window.innerWidth) * 2 - 1;
this.mouse.y = -(e.clientY / window.innerHeight) * 2 + 1;
}
update() {
...
// LERP THE MOUSE COORDINATES, THEN PASS THEM TO THE UNIFORM
this.mouse.lerpX = Lerp(this.mouse.lerpX, this.mouse.x, 0.025);
this.mouse.lerpY = Lerp(this.mouse.lerpY, this.mouse.y, 0.025);
this.mainPass.uniforms.uMouse.value = [this.mouse.lerpX, this.mouse.lerpY];
this.post.render({...
}
If you have followed these steps correctly, you should see no visual change and no errors: the uniform is wired up, but the shader doesn’t use it yet.
shaders/postprocessing/fragment.glsl
...
uniform vec2 uMouse;
void main() { ... }
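As an aside: we declared a uTime uniform in addPass but never update it. If you later want time-based effects, requestAnimationFrame passes a timestamp in milliseconds to its callback, which you can forward each frame (a sketch; the 0.001 factor just converts milliseconds to seconds):
update(t) {
...
this.mainPass.uniforms.uTime.value = t * 0.001; // elapsed time in seconds
...
}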
Post-processing shader
Now that our post-processing system is in place and we have the mouse coordinates, we can have some fun.
Let’s look at the effect we want to achieve:
If we zoom in on the image, we can see how the texture echoes: several copies of it are duplicated and superimposed with progressively larger offsets.
To make it clearer, we can represent it as follows:
First, let’s create a vec4, which will contain the final result of the operations we’re about to perform.
...
vec4 original = texture2D(tMap, vUv);
vec4 finalColor = vec4(0.);
Now let’s define the total number of iterations, and while we’re at it, add the mouse-based offset (we’ll come back to it below).
We define the total number of layers as a constant integer: as you’ll see, we’re going to use it as a loop bound, and in WebGL’s GLSL (ES 1.0) fragment shaders loop bounds must be constant expressions, so a constant integer is the way to go.
...
vec2 echoOffset = uMouse * 0.003;
const int echoLayers = 100;
For each iteration, we multiply the shared echoOffset by the layer index, producing a progressively larger positional offset.
We do the same for the alpha value (the transparency): using pow, we make it decay steadily from layer to layer, and finally we add the weighted sample to our final color. In other words, finalColor accumulates texture2D(tMap, vUv + i * echoOffset) * pow(attenuation, i) over all the layers.
...
float attenuation = .975; // Keep between 0 and 1 so each echo layer fades out
for (int i = 1; i <= echoLayers; i++) {
float layerFactor = float(i);
vec2 currentOffset = echoOffset * layerFactor;
float alpha = pow(attenuation, layerFactor);
finalColor += texture2D(tMap, vUv + currentOffset) * alpha;
}
If you try it like this, your eyes will probably hurt. That’s normal: we’ve just summed all the layers, so the light intensity adds up. We simply divide the final color by the total number of layers to get a normalized result.
Then we can adjust the final color according to the desired result.
finalColor /= float(echoLayers);
finalColor *= 2.5 + original * 0.25; // Adjustment example
gl_FragColor = finalColor;
I’ll leave you to play around with it: try changing the various parameters. This shader is a simple, easy-to-understand example of post-processing, but it isn’t very performant: with 100 layers, it does 100 texture reads per pixel.
Conclusion
There are plenty of ways to achieve more or less similar results, but this kind of post-processing is never cheap. Keep a close eye on performance whenever you use it.
Github repo : https://github.com/nicolas-giannantonio/blog-postprocessing