Three.js WebGPURenderer Part 1: Fragment/Vertex Shaders

Christian Helgeson
21 min read · Aug 15, 2024


If you've been judiciously following recent developments in Three.js, you may have found yourself wading into the confusing and uncharted woods of the WebGPURenderer without a mental map for how to deploy any of its new features. Indeed, while the Three.js project has largely moved away from the WebGLRenderer towards this new rendering paradigm, the WebGPURenderer, though capable of meeting most project requirements, technically remains in an unfinished and undocumented state. Avowed Three.js developers seeking familiar functionality may instead find themselves confused by unfamiliar terminology, such as Storage Buffer Nodes, automatic PassNode MRTs, Render Bundles, and a plethora of other inscrutable yet exciting new capabilities. Further still, while many features of the new renderer have been stable for some time, many others are still in a constant process of improvement, re-development, and tree shaking.

Pictured: Confused Three.js Acolyte undergoing the process of being Tree Shaken.

If you’re finding yourself in a similar place, you’re not alone. Like its namesake before it, the WebGPURenderer has a tendency to stymie beginners working from a dearth of resources, documentation, or charismatically produced YouTube videos. However, I believe that developers with a foothold of knowledge in the mature features of this new system will benefit greatly when the full feature set comes online. And once one masters the fundamentals, the new rendering paradigm proves itself both more intuitive and more versatile than the previous version of the Three.js API. Thus, as someone who has waded through and cleared these woods myself, I can vouch for my ability to guide you around some of the same pitfalls I encountered when first experimenting with Nodes, TSL, and all the fancy new features that Three.js now offers.

It’s an older meme, but it checks out sir.

Ultimately, the purpose of these tutorials is to provide a clear and comprehensive introduction to a large subset of Three.js’s latest features, and to help developers implement them quickly. While these tutorials should be useful for advanced developers, they are also intended to be accessible to aspiring developers who have minimal Three.js knowledge. Naturally, this means the tutorials will include some basic project setup, explain seemingly obvious features, and demonstrate rudimentary effects with established WebGL-centric solutions, which an industrious developer might find more tedious than simply looking through the Three.js demos and figuring things out independently. However, my goal with these initial tutorials is not just to explain the new API, but to foster a deeper comprehension of these new systems through play and experimentation. By doing so, I hope to lead beginners towards an underlying understanding of Three.js’s mechanics. After reading these tutorials, you should not only be able to replicate an effect by rote, but also embark on your own exploration of Three.js, even as the API continues to change.

Former student at peace within the flow of Javascript.

In that spirit, we’ll begin our WebGPURenderer journey by using Nodes to create a fragment shader on the surface of a box mesh. From there, we’ll take it a step further by instancing the mesh and applying our newfound understanding of nodes to dynamically transform each instance, resulting in a visually simple, yet pleasing effect.

Part One: Project Setup

Our initial web page setup is very similar to that of the official Three.js examples, albeit simplified for clarity. For this tutorial, we’ll start by creating a Node.js project using Vite as our build tool. Our dependencies are listed below…

npm init -y
npm install vite
npm install three
npm install --save-dev vite-plugin-top-level-await

Our project’s package.json file is similarly minimal…

// package.json
{
  "name": "01_tsl_basics", // or whatever you prefer
  "type": "module",
  "main": "index.js",
  "scripts": {
    "dev": "vite",
    "build": "vite build"
  },
  "dependencies": {
    // Ensure three version "^0.167.0" or later
    "three": "^0.167.0",
    "vite": "^5.3.3"
  },
  "devDependencies": {
    "vite-plugin-top-level-await": "^1.4.1"
  }
}

In the package.json, our Vite project requires us to apply a plugin for top-level await statements. The WebGPURenderer uses top-level await to query for WebGPU-compatible graphics resources on your computer, so this plugin is necessary for our code to function properly. Fortunately, applying the plugin within our configuration file is straightforward. As shown below, we’ll also use the configuration file to define module aliases for Vite, specifying that we only want to import from Three.js’s WebGPU build files.

// vite.config.js ( this is all you need to write in this file )

import { defineConfig } from "vite";
import topLevelAwait from "vite-plugin-top-level-await";

export default defineConfig( {
  // For issues with the Three.js WebGPU build, refer to this link:
  // https://github.com/mrdoob/three.js/pull/28650#issuecomment-2198568721
  resolve: {
    alias: {
      'three/addons': 'three/examples/jsm',
      'three/tsl': 'three/webgpu',
      'three': 'three/webgpu'
    }
  },
  // Apply the top-level await plugin to our vite config.
  plugins: [
    topLevelAwait( {
      promiseExportName: "__tla",
      promiseImportName: i => `__tla_${i}`
    } )
  ],
} );
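For reference, the kind of top-level await the WebGPU build relies on looks something like this (an illustrative sketch of a raw WebGPU adapter request, not the renderer’s actual source):

// An await at module scope; without the plugin, Vite's default
// build targets may reject this syntax.
const adapter = await navigator.gpu?.requestAdapter();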

Once the configuration is written, we can start adding code to our project, beginning with our index.html and main.css files. The main.css file is copied directly from the three.js examples directory, while the index.html file is pasted below.

<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>three.js TSL Tutorial Part 1</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">
    <link type="text/css" rel="stylesheet" href="main.css">
  </head>
  <body>
    <script type="module" src="./script.js"></script>
  </body>
</html>
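If you’d rather not copy main.css from the Three.js repository, a minimal stand-in like the one below is enough for our purposes (an assumption on my part — the official file contains additional styling used across the examples site):

/* main.css (minimal stand-in) */
body {
  margin: 0;
  overflow: hidden;
}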

With the styling and page layout in place, initialize the Three.js project with a script.js file. The initial JavaScript file is a modified version of the Geometry Cube example from the Three.js website, albeit one where we deploy the OrbitControls add-on to manipulate our camera, and where we substitute the existing renderer with the WebGPURenderer. In the starter file seen below, the program simply creates a scene containing a static cube mesh whose material is a texture map of a crate found at this link. Once downloaded, the texture can be placed into a ‘textures’ folder at the root of your project, and then imported into JavaScript via THREE.TextureLoader.

// script.js
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/Addons.js';

import GUI from 'three/examples/jsm/libs/lil-gui.module.min.js';

let camera, scene, renderer;
let mesh;

function init() {

	// Create a PerspectiveCamera with an FOV of 70.
	camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 0.1, 100 );
	// Set the camera back so its position does not intersect with the center of the cube mesh.
	camera.position.z = 2;

	scene = new THREE.Scene();

	// Access the texture via the relative path to the texture's file.
	const texture = new THREE.TextureLoader().load( 'textures/crate.gif' );
	// Bring the texture into the correct color space.
	// Removing this line will make the texture seem desaturated or washed out.
	texture.colorSpace = THREE.SRGBColorSpace;

	// Apply a texture map to the material.
	const material = new THREE.MeshBasicMaterial( { map: texture } );

	// Define the geometry of our mesh.
	const geometry = new THREE.BoxGeometry( 1, 1, 1 );

	// Create a mesh with the specified geometry and material.
	mesh = new THREE.Mesh( geometry, material );
	scene.add( mesh );
	// Rotate the mesh slightly.
	mesh.rotation.y += Math.PI / 4;

	// Create a renderer and set its animation loop.
	renderer = new THREE.WebGPURenderer( { antialias: false } );
	// Set the renderer's pixel ratio.
	renderer.setPixelRatio( window.devicePixelRatio );
	// Set the size of the renderer to cover the full size of the window.
	renderer.setSize( window.innerWidth, window.innerHeight );
	// Tell the renderer to run the 'animate' function each frame.
	renderer.setAnimationLoop( animate );
	document.body.appendChild( renderer.domElement );

	const controls = new OrbitControls( camera, renderer.domElement );
	// Min/max distance of the camera from the controls' target (the origin by default).
	controls.minDistance = 1;
	controls.maxDistance = 20;

	// Define the application's behavior upon window resize.
	window.addEventListener( 'resize', onWindowResize );

}

function onWindowResize() {

	// Update the camera's aspect ratio and the renderer's size to reflect
	// the new screen dimensions upon a browser window resize.
	camera.aspect = window.innerWidth / window.innerHeight;
	camera.updateProjectionMatrix();
	renderer.setSize( window.innerWidth, window.innerHeight );

}

function animate() {

	// Render one frame.
	renderer.render( scene, camera );

}

init();

And voila, we have our scene:

Crates: Perfect for all your crouching and vaulting needs!

Now, our journey with nodes can truly begin.

Part Two: Fragment Nodes

Let’s begin by defining what a node is conceptually. Essentially, a node is a placeholder for a unit of GPU computation: a representation of anything from a basic arithmetic operation to a lighting algorithm, a struct, a line of code, a shader, or a series of post-processing passes. The specifics of how any given node works aren’t really important to understand up front. What matters is that nodes are the core building blocks used to write shaders and achieve a wide variety of effects within the WebGPU paradigm of the Three.js API.
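As a tiny illustrative sketch of the idea (using the float() node function we’ll import from ‘three/tsl’ later in this tutorial):

// Each call builds a node in a graph rather than executing immediately;
// the graph is only compiled into actual shader code when the material builds.
const half = float( 0.5 );        // A constant node.
const doubled = half.mul( 2.0 );  // An operator node representing 0.5 * 2.0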

Typically, nodes are written and modified either inline or within TSL code blocks. TSL stands for Three.js Shading Language, an intermediary shader format that translates Three.js nodes into WGSL or GLSL code, depending on whether the WebGPURenderer deploys its WebGPU or WebGL backend.

Wait, the WebGPURenderer has a WebGL backend? Yes, it does! If the renderer detects that the device doesn’t support WebGPU, it will automatically fall back to WebGL, ensuring that projects built with WebGPU in mind can still run on a broader range of devices. TSL isn’t just a simple compatibility layer; it also abstracts away much of the setup and syntax needed to deploy shaders. Consequently, unless you need specific features not yet supported by the node system, TSL is the recommended way to write shaders in Three.js.
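If you’d like to see the fallback path for yourself, the renderer’s constructor accepts a forceWebGL flag, which to my knowledge forces the WebGL backend even on WebGPU-capable devices (a testing sketch; behavior may vary between versions):

// Force the WebGPURenderer to use its WebGL backend (handy for testing the fallback).
const renderer = new THREE.WebGPURenderer( { forceWebGL: true } );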

So how do we write a shader in TSL? Let’s start off with the simplest possible shader we can write: a fragment shader that outputs texture UVs to the surface of a mesh. To modify our mesh’s material using TSL shaders, we’ll need to change it from a MeshBasicMaterial to a MeshBasicNodeMaterial. As you can imagine, MeshBasicNodeMaterial is just an extension of the feature set provided by its namesake class, allowing its properties to be defined by nodes rather than by traditional means. Accordingly, this change will not yet alter the visual output of your scene.

// Old version: 
// const material = new THREE.MeshBasicMaterial( { map: texture } );
// New version:
const material = new THREE.MeshBasicNodeMaterial( { map: texture } );

With this new material type, we can manipulate the fragment values the mesh material outputs by modifying the NodeMaterial’s fragmentNode property. First, import the uv() function from ‘three/tsl’ to access a generic UV range. Then, write a TSL function that returns the value of uv(). Finally, assign that TSL function as the value of our material’s fragmentNode property, which applies the function as the new fragment shader of the material. Pay special attention to the syntax with which a TSL function is created; in particular, you need to call the function when assigning it in order for the shader to execute properly.

import { tslFn, uv } from 'three/tsl'

const material = new THREE.MeshBasicNodeMaterial( { map: texture } );
// TSL code blocks are created within a call to tslFn()
// (renamed to Fn() in more recent versions of Three.js).
const returnUV = tslFn( () => {
	return uv();
} );
material.fragmentNode = returnUV();

With shorter shaders like these, we can actually simplify our syntax and return a value without explicit function brackets.

// Change from this code...
const returnUV = Fn( () => {
	return uv();
} );
material.fragmentNode = returnUV();

// ...to this code.
material.fragmentNode = uv();

Whenever possible, try to inline node operations to make them concise and readable.

// When possible, prefer this...
material.fragmentNode = uv().distance( vec2( 0.5, 0 ) ).oneMinus().mul( 3 );

// ...over this.
const fragmentShaderTSL = Fn( () => {
	const uvNode = uv();
	const uvDistance = uvNode.distance( vec2( 0.5, 0 ) );
	return uvDistance.oneMinus().mul( 3 );
} );

material.fragmentNode = fragmentShaderTSL();

Once we’ve applied this function to fragmentNode, our mesh’s surfaces will display UVs in the range of 0 to 1, overriding our material’s existing texture property.

The shader “Hello World”

Hmm, while these UVs are useful, they aren’t as visually interesting as our existing texture. What if we reapplied our crate texture to the surface of the material, only this time using the fragmentNode? To achieve this, we can import the texture() function from ‘three/tsl’, which will convert our existing texture into a TextureNode, allowing us to use it within our fragment shader.

import { Fn, uv, texture } from 'three/tsl'

// Rename texture so its identifier doesn't conflict with the texture() function.
const crateTexture = new THREE.TextureLoader().load( 'textures/crate.gif' );

// Remove texture argument from material constructor
const material = new THREE.MeshBasicNodeMaterial();

// Read texture values in fragment shader.
material.fragmentNode = texture( crateTexture );

From here, we can apply various modifications to the output of our fragmentNode, including dynamically adjusting the texture's position based on the elapsed time of our application. Three.js provides four distinct timer nodes that can be used as uniforms within our TSL shader code. timerGlobal and timerLocal both represent elapsed time: timerGlobal tracks the time since the application started, while timerLocal tracks the time since the creation of the timer itself within the application. Additionally, timerDelta holds the elapsed time between the previous frame and the current frame, and timerFrame passes the current frame's ID. For our purposes, we want a simple variable that accumulates time, so we'll use timerLocal. By incorporating timerLocal, we can offset the UVs on our surface to create a scrolling texture effect.
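As a quick reference, here’s a sketch of all four timer nodes side by side, based on the descriptions above (we’ll only need timerLocal below):

import { timerGlobal, timerLocal, timerDelta, timerFrame } from 'three/tsl'

const tGlobal = timerGlobal();  // Seconds since the application started.
const tLocal = timerLocal();    // Seconds since this timer node was created.
const tDelta = timerDelta();    // Seconds between the previous and current frame.
const frameId = timerFrame();   // The current frame's ID.

Now, on to the scrolling effect: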

import { Fn, uv, texture, timerLocal, negate, vec2 } from 'three/tsl'

// Our texture will repeat even as we move UVs out of bounds.
crateTexture.wrapS = THREE.RepeatWrapping;
crateTexture.wrapT = THREE.RepeatWrapping;

// We can directly specify the coordinates used to sample the texture
// by providing a uv node as a second argument.
material.fragmentNode = texture( crateTexture, uv().add( vec2( timerLocal(), negate( timerLocal() ) ) ) );

A crate result!

That’s certainly a more dynamic result than our UVs! However, due to the nature of our fragment shader, the edges of our mesh are poorly defined. Although our mesh surfaces are dynamic, the mesh lacks any dimensionality without proper lighting. So, why not add some lights? Let’s begin with some basic lighting — nothing too fancy or chic, just a key light and a fill light. Then, we’ll convert our mesh’s material from a MeshBasicNodeMaterial to a material type that will react to our lighting.

// Add the key directional light to our scene
const directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.position.set(5, 3, -7.5);
scene.add(directionalLight);

// Add a fill light of lower intensity (0.3) pointing in the opposite direction
const fillLight = new THREE.DirectionalLight(0xffffff, 0.3);
fillLight.position.set(-5, 3, 3.5);
scene.add(fillLight);

// Convert our material from MeshBasicNodeMaterial to MeshStandardNodeMaterial
// MeshStandardNodeMaterial will integrate lighting information into the
// material's output.
const material = new THREE.MeshStandardNodeMaterial();

Surely these changes will give us the lighting we desire, right?

Nothing’s… happening.

Though there’s no lighting, this is actually the correct behavior given the code we’ve already written. There’s an important attribute of a material’s fragmentNode I’ve neglected to mention for the sake of example. In any instance where you apply a shader to the fragmentNode of your material, that shader will completely override the output fragment value of your mesh. It will not matter whether your scene has a complex lighting setup. It will not matter whether you have chosen a material that can properly receive light and shadow. Regardless of whatever elements you’ve added to your scene, the code of a material’s fragmentNode will completely override that material’s default fragment output.

However, while a material’s fragmentNode ignores the material’s internal shader logic and its interaction with outside elements, a material’s colorNode does not. The colorNode does what it sounds like: it only modifies the base color value that a surface outputs, without affecting how that base color is altered by the larger lighting hierarchy of your scene. Therefore, if you want your mesh to properly integrate into your scene’s existing lighting setup, simply move your existing fragment shader from the fragmentNode to the colorNode.

// material.fragmentNode = texture( crateTexture, uv().add( vec2( timerLocal(), negate( timerLocal() ) ) ) );
material.colorNode = texture( crateTexture, uv().add( vec2( timerLocal(), negate( timerLocal() ) ) ) );

Roger Deakins, eat your heart out.

Part Three: Vertex Shader Setup

Now that we have a little bit of familiarity with the material system and TSL shaders, let’s create something a bit more exciting. We’ll stick with our basic cube and two lights, but leverage the power of vertex shaders to transform this basic cube into a dazzling dance of colorful, rotating cubes.

To give ourselves some motivation, let’s look at the effect our TSL vertex shader will produce:

Footlight Cuberade

To begin, we need to define some constants, such as how many concentric circles we want to create, and how many cubes we want in our scene. We’ll populate the scene with eighty cube mesh instances placed within four concentric circles. For organizational purposes, place the code below above your material code, as you’ll need these constants within the material’s position shader.

// The number of cube mesh instances to be created.
const instanceCount = 80;
// The number of concentric circles in the scene.
const numCircles = 4;
// Divide the number of instances equally amongst the circles.
const meshesPerCircle = instanceCount / numCircles;

const material = new THREE.MeshStandardNodeMaterial();

Next, shrink the scale of the cube geometry, remove the mesh’s default rotation, and switch the mesh over from a standard Mesh to an InstancedMesh. Instanced meshes leverage functionality within the underlying graphics API that allows applications to draw multiple instances of the same mesh in a single draw call, improving overall rendering performance.

// const geometry = new THREE.BoxGeometry( 1, 1, 1 );
// mesh = new THREE.Mesh( geometry, material );
const geometry = new THREE.BoxGeometry( 0.1, 0.1, 0.1 );
mesh = new THREE.InstancedMesh( geometry, material, instanceCount );
// mesh.rotation.y += Math.PI / 4;

Finally, we’ll import all the requisite Node functionality we need from the Three.js library into our project, create a set of uniforms, and make those uniforms accessible to the GUI.

import * as THREE from 'three';
// All node functionality necessary to create the effects present in this tutorial.
import { positionGeometry, cameraProjectionMatrix, modelViewProjection, modelScale, positionView, modelViewMatrix, storage, attribute, float, timerLocal, uniform, tslFn, vec3, vec4, rotate, PI2, sin, cos, instanceIndex, negate, texture, uv, vec2, positionLocal, int } from 'three/tsl';
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

// Import GUI to control the value of our uniforms.
import GUI from 'three/addons/libs/lil-gui.module.min.js';

function init() {

	// camera, scene, lights, crate texture code...

	// Unlike the old shader system, uniforms only have to be defined once,
	// and can then be used anywhere as they are.
	const effectController = {
		// The uniform() function creates a UniformNode that holds a uniform value.
		uCircleRadius: uniform( 1.0 ),
		uCircleSpeed: uniform( 0.5 ),
		uSeparationStart: uniform( 1.0 ),
		uSeparationEnd: uniform( 2.0 ),
		uCircleBounce: uniform( 0.02 ),
	};

	// ...

	// The value of a UniformNode is contained in its 'value' property.
	// Therefore, the GUI modifies this property and not the UniformNode as a whole.
	const gui = new GUI();
	gui.add( effectController.uCircleRadius, 'value', 0.1, 3.0, 0.1 ).name( 'Circle Radius' );
	gui.add( effectController.uCircleSpeed, 'value', 0.1, 3.0, 0.1 ).name( 'Circle Speed' );
	gui.add( effectController.uSeparationStart, 'value', 0.5, 4, 0.1 ).name( 'Separation Start' );
	gui.add( effectController.uSeparationEnd, 'value', 1.0, 5.0, 0.1 ).name( 'Separation End' );
	gui.add( effectController.uCircleBounce, 'value', 0.01, 0.2, 0.001 ).name( 'Circle Bounce' );

}

With all those changes, our scene should look just like it does below, with our previous fragment shader still operating on our much smaller cube.

Tiny baby cube.

Part Four: Writing the Vertex Shader

When using the Three.js WebGPURenderer, you can apply a vertex shader to a mesh’s NodeMaterial by assigning a TSL function to the material’s positionNode or vertexNode property. A TSL function assigned to positionNode will still respect the standard model-view-projection (MVP) transformations that are already applied to your mesh. Essentially, this means that complex transformations of your mesh’s vertices, or of your mesh’s overall position, can be expressed much as they would be in JavaScript. Moreover, since these operations execute in parallel within the material’s vertex shader, they will be much more performant than equivalent CPU operations.

When using vertexNode, the behavior of a function is slightly different. This node will output the raw value returned by a TSL function directly to the vertex shader, bypassing the standard MVP transformation. Therefore, if you assign a TSL function to vertexNode, you must manually apply the MVP transformation within that function.

To demonstrate the difference, I’ve written two functions below, one for the positionNode and one for the vertexNode. Each shader performs the same operation: moving the mesh’s position along the x-axis.

const material = new THREE.MeshStandardNodeMaterial();

// The positionNode shader and vertexNode shader move the mesh in the same way.
// When testing the functionality of each shader, be sure to comment out the
// one you are not using.

// Position Node Approach
material.positionNode = tslFn( () => {

	// In a positionNode, positionLocal holds the vertex position in the
	// mesh's local (object) space, before any MVP transformation.
	const position = positionLocal;

	// Oscillate back and forth along the x-axis.
	const moveX = sin( timerLocal() );

	// Equivalent of mesh.position.x += Math.sin( time ) in plain JavaScript.
	position.x.addAssign( moveX );

	return position;

} )();

// Vertex Node Approach
material.vertexNode = tslFn( () => {

	// In a vertexNode, positionLocal holds the raw geometry positions of the mesh.
	const position = positionLocal;

	position.x.addAssign( sin( timerLocal() ) );

	// We must apply the MVP matrices manually for the output to match.
	return cameraProjectionMatrix.mul( modelViewMatrix ).mul( position );

} )();

Whee!

Note how in both shaders we use positionLocal to access the vertices of our mesh in local space. Within Three.js, there are multiple convenience nodes that access pre-transformed versions of your mesh vertices, including:

  • positionWorld: The position of a mesh geometry transformed by the modelWorldMatrix, which scales, rotates, and translates your mesh vertices.
  • positionView: The position of a mesh geometry transformed by the modelViewMatrix, which brings your mesh into view space.
  • modelViewProjection: Performs the standard MVP transformation on the position passed as an argument.

With these additional nodes, we could modify our vertexNode shader to output the vertices of the mesh in myriad ways, without any changes to the shader’s visual output.

// Vertex Node approaches for plainly rendering our mesh.
// (Comment out all but the approach you want to test; only the
// first return statement would execute.)
material.vertexNode = tslFn( () => {
	// Approach 1
	return cameraProjectionMatrix.mul( modelViewMatrix ).mul( positionLocal );
	// Approach 2: project the view-space position directly.
	return cameraProjectionMatrix.mul( positionView );
	// Approach 3
	return modelViewProjection( positionLocal );
} )();

Since we don’t want to mess with the standard MVP projection process, we’ll write our vertex shader for the material’s positionNode. Let’s start by deleting any of the example position or vertex shaders we’ve created, and then create a new shader that we’ll later assign to our positionNode. Within this shader, we’ll extract our uniforms and create some variables that we’ll use repeatedly across the shader.

const positionTSL = tslFn( () => {

	// Uniforms can be destructured, since they are just JavaScript object properties that represent uniforms!
	const { uCircleRadius, uCircleSpeed, uSeparationStart, uSeparationEnd, uCircleBounce } = effectController;

	// Access the time elapsed since shader creation.
	const time = timerLocal();
	const circleSpeed = time.mul( uCircleSpeed );

} );

We’ll then want to access some instance data within our position shader to properly coordinate the movement of each instance of our cube mesh. This means we’ll have to access the index of the instance that the current vertex belongs to. To do that, all we have to do is read the instanceIndex value we imported earlier. Within a function block assigned to either positionNode or vertexNode, instanceIndex represents the index of the mesh instance that the current vertex belongs to. If you’re wondering why I explicitly make the distinction that this is its value within the context of a vertex shader, that’s because instanceIndex is a contextual node whose value changes depending on the context in which it is used. While it’s not important to know at the moment, other values that instanceIndex can represent will become essential down the road. For now, let’s continue by adding instanceIndex to our position shader, and deriving other indices from its value.

const positionTSL = tslFn( () => {

	const { uCircleRadius, uCircleSpeed, uSeparationStart, uSeparationEnd, uCircleBounce } = effectController;
	const time = timerLocal();
	const circleSpeed = time.mul( uCircleSpeed );

	// Index of a cube within its respective concentric circle.
	// NOTE: instanceWithinCircle uses 0-based indexing.
	const instanceWithinCircle = instanceIndex.remainder( meshesPerCircle );

	// Index of the circle that the cube mesh belongs to.
	// NOTE: circleIndex uses 1-based indexing.
	const circleIndex = instanceIndex.div( meshesPerCircle ).add( 1 );

	// Example values when meshesPerCircle === 20:
	// instanceIndex: 0 ---> instance is cube 0 of circle 1.
	// instanceIndex: 16 --> instance is cube 16 of circle 1.
	// instanceIndex: 22 --> instance is cube 2 of circle 2.
	// instanceIndex: 47 --> instance is cube 7 of circle 3.

} );

With these indices in place, we’ll use them to separate and offset each mesh instance. Below, we’ll add a short piece of code to our function to demonstrate that these values work, though this will not be our final effect.

const positionTSL = tslFn( () => {

	// ...

	const newPosition = positionLocal;

	// Bring instanceWithinCircle from the range [0, meshesPerCircle) into the range [-1, 1).
	const range = float( instanceWithinCircle ).sub( meshesPerCircle / 2 ).div( meshesPerCircle / 2 );

	// Offset mesh x.
	newPosition.x.addAssign( range.mul( 2 ) );

	// Offset mesh y by circleIndex.
	newPosition.y.addAssign( int( circleIndex ).sub( 2 ) );

	return newPosition;

} );

material.positionNode = positionTSL();

With a successful demonstration showing we can manipulate our instances separately, it’s time to complete the shader. The following section will be heavy on code and light on text, but don’t worry, there will be plenty of code comments to help you follow along.

First, set the position of the scene’s perspective camera back 15 units:
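// In init(), pull the camera further back so all four circles stay in view.
// Previously: camera.position.z = 2;
camera.position.z = 15;

Then, delete the sample code in the block above, and replace it with this line, which returns either a negative or a positive value depending on the parity of the circle index.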

// Circle Index Even = 1, Circle Index Odd = -1.
// oneMinus() computes 1 - x, so:
// circleIndex 2 -> ( 2 % 2 ) * 2 = 0 --> 1 - 0 = 1
// circleIndex 3 -> ( 3 % 2 ) * 2 = 2 --> 1 - 2 = -1
const evenOdd = circleIndex.remainder( 2 ).mul( 2 ).oneMinus();

Next, create a variable representing the radius of one of the concentric circles. As circleIndex increases, the radius of each successive circle will also increase. The extent to which it increases is driven by the circle radius uniform we previously created and applied to our GUI.

// Increase radius when we enter the next circle.
const circleRadius = uCircleRadius.mul( circleIndex );

We now need to move each instance of the cube mesh into its respective place. To do this, we calculate an angle around the circle for each cube, and move the cube along that angle onto its circle’s circumference. Additionally, we’ll scale the cubes in the outer circles such that they successively grow bigger than the cubes in the inner circles.

material.positionNode = tslFn( () => {

	// ...

	// Bring instanceWithinCircle into the range [0, 2*PI] to get 'meshesPerCircle'
	// angles from the origin to the circle perimeter.
	const angle = float( instanceWithinCircle ).div( meshesPerCircle ).mul( PI2 ).add( circleSpeed );

	// The radius of a circle is the distance from its center, located at
	// the origin, to its edge. All we have to do is scale the x and y
	// components of our angle's direction by this radius to place our
	// mesh instances along the circle's circumference.

	// Rotate even and odd circles in opposite directions.
	const circleX = sin( angle ).mul( circleRadius ).mul( evenOdd );
	const circleY = cos( angle ).mul( circleRadius );

	// Scale cubes in later concentric circles to be larger.
	const scalePosition = positionLocal.mul( circleIndex );

	const newPosition = scalePosition.add( vec3( circleX, circleY, 0.0 ) );
	return newPosition;

} )();

Output #1

Hmm, we don’t really get a sense that our meshes are three-dimensional cubes and not two-dimensional planes. After the scale operation, let’s rotate each individual cube over time to reveal the mesh’s true nature.

// Scale cubes in later concentric circles to be larger.
const scalePosition = positionLocal.mul( circleIndex );

// Rotate the individual cubes that form the concentric circles.
const rotatePosition = rotate( scalePosition, vec3( time, time, time ) );

const newPosition = rotatePosition.add( vec3( circleX, circleY, 0.0 ) );

Output #2

Perfect. Now, we can finish polishing our position shader by adding additional offsets to the position of each cube.

// Final Position Shader

const positionTSL = tslFn( () => {

	const { uCircleRadius, uCircleSpeed, uSeparationStart, uSeparationEnd, uCircleBounce } = effectController;
	const time = timerLocal();
	const circleSpeed = time.mul( uCircleSpeed );

	const instanceWithinCircle = instanceIndex.remainder( meshesPerCircle );
	const circleIndex = instanceIndex.div( meshesPerCircle ).add( 1 );
	const evenOdd = circleIndex.remainder( 2 ).mul( 2 ).oneMinus();

	const circleRadius = uCircleRadius.mul( circleIndex );
	const angle = float( instanceWithinCircle ).div( meshesPerCircle ).mul( PI2 ).add( circleSpeed );
	const circleX = sin( angle ).mul( circleRadius ).mul( evenOdd );
	const circleY = cos( angle ).mul( circleRadius );

	const scalePosition = positionLocal.mul( circleIndex );
	const rotatePosition = rotate( scalePosition, vec3( time, time, time ) );

	// Control how much the circles bounce vertically.
	// The scale of the bounce is determined by the circle bounce uniform.
	// We scale the time by 10 to make the bounce go faster.
	const bounceOffset = cos( time.mul( 10 ) ).mul( uCircleBounce );

	// Bounce odd and even circles in opposite directions.
	const bounce = circleIndex.remainder( 2 ).equal( 0 ).cond( bounceOffset, negate( bounceOffset ) );

	// Distance between the minimum and maximum z-distance between circles.
	// Separation Start represents the minimum amount by which each circle
	// is offset from another circle in the z-direction. Separation End
	// represents the maximum amount by which the circles are offset in
	// the z-direction.
	const separationDistance = uSeparationEnd.sub( uSeparationStart );

	// Move sin into the range of 0 to 1.
	const sinRange = ( sin( time ).add( 1 ) ).mul( 0.5 );

	// Make the circle separation oscillate in a range of separationStart to separationEnd.
	const separation = uSeparationStart.add( sinRange.mul( separationDistance ) );

	// Y position is offset by the bounce. Z-distance from the origin increases with each circle.
	const newPosition = rotatePosition.add( vec3( circleX, circleY.add( bounce ), float( circleIndex ).mul( separation ) ) );
	return newPosition;

} );

material.positionNode = positionTSL();

Final vertex output!

All that’s left to do now is apply randomized colors to each instance of our mesh, and our effect is complete!

material.positionNode = positionTSL();
// material.colorNode = texture( crateTexture, uv().add( vec2( timerLocal(), negate( timerLocal() ) ) ) );

// Vary each color channel over time, offset by the instance's index.
const r = sin( timerLocal().add( instanceIndex ) );
const g = cos( timerLocal().add( instanceIndex ) );
const b = sin( timerLocal() );
material.fragmentNode = vec4( r, g, b, 1.0 );

Footlight Cuberade

Part Five: Conclusion

And there you have it, you’ve just taken your first steps into the world of the WebGPURenderer. If you’d like to see the final code for this tutorial, it can be found at this project’s GitHub repository. Feel free to comment on the tutorial, or message me on GitHub or the Three.js Forums if you have any questions or suggestions.

In the next tutorial, we’ll explore the new compute capabilities of Three.js, writing a compute shader that computes the velocity of multiple particles in parallel. Hope to see you there!
