WebGPU — Tutorial 1: “Hello World!”

Julien Etienne
Jan 9, 2024

What is WebGPU and Why

WebGPU is the successor to WebGL and WebGL 2.0. These technologies allow developers to harness the user’s GPU for improved performance when rendering graphics or performing general-purpose computing, e.g., AI/ML.

It is heavily influenced by Vulkan, a cross-platform, low-level graphics and compute API.

WebGPU addresses performance and compatibility issues present in WebGL, particularly across different GPU hardware. The API sits on top of Vulkan, DirectX, or Metal, depending on what the device supports.

Performance is said to be not far off native. This opens up a world of new applications and industries that were not as feasible in the past decade with previous HTML5 technologies.

Support

At the time of writing, WebGPU is not consistently ready for production, but that won't last long. For now, you can enable flags or use canary releases where possible. (Linux support is somewhat limited at the time of writing.)

What are we doing?

A triangle serves as the “Hello World” of graphics programming. This article is a basic example to get you started, without long-winded explanations. For more in-depth insights, explore MDN or ask your preferred AI chatbot.

Requirements

Being familiar with any programming language will be beneficial. You don’t need advanced JavaScript skills, but you should be able to follow the steps to set up a minimal development environment.

First, make sure WebGPU is working in your browser; if not, download a browser that supports WebGPU or enable the necessary flags.
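A quick way to check is to paste a couple of lines into the browser's DevTools console. This is a minimal sketch that only assumes the standard navigator.gpu entry point:

// Logs true if the browser exposes the WebGPU API at all
console.log('WebGPU available:', !!navigator.gpu)

// A stricter check: can we actually acquire an adapter?
console.log('Adapter acquired:', !!(await navigator.gpu?.requestAdapter()))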

I’m using VSCode, but you can use anything.

Setup

We are using JavaScript; this tutorial can easily be converted to TypeScript if that’s your preference. We are using native modules because we want the simplicity of top-level await. So, do this first:

  • If you don’t have Node installed, install nvm, then install Node.js using nvm.
  • Install http-server globally using npm i -g http-server
  • Make a directory called webgpu-hello-world
  • cd into that directory and create index.html, script.js, and shaders.wgsl.js
  • If you’re using VSCode, install the WGSL Literal syntax highlighter.

index.html

Copy and paste the following HTML. Using type="module" enables top-level await and lets us import the shaders file. The canvas is a square whose sides are set to 50% of the viewport height.

<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>WebGPU - Hello World!</title>
  <style>
    /* Some boilerplate styling */
    body {
      overflow: hidden;
      background: #222;
      display: grid;
      place-items: center;
      height: 100dvh;
    }

    canvas {
      width: 50dvh;
      height: 50dvh;
    }
  </style>
</head>

<body>
  <canvas width="500" height="500"></canvas>
  <script type="module" src="./script.js"></script>
</body>

</html>

script.js pt 1

The first thing we will do is import the shaders. This file will contain the vertex and fragment shaders we will use to draw the triangle.

import code from './shaders.wgsl.js'

requestAdapter() requests access to the GPU. As you may notice, it’s promise-based, hence the await. We use the optional chaining operator ?. so that if navigator.gpu is undefined, or the adapter comes back empty, requestAdapter and requestDevice return a non-value without throwing a TypeError.

requestDevice returns a GPUDevice object that gives direct access to the GPU. The catch handles the OperationError that requestDevice can throw.

let adapter
let device

try {
  adapter = await navigator.gpu?.requestAdapter()
  device = await adapter?.requestDevice()
  if (!device) console.error('WebGPU is not supported')
} catch (e) {
  console.error(e)
}

Next, we obtain the canvas element and its WebGPU context so the GPU knows where to draw its output. We also need the preferred canvas format, ensuring the optimal pixel format for the underlying graphics API. This format defines how red, green, blue, and alpha values are stored and interpreted in a pixel; in practice it will be 'bgra8unorm' or 'rgba8unorm', depending on the platform.

const context = document.querySelector('canvas')?.getContext('webgpu')
const format = navigator.gpu.getPreferredCanvasFormat()

We then supply this information to the swap chain by configuring the context using the GPU device and the pixel format.

context.configure({
  device,
  format,
})
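configure accepts more options than we need here. For example, alphaMode (part of the standard GPUCanvasConfiguration) controls how the canvas is composited with the page. A sketch of the same call with it spelled out:

// Equivalent to the call above, with compositing made explicit.
// 'opaque' is the default; 'premultiplied' lets page content show
// through wherever alpha < 1.
context.configure({
  device,
  format,
  alphaMode: 'opaque',
})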

shaders.wgsl.js pt 1

We are going to set up the file by placing the WGSL Literal comment tag just before our empty template literal ``, which activates the syntax highlighting.


export default /* wgsl */ ``

Between the backticks, add the @vertex and @fragment shaders.

  1. @vertex fn: Declares a vertex function. It processes individual vertices in a graphics pipeline during rendering.
  2. @builtin(vertex_index) i: u32: Declares a built-in input representing the vertex index, an identifier for the current vertex.
  3. -> @builtin(position) vec4f: Specifies the output type (vec4f) and the built-in semantic (position) for the vertex function.
  4. return vec4(0);: Returns a 4D vector with all components set to 0.

export default /* wgsl */ `

@vertex fn vs (@builtin(vertex_index) i : u32) -> @builtin(position) vec4f {
  return vec4(0);
}

@fragment fn fs() -> @location(0) vec4<f32> {
  return vec4(0);
}
`
  1. @fragment fn: Declares a fragment function, which determines the colour of a pixel during rendering.
  2. fs() -> @location(0) vec4<f32>: Specifies the function signature; it returns a 4D vector of 32-bit floating-point values.
  3. return vec4(0);: Returns a 4D vector with all components set to 0.

This is just an empty setup; we will now jump back to script.js.

script.js pt 2

WGSL is a high-level language for humans; we need to compile the shaders into a representation the render pipeline can consume. This representation is called a shader module.

const module = device.createShaderModule({
  label: 'Triangle vertex and fragment shaders',
  code
})

In the shaders file, notice how the WGSL contained multiple shaders, denoted by the @vertex and @fragment attributes. The module contains those shaders as defined.
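If a shader fails to compile, the module can tell you why. This optional snippet uses the standard getCompilationInfo() method to log diagnostics during development:

// Optional: log any WGSL compile errors/warnings while developing
const info = await module.getCompilationInfo()
for (const msg of info.messages) {
  console.warn(`${msg.type} at line ${msg.lineNum}: ${msg.message}`)
}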

Now that we have our shader module, we can create the render pipeline.

const pipeline = device.createRenderPipeline({
  label: 'Triangle pipeline',
  layout: 'auto',
  vertex: {
    module,
    entryPoint: 'vs',
  },
  fragment: {
    module,
    entryPoint: 'fs',
    targets: [{ format }],
  },
})

Notice how we reference the functions within our vertex and fragment shaders, vs and fs. We also pass in our preferred format as the target. layout: 'auto' automatically generates the render pipeline layout, which governs how buffers, textures, and other resources are organised and bound.

Now we need to instruct WebGPU on rendering our shader using a render-pass descriptor. The clearValue is the initial state, the value our target is cleared to. Below, we’re using Red: 0.1, Green: 0.8, Blue: 1.0, and Alpha: 1.0, creating a light blue colour.

  • loadOp: Action before rendering (clear)
  • storeOp: Action after rendering (store)

So on a render pass it will clear using the clearValue, then store the rendered value so we can preserve the change.

const renderPassDesc = {
  label: 'RenderPass Descriptor',
  colorAttachments: [{
    clearValue: [0.1, 0.8, 1, 1],
    loadOp: 'clear',
    storeOp: 'store',
  }]
}

We still can’t see anything, but we will soon see a blue background after submitting the command buffer.

context.getCurrentTexture().createView() creates a GPU view of the canvas’s current texture so the render pass can draw into it without conflicts.

The command-encoder allows the CPU to record commands for the GPU. The render-pass is created with the instructions from the render-pass-descriptor.

By setting the pipeline, we feed the render pass the shader module containing our vertex and fragment shaders.

pass.draw(3) invokes the vertex shader three times, once for each vertex of our triangle, then we pass.end() as that’s the end of the pass.

The encoder records all these commands on the CPU, instructing the GPU to execute specific operations. The command buffer is returned by encoder.finish() and submitted to the device queue to be actioned efficiently.

const renderer = () => {
  renderPassDesc.colorAttachments[0].view = context.getCurrentTexture().createView()

  const encoder = device.createCommandEncoder({ label: 'The encoder' })
  const pass = encoder.beginRenderPass(renderPassDesc)

  pass.setPipeline(pipeline)
  pass.draw(3)
  pass.end()

  device.queue.submit([encoder.finish()])
}

renderer()
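We only render a single frame here, but renderer is deliberately a function: if you later want continuous rendering, you could drive it with requestAnimationFrame. A sketch, not needed for this tutorial:

// Optional: re-render every frame instead of the single renderer() call
const loop = () => {
  renderer()                  // re-record and submit this frame's commands
  requestAnimationFrame(loop) // schedule the next frame
}
// loop()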

We should now see a solid light-blue canvas. This shows that our clearValue is working as the initial value. We need to fix our shaders to render the triangle.

shaders.wgsl.js pt 2

export default /* wgsl */ `

@vertex fn vs (
  @builtin(vertex_index) i : u32
) -> @builtin(position) vec4f {
  let pos = array(
    vec2f(0.0, 0.5),   // Top center vertex
    vec2f(-0.5, -0.5), // Bottom left vertex
    vec2f(0.5, -0.5),  // Bottom right vertex
  );

  return vec4(pos[i], 0.0, 1.0);
}

@fragment fn fs() -> @location(0) vec4f {
  return vec4f(1.0, 0.98, 0.0, 1.0);
}
`

Make the above changes to the shaders file. We have now created an array of 3 vec2f values (vectors with 2 floats). This should be fairly easy to understand since each vertex of a triangle has two coordinates.

In the return statement we expect a vec4f. Think of pos[i] like looping through a JavaScript array, or a list/slice in another programming language; each render-pass draw cycles through the vec2f coordinate pairs to produce the triangle, as sketched below.
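In plain JavaScript, the same indexing idea would look something like this (purely illustrative, not part of the tutorial code):

// Purely illustrative: what pos[i] means, in JavaScript terms
const pos = [[0.0, 0.5], [-0.5, -0.5], [0.5, -0.5]]
for (let i = 0; i < 3; i++) {
  const [x, y] = pos[i] // the pair the vertex shader sees for vertex i
}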

The 3rd component of the vec4 return value is 0 since we are only drawing in 2D space. The 4th component, w, acts as a divisor: clip-space x and y are divided by it, so changing it scales the triangle from the centre.
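For example, if you change only the return statement in the vertex shader, doubling w shrinks the triangle to half its size (a hypothetical tweak, not part of the final code):

return vec4(pos[i], 0.0, 2.0); // x and y are divided by w = 2.0, halving the triangle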

The fragment shader only affects the area covered by the vertex shader’s output. We have provided an opaque yellow colour value: 1.0, 0.98, 0.0, 1.0.

Now you should see a yellow triangle on the light-blue background.

This is the equivalent of “hello world”. You will need to dive deeper for more advanced usage; this is just a small introduction.
