Neon: A WebGL Installation

Active Theory
Active Theory Case Studies
Dec 4, 2017

The best part of 3D graphics development is that there is an endless amount to learn. Every new technique is a rabbit hole of discovery where new effects build on old ones. It truly feels like imagination is the only limiting factor.

At the heart of every Active Theory project is a passion for creative technology, so it’s important to balance client work with experimentation. We had been working on two new techniques that came together and inspired us to build an interactive installation of neon tubes.

The build target was a high-end graphics PC powering a projector at our year-end holiday party. We created and iterated quickly in our Hydra framework, compiled to a Windows desktop app with Electron, which allows for native integration with a Kinect.

The concept for the installation was simple: create visually interesting neon tube graphics, generated by user input and projected at our event. The small twist was that the ambient mode visuals, shown when no one was interacting via Kinect, would be controlled by our friends from around the world playing with the same code on a web link.
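The post doesn’t go into how that remote input was wired up; here is a minimal sketch of the idea, assuming a simple WebSocket relay (the endpoint, message shape, and ambientMode flag are illustrative, not the actual implementation):

// On the installation, subscribe to positions relayed from web visitors
// (the relay endpoint and message format are made up for this sketch).
const remote = new WebSocket('wss://example.com/neon-relay');

remote.addEventListener('message', (event) => {
    // Only let remote visitors drive the visuals when no one is at the Kinect.
    if (!ambientMode) return;
    const { x, y } = JSON.parse(event.data);
    system.release(new Vector3(x, y, 0));
});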

A full HD loop of the projector output

The Tech

GPU-based particle systems are powerful, allowing large numbers of particles to animate via complex math and produce stunning effects. The tradeoff is that these systems are typically initialized once and then run as a simulation; it’s difficult to control the lifecycle of an individual particle.

On the other hand, CPU-based systems allow maximum flexibility but are much more limited in the number of particles and the amount of math that can be processed.

We built a layer on top of our existing CPU and GPU systems, combining them to get the best of both: control over the spawn point and life of a particle, plus the sheer computational power of the GPU. While the system is complicated, the underlying idea is straightforward.

At any time in JavaScript, we can spawn a particle with a single line of code:

system.release(new Vector3(x, y, z))
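In practice those calls are driven by input, for example a pointer on the web version or a tracked hand from the Kinect. A hedged sketch of that wiring (unproject stands in for whatever screen-to-world mapping the scene uses; it is not part of the system):

// Hypothetical wiring: spawn a particle wherever the pointer moves.
window.addEventListener('pointermove', (e) => {
    const worldPos = unproject(e.clientX, e.clientY); // assumed helper
    system.release(new Vector3(worldPos.x, worldPos.y, worldPos.z));
});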

Underneath the abstraction, this gets pushed to a WebWorker thread where our CPU-based ParticleEngine system is running. ParticleEngine allows particles to be spawned and applies behaviors throughout their life cycle. In our case, we only need to keep track of the original position where a particle was spawned and a value decreasing from 1 to 0 representing the life of that particle. It looks something like this:

applyBehavior = function(p) {
    //*** Wait two frames before starting to decrease life. This gives the GPU a frame to see the initial state and set its position.
    ++p.activeFrame;
    if (p.activeFrame > 2) {
        p.activeValue = 1;
        p.life -= DECAY * _this.decay;
    }

    if (p.life <= 0) {
        //Reset the particle
    }
}

In the code above you’ll notice a property called activeValue, which represents the state of the particle. It tells the GPU which phase the particle is in, such as the first frame of initialization or the end of its life.

This data is streamed back to the main thread as a Float32Array using transferable objects, where it updates the geometry of the GPU system that actually calculates the particle positions.

function bufferDownload(key, value) {
    if (key == 'active') {
        let active = _attributes.active.buffer;
        let count = value.length / 4; //4 floats per particle: x, y, z and the active/life value
        for (let i = 0; i < count; i++) {
            let texIndex = i * _segments;
            let x = value[i * 4 + 0];
            let y = value[i * 4 + 1];
            let z = value[i * 4 + 2];
            let a = value[i * 4 + 3];

            if ("The system is in the initialization state") {
                active[texIndex * 4 + 0] = x;
                active[texIndex * 4 + 1] = y;
            }

            if ("The particle is alive") {
                active[texIndex * 4 + 2] = z;
            }

            active[texIndex * 4 + 3] = a; //Always set the life property in the w value of the vec4
        }

        _attributes.active.needsUpdate = true;
    }
}
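For reference, the worker side of that hand-off is just a postMessage call that lists the underlying ArrayBuffer as transferable, so the data moves between threads without being copied. A minimal sketch; the message shape and particle fields are illustrative rather than Hydra’s actual protocol:

// Inside the WebWorker, after ParticleEngine has updated every particle this frame.
// particles and particleCount are assumed to be the engine's internal state.
const stream = new Float32Array(particleCount * 4); // x, y, z and the active/life value per particle

for (let i = 0; i < particleCount; i++) {
    const p = particles[i];
    stream[i * 4 + 0] = p.origin.x;
    stream[i * 4 + 1] = p.origin.y;
    stream[i * 4 + 2] = p.origin.z;
    stream[i * 4 + 3] = p.life;
}

// The second argument transfers ownership of the buffer to the main thread instead of copying it.
self.postMessage({ key: 'active', value: stream }, [stream.buffer]);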

On the GPU, this data is read and utilized:

if ("The particle is inactive") {
gl_FragColor = pos;
return;
}

if ("The system is in the initialization state") {
pos.xyz = activePos.xyz; //activePos is the origin of the particle
gl_FragColor = pos;
return;
}

if ("The particle is alive") {
//Move the pos here
}

This system describes how the “head” of the neon tube moves and animates throughout its life. But what about the rest of the tube?

The resulting GPGPU texture from the code above is passed into an InstancedBufferGeometry mesh which actually renders the tubes. This one is even more complicated, but the underlying idea is that within the GPGPU texture each pixel, representing a particle, knows where it lives within the chain of particles that makes up the tube.
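The mesh setup itself isn’t shown in the post, but in three.js terms the idea is an InstancedBufferGeometry where each instance is one tube and a per-instance attribute tells the vertex shader which part of the GPGPU texture to read. A hedged sketch; the geometry, attribute, and uniform names are assumptions:

// One base tube geometry (e.g. a thin extruded line), instanced once per neon tube.
const geometry = new THREE.InstancedBufferGeometry().copy(baseTubeGeometry);

// Each instance stores the index of its chain so the vertex shader knows which
// set of particle positions to look up in the GPGPU texture.
const chainIndex = new Float32Array(tubeCount);
for (let i = 0; i < tubeCount; i++) chainIndex[i] = i;
geometry.setAttribute('chainIndex', new THREE.InstancedBufferAttribute(chainIndex, 1));

const material = new THREE.ShaderMaterial({
    uniforms: { tPositions: { value: gpgpuPositionTexture } },
    vertexShader: tubeVertexShader,    // reads tPositions using chainIndex and the vertex's position along the tube
    fragmentShader: tubeFragmentShader // emissive neon color
});

scene.add(new THREE.Mesh(geometry, material));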

This image illustrates the GPGPU texture: each color is a separate tube, and the number within each pixel indicates where that particle fits within the tube. When a particle is activated, the pixel at index 0 is updated with the origin and lifecycle data from above. The other pixels are processed as “followers”, each trying to catch up to the one before it.

void main() {
    float CHAIN = "Color from the image above";
    float LINE = "Number from the image above";

    if ("Head of the chain") {
        //Move the particle when appropriate
    } else {
        vec3 followPos = "Data from the pixel one to the left";

        float headIndex = getIndex(LINE, 0.0); //Find the head of the chain

        if ("The head is not active") {
            gl_FragColor = pos;
            return;
        }

        if ("The head is being initialized") {
            pos.xyz = HEAD_POSITION.xyz;
            gl_FragColor = pos;
            return;
        }

        pos.xyz += (followPos - pos.xyz) * lerp; //The particle is alive and is always trying to reach the position of the particle ahead of it
    }

    gl_FragColor = pos;
}

The Visuals

Knowing we would be rendering on a dedicated graphics machine with a Titan X graphics card, we cranked up the visuals in ways we normally can’t when targeting mobile devices.

We start with a base building facade geometry; UVs are generated from the normals, and the lighting is actually just a basic texture sampled with those UVs.

vec2 getUV(vec3 dNormal, float start, float end) {
    //the geometry spans roughly -5..5 in x/z and 0..20 in y in world space; start and end define the output UV range
    vec2 uv0 = vec2(0.0);
    uv0.x = range(vPos.z, -5.0, 5.0, start, end);
    uv0.y = range(vPos.y, 0.0, 20.0, start, end);

    vec2 uv1 = vec2(0.0);
    uv1.x = range(vPos.x, -5.0, 5.0, start, end);
    uv1.y = range(vPos.z, -5.0, 5.0, start, end);

    vec2 uv2 = vec2(0.0);
    uv2.x = range(vPos.x, -5.0, 5.0, start, end);
    uv2.y = range(vPos.y, 0.0, 20.0, start, end);

    vec3 n = normalize(dNormal);
    //choose from the above uvs depending on the direction of the normal. Use functions to avoid if statements!
    vec2 uv = mix(uv0, uv1, min(when_gt(abs(n.y), abs(n.x)), when_gt(abs(n.y), abs(n.z))));
    uv = mix(uv, uv2, min(when_gt(abs(n.z), abs(n.x)), when_gt(abs(n.z), abs(n.y))));

    return uv;
}
The base lighting just comes directly from sampling the texture in the top right

From there, the tubes are isolated and rendered to their own RenderTarget, which is blurred. That blur is added back in the compositing pass to make the basic lines in the image above look like neon tubes.
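A rough sketch of that glow pass in three.js terms (the render targets, blurPass, and compositeMaterial here are assumed names, not the actual compositor):

// 1. Render only the neon tubes into their own target.
renderer.setRenderTarget(tubesTarget);
renderer.render(tubesScene, camera);

// 2. Blur that target, e.g. with separable horizontal and vertical gaussian passes (blurPass is an assumed helper).
blurPass.render(renderer, tubesTarget.texture, blurTarget);

// 3. In the composite pass, add the blurred tubes on top of the sharp render so
//    each thin line picks up a soft neon glow.
compositeMaterial.uniforms.tGlow.value = blurTarget.texture;
renderer.setRenderTarget(null);
renderer.render(compositeScene, orthoCamera);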

The depth buffer is used to create a focal depth-of-field blur, and the final pass treats the scene with a color mask blended via a linear burn to create the final image.

vec3 blendLinearBurn(vec3 base, vec3 blend) {
    return max(base + blend - 1.0, 0.0);
}

void tint(inout vec4 texel) {
    float noise = range(snoise(vUv + (uTime*0.05)), -1.0, 1.0, 0.0, 1.0);
    vec3 color = mix(uColor0, uColor1, noise);
    texel = mix(texel, vec4(blendLinearBurn(texel.rgb, color), 1.0), uColorBlend);
}

Small details put the finishing touches on the render: a vignette and basic noise, a light RGB-shift chromatic aberration, and a mirror reflection on the ground where the texture lookup is distorted with a normal map.
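As one example of those touches, the RGB-shift chromatic aberration in the final pass might look something like this (a sketch, not the production shader; the uniform and shader names are assumptions):

const aberrationMaterial = new THREE.ShaderMaterial({
    uniforms: { tDiffuse: { value: null }, uShift: { value: 0.0015 } },
    vertexShader: quadVertexShader, // assumed standard full-screen quad shader that passes vUv through
    fragmentShader: `
        uniform sampler2D tDiffuse;
        uniform float uShift;
        varying vec2 vUv;
        void main() {
            vec2 dir = vUv - 0.5; // the shift grows toward the edges of the frame
            float r = texture2D(tDiffuse, vUv + dir * uShift).r;
            float g = texture2D(tDiffuse, vUv).g;
            float b = texture2D(tDiffuse, vUv - dir * uShift).b;
            gl_FragColor = vec4(r, g, b, 1.0);
        }
    `
});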

Each pass coming together to comprise the output
