Experimenting with WebGL and Vue: How to animate subtitles?

Max Savonin
Published in Vue.js Developers
9 min read · Oct 17, 2019

Imagine you are singing karaoke or watching a movie with subtitles. Forget boring, static fonts: let's bring them to life with Vue.js!

There is a very interesting technology named WebGL, which renders 2D/3D graphics in the browser through a JS API. It makes it possible to create truly beautiful and powerful things. I use three.js to simplify interaction with WebGL, and Vue with vue serve to build a simple prototype. I chose Vue in case I need additional controls, and vue serve to simplify prototyping; it also makes it easy to use npm modules and is one of the simplest ways to integrate bundling. For now, I care less about code style than about testing comfort: at this point, the code is not resource-efficient and still needs optimization.

So, let’s start coding!

Libraries

First of all, let me describe the libs I use here. root.js and orbit.controls.js simplify scene creation and hold all the pieces together. bas.js moves most of the calculations into shaders, so the GPU does the work instead of the CPU, which removes some bottlenecks. Finally, three.extensions.js provides a few functions that compensate for mismatches between three.js versions.

I have one entry point (App.vue) and three JS files. index.js creates the root container (scene, camera, etc.) and manages the list of meshes, including their replacement and removal, which I need when the subtitles change. I initialize the root container in the constructor and attach its instance to the Main instance. I also use this.meshes to store all the meshes, in case I want to render several at the same time.
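To make that structure concrete, here is a hypothetical sketch of the Main container. The names (root, meshes, addAnimationMesh, removeAllMeshes) follow how they are used later in the article, but the internals are assumptions, and the Root and AnimationMesh stand-ins below only exist to keep the sketch self-contained; the real classes wrap three.js objects.

```javascript
// Minimal stand-ins so the sketch is self-contained (the real Root wraps
// a three.js scene, camera, and renderer; AnimationMesh is described below)
class Root {
  constructor() {
    const children = [];
    this.scene = {
      children,
      add: mesh => children.push(mesh),
      remove: mesh => children.splice(children.indexOf(mesh), 1)
    };
  }
}
class AnimationMesh {
  constructor(geometry, duration) {
    this.geometry = geometry;
    this.totalDuration = duration;
    this.time = 0;
  }
}

class Main {
  constructor(animationOptions, container) {
    this.options = animationOptions;
    this.root = new Root(container); // scene, camera, etc.
    this.meshes = {};                // keyed, so several meshes can coexist
  }
  addAnimationMesh(key, geometry, duration) {
    const mesh = new AnimationMesh(geometry, duration);
    this.meshes[key] = mesh;
    this.root.scene.add(mesh);
    return mesh;
  }
  removeAllMeshes() {
    Object.values(this.meshes).forEach(mesh => this.root.scene.remove(mesh));
    this.meshes = {};
  }
}
```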

Animation.mesh.js creates the meshes, geometries, and materials and prepares the calculations to be moved into the shaders, while animation.options.js contains the shaders themselves.

I will also create an AnimationMesh class, which extends THREE.Mesh and holds all the calculations. Besides, it has a "time" property, which is used to calculate the current progress inside the shaders.
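A sketch of that idea, with the "time" property forwarding its value straight to the uTime uniform (the minimal Mesh stand-in below only keeps the snippet self-contained; the real class extends THREE.Mesh, and the getter/setter shape is an assumption):

```javascript
// Minimal stand-in for THREE.Mesh, only to keep this sketch self-contained
class Mesh {
  constructor(geometry, material) {
    this.geometry = geometry;
    this.material = material;
  }
}

class AnimationMesh extends Mesh {
  constructor(geometry, material, totalDuration) {
    super(geometry, material);
    this.totalDuration = totalDuration;
    this._time = 0;
  }
  // "time" drives the shaders: writing it pushes the value into uTime,
  // the only uniform that changes from frame to frame
  get time() {
    return this._time;
  }
  set time(value) {
    this._time = value;
    if (this.material && this.material.uniforms) {
      this.material.uniforms.uTime.value = value;
    }
  }
}
```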

Geometry

First, I need to calculate the object's width and height. Even though these are 3D animations, for now I will animate only the front side to keep the prototype simple. I am using TextGeometry here, but any Geometry-based class will do.

To calculate the width and height of the object, I use the following approach:

const box = new THREE.Box3().setFromObject(
  new THREE.Mesh(prefabGeometry, new THREE.MeshNormalMaterial())
);
// getSize expects a target vector in recent three.js versions
const size = box.getSize(new THREE.Vector3());
const width = size.x;
const height = size.y;

Now, I have to split my geometry into separate chunks (faces). This function splits the 3D object into many triangles:

BAS.Utils.separateFaces(prefabGeometry);

The next chunk of code creates a geometry with all the methods needed to work with the object's faces.

const geometry = new BAS.ModelBufferGeometry(prefabGeometry, {
  // setting this to true will store the vertex positions relative to the face they are in
  // this way, it is easier to rotate and scale faces around their own center
  localizeFaces: true,
  // setting this to true will store a centroid for each face in an array
  computeCentroids: true
});
// buffer UVs so the textures are mapped correctly
geometry.bufferUvs();

Now, I need to create an animation delay for each chunk in order to control its position in the shader.

const aDelayDuration = geometry.createAttribute('aDelayDuration', 2);
// these values control the animation delay and duration of each face;
// they are experimental, so change them and see what happens
const minDuration = 0.8;
const maxDuration = 1.2;
const maxDelayX = 0.9;
const maxDelayY = 0.125;
const stretch = 0.11;
const totalDuration = maxDuration + maxDelayX + maxDelayY + stretch;
for (let i = 0, offset = 0; i < geometry.faceCount; i++) {
  const centroid = geometry.centroids[i];
  const duration = THREE.Math.randFloat(minDuration, maxDuration);
  // delay is based on the position of each face within the original plane geometry
  // because the faces are localized, this position is available in the centroids array
  const delayX = THREE.Math.mapLinear(centroid.x, -width * 0.5, width * 0.5, 0.0, maxDelayX);
  let delayY;
  // create a different delayY mapping based on the animation phase (in or out)
  if (animationPhase === 'in') {
    delayY = THREE.Math.mapLinear(Math.abs(centroid.y), 0, height * 0.5, 0.0, maxDelayY);
  } else {
    delayY = THREE.Math.mapLinear(Math.abs(centroid.y), 0, height * 0.5, maxDelayY, 0.0);
  }
  // store the delay and duration FOR EACH VERTEX of the face
  for (let j = 0; j < 3; j++) {
    // by giving each VERTEX a different delay value, the face will be 'stretched' in time
    aDelayDuration.array[offset] = delayX + delayY + (Math.random() * stretch * duration);
    aDelayDuration.array[offset + 1] = duration;
    offset += 2;
  }
}

In short, I iterate over all the faces, take each centroid, and calculate a duration and delay for each face. In the shader, aDelayDuration becomes a vec2, which is why I fill the array two elements at a time. A vec2 is a two-dimensional vector: a data structure that holds two related values, such as 2D coordinates, and lets me pass two variables at once.

Then, I create the aStartPosition and aEndPosition attributes. For now, I use the centroid of each face for both, so the animation starts and ends at the same position; this can easily be changed later.

geometry.createAttribute('aStartPosition', 3, (data, i) => {
  geometry.centroids[i].toArray(data);
});
geometry.createAttribute('aEndPosition', 3, (data, i) => {
  geometry.centroids[i].toArray(data);
});

Animation

I use five different random animations to vary the effect, and most of them follow a cubic Bézier path. So I need two control points to calculate the path of each face.
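For reference, the cubic_bezier shader chunk evaluates the standard cubic Bézier polynomial. Here is a scalar JavaScript mirror of that formula, per coordinate, just to illustrate what the GPU computes for each face as tProgress runs from 0 to 1:

```javascript
// Scalar mirror of the cubicBezier() shader chunk: Bernstein form of a
// cubic Bézier curve through p0..p1 with control points c0, c1, at t in [0, 1]
function cubicBezier(p0, c0, c1, p1, t) {
  const u = 1 - t;
  return u * u * u * p0
       + 3 * u * u * t * c0
       + 3 * u * t * t * c1
       + t * t * t * p1;
}

// at t = 0 the face sits at its start position, at t = 1 at its end position;
// the control points only bend the path in between
console.log(cubicBezier(0, 10, -10, 5, 0)); // 0
console.log(cubicBezier(0, 10, -10, 5, 1)); // 5
```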

const aControl0 = geometry.createAttribute('aControl0', 3);
const aControl1 = geometry.createAttribute('aControl1', 3);
const control0 = new THREE.Vector3();
const control1 = new THREE.Vector3();
const data = [];
for (let i = 0; i < geometry.faceCount; i++) {
  const centroid = geometry.centroids[i];
  // the logic to determine the control points is completely arbitrary
  const signY = Math.sign(centroid.y);
  control0.x = THREE.Math.randFloat(0.1, 0.3) * -10;
  control0.y = signY * THREE.Math.randFloat(0.1, 0.3) * 0;
  control0.z = THREE.Math.randFloatSpread(50);
  control1.x = THREE.Math.randFloat(0.3, 0.6) * -10;
  control1.y = -signY * THREE.Math.randFloat(0.3, 0.6) * 0;
  control1.z = THREE.Math.randFloatSpread(-50);
  if (animationPhase === 'in') {
    control0.subVectors(centroid, control0);
    control1.subVectors(centroid, control1);
  } else { // out
    control0.addVectors(centroid, control0);
    control1.addVectors(centroid, control1);
  }
  // store the control points per face
  // this is similar to THREE.PrefabBufferGeometry.setPrefabData
  geometry.setFaceData(aControl0, i, control0.toArray(data));
  geometry.setFaceData(aControl1, i, control1.toArray(data));
}

Material

Let's create the material. Pay attention to the commented-out code: you can use the map variable in case you need to animate each vertex depending on texture colors.

// const texture = new THREE.Texture();
// texture.minFilter = THREE.NearestFilter;
const materialOptions = {
  vertexColors: THREE.VertexColors,
  // material parameters/flags go here
  flatShading: true,
  transparent: true,
  // custom uniform definitions
  uniforms: {
    // uTime is updated every frame and is used to calculate the current animation state
    // it is the only value that changes, which is why we can animate so many objects at the same time
    uTime: { value: 0 },
    textureImage: null,
  },
  // uniform *values* of the material we are extending go here
  uniformValues: {
    // map: texture,
    // textureImage: new THREE.TextureLoader().load('logo.svg'),
    canvasResolution: {
      x: window.innerWidth,
      y: window.innerHeight,
    },
    metalness: 0.5,
    roughness: 0.5
  },
  ...shaderOptions,
};
const initialOptions = materialOptions;
const material = new BAS.StandardAnimationMaterial(materialOptions);

Functions

Here goes the main part of index.js. Note the resetMaterial function, which is used later to reset the material shaders or specific options. And here comes the tastiest part of the project: the shader options (animation.options.js). I use them to set the trajectory and color changes of each face.
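resetMaterial itself is not listed in the article, but based on how it is called later, it presumably merges the stored initial options with per-animation overrides and rebuilds the material. A hypothetical sketch, where makeMaterial stands in for new BAS.StandardAnimationMaterial(options):

```javascript
// Hypothetical sketch: rebuild the material from the initial options plus
// overrides (e.g. a new vertexPosition chunk). makeMaterial is a placeholder
// for new BAS.StandardAnimationMaterial(options).
const initialOptions = {
  transparent: true,
  vertexPosition: ['/* default trajectory */']
};
const makeMaterial = options => ({ options }); // placeholder factory

function resetMaterial(mesh, overrides) {
  // later keys win, so an override replaces the matching initial option
  const options = { ...initialOptions, ...overrides };
  mesh.material = makeMaterial(options);
  return mesh.material;
}
```

The shallow merge is enough here because each shader chunk is a top-level key of the options object.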

The first field is vertexFunctions. I add reusable shader functions here.

vertexFunctions: [
  BAS.ShaderChunk['ease_cubic_in_out'],
  BAS.ShaderChunk['ease_quad_out'],
  BAS.ShaderChunk['ease_back_in'],
  BAS.ShaderChunk['ease_elastic_in_out'],
  BAS.ShaderChunk['quaternion_rotation'],
  BAS.ShaderChunk['cubic_bezier'],
  BAS.ShaderChunk['ease_out_cubic']
],

The next field is vertexParameters. There are three types of parameters in the code: uniform, attribute, and varying. Attributes live at the geometry level and describe per-face data; uniforms live at the material level and hold values that are the same for all faces at any given moment; varyings are shader-level variables used to pass values from the vertex shader to the fragment shader.

vertexParameters: [
  'uniform float uTime;',
  'attribute vec2 aDelayDuration;',
  'attribute vec3 aStartPosition;',
  'attribute vec3 aEndPosition;',
  'attribute vec4 aAxisAngle;',
  'attribute vec3 aControl0;',
  'attribute vec3 aControl1;',
  'varying float tProgress;'
],

uTime is tied to our mesh's time field and used to calculate positions. tProgress is the progress variable, while the attributes have already been described above.

The next field is vertexInit. I define variables available to all the chunks there.

vertexInit: [
  'float tDelay = aDelayDuration.x;',
  'float tDuration = aDelayDuration.y;',
  'float tTime = clamp(uTime - tDelay, 0.0, tDuration);',
  'tProgress = tTime / tDuration;'
],

The clamp function limits a value to a range, so that the progress of each face never drops below 0% or exceeds 100%.
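A quick JavaScript mirror of what that vertexInit line computes (clamp is a GLSL built-in; the helper below reproduces its behavior):

```javascript
// JS mirror of the GLSL clamp() built-in used in vertexInit
const clamp = (x, min, max) => Math.min(Math.max(x, min), max);

const tDelay = 0.25;   // example per-face delay
const tDuration = 1.0; // example per-face duration

// before the face's delay has passed, its local time stays at 0 (progress 0%)
console.log(clamp(0.1 - tDelay, 0.0, tDuration)); // 0
// after the animation window, local time sticks at tDuration (progress 100%)
console.log(clamp(2.0 - tDelay, 0.0, tDuration)); // 1
```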

vertexPosition calculates the position of each face at every moment of the animation. Later in the code, I will replace it with one of five random values, so for now I show only the initial value.

vertexPosition: [
  `float scl = tProgress * 2.0 - 0.5;
  transformed *= scl * scl;
  transformed += cubicBezier(aStartPosition, aControl0, aControl1, aEndPosition, tProgress);`
],

transformed is a three.js shader vector that contains the position of the current vertex.

fragmentParameters are used to describe fragment shader variables.

fragmentParameters: [
  'vec3 vecDataF;',
  'vec3 aStartPositionF;',
  'uniform sampler2D textureImage;',
  'uniform vec2 canvasResolution;',
  'varying float tProgress;'
],

I use fragment shader to set the color of each face in the next field.

fragmentMap: [
  `
  vec2 uv = gl_FragCoord.xy / canvasResolution.xy;
  float dist = 1.0;
  diffuseColor = vec4(.5, uv.x, tProgress, 1.0);
  `
]

gl_FragCoord is a built-in variable that contains the window-space coordinates of the current fragment. diffuseColor is a three.js shader variable that describes the surface color under lighting.

App.vue

Now, let's look at my entry point, App.vue.

Keep in mind: do not store anything related to three.js in Vue data. It can cause a huge memory leak, because Vue keeps references to the objects, so the garbage collector cannot clear them.
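A minimal sketch of that pattern, assuming the component property names used below: keep only plain serializable state in data(), and attach three.js objects as non-reactive instance properties instead.

```javascript
// Sketch: only plain values go through data(); three.js instances are
// assigned in a lifecycle hook, so Vue's reactivity system never walks them
const componentOptions = {
  data() {
    return {
      srtPrevIndex: null // plain, serializable state only
    };
  },
  created() {
    this.main = null;     // Main instance, assigned in mounted(), non-reactive
    this.mainMesh = null; // current AnimationMesh, non-reactive
  }
};
```

Properties assigned outside of data() are invisible to Vue's observer, so the scene graph is never wrapped in reactive getters and setters.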

When the component gets mounted, I instantiate the Main class to interact with all the stuff and load font to display text as 3D geometry.

mounted() {
  this.main = new Main(AnimationOptions, this.$refs.container);
  const loader = new THREE.FontLoader();
  loader.load("fonts/helvetiker_regular.typeface.json", font => {
    this.loadedFont = font;
    this.startPlay();
  });
  this.onTrackPlaying();
}

Then, I start the requestAnimationFrame loop (onTrackPlaying). In this function, I find the current subtitle based on the audio's currentTime and calculate the current mesh time, which drives the in-shader progress.

onTrackPlaying() {
  if (!this.$refs.audio) {
    requestAnimationFrame(this.onTrackPlaying);
    return;
  }
  const currentTime = this.$refs.audio.currentTime;
  // we use lodash here
  const currentElem = _.find(
    this.srtJSON,
    x => x.start <= currentTime && x.end >= currentTime
  );
  if (!currentElem) {
    requestAnimationFrame(this.onTrackPlaying);
    return;
  }
  const srtCurrIndex = currentElem.id;
  if (this.srtPrevIndex !== srtCurrIndex) {
    this.setTextTexture(
      currentElem.text,
      currentElem.end - currentElem.start
    );
    this.srtPrevIndex = srtCurrIndex;
  }
  const diff = currentTime - currentElem.start;
  // we use 75% of the time to be sure the animation can finish right on time
  const duration = (currentElem.end - currentElem.start) * 0.75;
  this.mainMesh.time = (this.mainMesh.totalDuration / duration) * diff;
  requestAnimationFrame(this.onTrackPlaying);
},

I call this.startPlay once the font is loaded. It loads the subtitles and attaches the track to our audio control.
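startPlay itself is not listed in the article; a hypothetical sketch of its subtitle-loading half could look like the following. The parseSrt helper and its input format are assumptions, handling the minimal "index / timing / text" SRT block layout and producing the { id, start, end, text } entries that onTrackPlaying searches through (this.srtJSON):

```javascript
// Hypothetical sketch: parse raw SRT text into { id, start, end, text }
// entries, with start/end converted to seconds for comparison with currentTime
function parseSrt(raw) {
  const toSeconds = stamp => {
    // SRT timestamps look like 00:00:01,500 (hours:minutes:seconds,millis)
    const [h, m, rest] = stamp.split(':');
    const [s, ms] = rest.split(',');
    return (+h) * 3600 + (+m) * 60 + (+s) + (+ms) / 1000;
  };
  return raw.trim().split(/\n\s*\n/).map(block => {
    const [id, timing, ...text] = block.split('\n');
    const [start, end] = timing.split(' --> ').map(toSeconds);
    return { id: +id, start, end, text: text.join(' ') };
  });
}

const sample = '1\n00:00:01,000 --> 00:00:03,500\nHello world\n\n' +
               '2\n00:00:04,000 --> 00:00:06,000\nSecond line';
const subs = parseSrt(sample);
```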

I also have a setCurrentText function here. It removes the previous mesh, chooses one of the random shader trajectory calculations, and centers the text on the screen.

setCurrentText(text, duration) {
  this.main.removeAllMeshes();
  this.mainMesh = this.main.addAnimationMesh(
    "current",
    new THREE.TextGeometry(text, {
      font: this.loadedFont,
      size: 5,
      height: 0,
      style: "normal",
      bevelSize: 1,
      bevelThickness: 0.03,
      bevelEnabled: false,
      anchor: { x: 0.5, y: 0.0, z: 0.5 }
    }),
    duration
  );
  const trajectoryList = [
    `
    float scl = tProgress * 2.0 - 0.5;
    transformed *= scl * scl;
    transformed +=
      cubicBezier(aStartPosition, aControl0, aControl1, aEndPosition, tProgress);
    `,
    `
    float scl = tProgress * 2.0 - 0.7;
    transformed *= easeQuadOut(scl);
    transformed += easeQuadOut(tProgress) * aEndPosition;
    `,
    `
    float scl = tProgress * 2.0 - 0.9;
    transformed *= scl * scl;
    transformed +=
      cubicBezier(aStartPosition, aControl0, aControl1, aEndPosition, tProgress);
    `,
    `
    float scl = tProgress * 2.0 - 0.7;
    transformed *= easeQuadOut(scl);
    transformed +=
      cubicBezier(aStartPosition, aControl0, aControl1, aEndPosition, tProgress);
    `
  ];
  // pick an index in [0, trajectoryList.length]; the one out-of-range index
  // keeps the initial trajectory, giving five possible animations in total
  const randomTrajectory =
    trajectoryList[Math.floor(Math.random() * (trajectoryList.length + 1))];
  if (randomTrajectory) {
    this.mainMesh.resetMaterial({
      vertexPosition: randomTrajectory
    });
  }
  const camera = this.main.root.camera;
  camera.lookAt(this.mainMesh.position);
  this.mainMesh.position.x -= this.mainMesh.sizes.width / 4;
  camera.position.z = 150;
  // this.main.restartAnimation(); // could be used with GSAP (GreenSock)
}

What goes next?

To run the project, visit the GitHub profile of my friend and colleague, go to the folder "js", and run two commands:

npm install -g @vue/cli-service-global 
vue serve

The first command installs a global package that allows serving Vue apps for fast prototyping; the second runs the project. Then go to http://localhost:8080/, and if the track does not start playing, just click the play button on the audio control.

This is just a small snapshot of what you will have in the end.

For those who do not want to implement the code themselves, I have filmed a short clip of the animation created with this code. To watch a video of the project in action, contact us or email me at maxim.savonin@keenethics.com.

This is a simple example of subtitles animated with Vue.js. Do you have other interesting tips or lessons to share?

The article was originally published at Bits and Pieces.


CEO at KeenEthics, your ethical web and mobile development partner.