Building a 3D Interactive with React and ThreeJS

Dhia Shakiry
Published in The Startup · 22 min read · Feb 14, 2020

Working in a museum as a web developer, I’ve had the opportunity to develop several touchscreen interactives with web technologies. I’d used ThreeJS before, but paired with jQuery and then AngularJS. With this project I switched to React as we were looking to use it for our web stack.

As the interactive was for a touring exhibition, it would need to be distributed and installed by third parties, so I also wanted to package it up with Electron for a better installation and improved error logging/recovery (stepping up from a local application running on Chrome in kiosk mode). I’ll cover the Electron side in a separate article as it’s largely independent from what’s described here and this will be a long read already :).

This detailed run-through will cover the decisions I took when mixing a Webgl scene with other content, balancing learning against the brief and what I was trying to achieve visually, as well as some of the gotchas I encountered along the way.

It assumes a basic knowledge of React and 3D scene concepts (3D models (meshes), positioning, camera, lights).

NOTE: React 16.8 had just come out as I was in the middle of this project last year, and the introduction of React hooks did lead to some confusion. In hindsight the hooks approach would have helped a fair bit by pushing me into building more stateless components; excessive prop and state usage was definitely slowing me down towards the end.

The brief

The finished application

The interactive presents the museum’s interpretation of 2 ancient marine reptiles (Liopleurodon and Plesiosaur). We had untextured 3D models for both, which were to be used as the central elements from which topics about each specimen could be explored, with animated transitions throughout. There would be video, image and text content for each topic.

The interactive also had to be multilingual, allowing the user to switch language at any point, with a simple process to add and update language variants.

Finally, as a public kiosk it would need to be stable and memory-leak free, an often overlooked requirement for typical web applications, where browser sessions last minutes rather than running constantly all day. Balancing that, we only had to support a single platform and didn’t need to worry about payload size or initialisation time.

We limited the hardware requirements to typical desktop PC hardware and a 16:9 touchscreen for client availability/costs, which meant the 3D scene and animations had to run adequately (60fps) without a dedicated GPU.

Project Initialisation

I bootstrapped with react-simple-boilerplate instead of the usual create-react-app; it’s a little more barebones, bringing only the crucial Webpack dev server with hot reload for localhost development. For the CSS, I used Sass in separate partial files as I was more familiar with it at the time than CSS-in-JS or CSS modules. Over the project’s duration I installed npm dependencies for loading JSON (axios), SVGs (icons), ThreeJS, an idle timer component and Electron.

final npm list (could be cleaned up!)

"devDependencies": {
"babel-core": "^6.24.1",
"babel-loader": "^7.0.0",
"babel-preset-es2015": "^6.24.1",
"babel-preset-react": "^6.24.1",
"babel-preset-stage-0": "^6.24.1",
"copy-webpack-plugin": "^5.0.0",
"css-loader": "^0.28.0",
"electron": "^6.0.0",
"electron-chromedriver": "~2.0.0",
"electron-builder": "^20.40.2",
"electron-is-dev": "^1.1.0",
"electron-log": "*",
"node-sass": "^4.5.2",
"react-idle-timer": "^4.2.5",
"sass-loader": "^6.0.3",
"svg-inline-loader": "*",
"svg-inline-react": "*",
"three": "^0.104.0",
"webpack": "^4.29.1",
"webpack-cli": "^3.2.3",
"webpack-dev-server": "^3.1.14"
},
"dependencies": {
"axios": "^0.18.0",
"babel-loader": "^7.0.0",
"electron-is-dev": "^1.1.0",
"fs": "*",
"is-electron": "^2.2.0",
"electron-log": "*",
"react": "^16.8.0",
"react-autobind": "*",
"react-dat-gui": "^3.0.0",
"react-dom": "^16.8.0",
"react-idle-timer": "^4.2.5",
"react-router-dom": "^4.3.1",
"react-transition-group": "^2.5.3",
}

ThreeJS integration approach

I’d read about a few options for integrating ThreeJS with React (react-three, react-three-fiber), however these seem to focus on leveraging React’s component style for ThreeJS scene building, in a similar way to how A-Frame does for HTML. I’d also read a few articles on using ‘raw’ ThreeJS with React, which appealed more for the reduced library overhead and direct control, being familiar with ThreeJS already.

Several of ThreeJS’s ‘common’ utilities (like 3D file loaders and camera controls) are not part of the core three module but live under /examples, so it’s important to expose these to Webpack alongside Three itself (I used the ProvidePlugin configuration below) so they can be imported where needed.

ThreeJS CTM file loader webpack configuration:

plugins: [
  new HtmlWebpackPlugin({
    template: './index.html'
  }),
  new webpack.ProvidePlugin({
    THREE: "three"
  }),
  new webpack.ProvidePlugin({
    CTM: "three/examples/js/loaders/ctm/ctm.js"
  }),
  new webpack.ProvidePlugin({
    LZMA: "three/examples/js/loaders/ctm/lzma.js"
  })
],

3D file format

As in my earlier projects, the 3D meshes to be used came from an external source in .OBJ or .FBX format. I’d encountered Chrome memory issues loading large and/or multiple meshes into a Webgl scene before, so the solution I default to is the OpenCTM file format, an ‘old school’ geometry compression format with a Python conversion tool that can get an .OBJ down to a few percent of its original size (e.g. 70MB to 1.8MB) with no noticeable loss in mesh detail. However the glTF format (JSON based) appears to be the new standard for 3D on the web (it can handle textures, animations and lighting data that CTM can’t). It would be interesting to compare its size and runtime unpack performance against CTM in an online Webgl project.

As the 2 models came from different sources, I first did some scaling and centering-to-origin for both meshes in Blender so they roughly matched each other before converting them to CTM. This meant I wouldn’t need to apply individual scaling and positioning to each when loading them into the scene.

React Application Overview

React code structure

With the project’s technical dependencies in place and the UX and content process coming along with the team, the React components could start being defined. We had 2 distinct routes for this SPA: an attractor screen showing a looping video, and the ThreeJS scene content (which handles both sets of models/content).

There were also the navigation component, the language selection panel and the content components, as well as annotations for the 3D content.

React context and reusable Webgl renderers

As the interactive would need to transition seamlessly through its content, I decided early on to simply hold most of the data it uses in memory instead of refetching on each user navigation (even though it was from local JSON files). This meant using React’s Context API for the application’s ‘global’ state (AppContext) so that it could be used by components where required.

I ended up storing more here than I planned (beyond the JSON objects holding configuration and language variants). Both Webgl renderers (the container for a 3D Webgl scene) were initialised on first load and added to the AppContext so they could be re-used as the user navigates between the 2 specimen routes. This was something I realised I needed because of how a browser initialises and deinitialises a Webgl renderer context, how that interplays with React’s component mounting/unmounting, and the fact I needed to transition cleanly between 2 Webgl renderers.

In previous projects I’d found managing multiple user-initiated transition animations easier with a single global ‘isAnimating’ property, which I use to block or ‘debounce’ user interaction that could cause unintended animation or state sync issues. I didn’t want parts of the UI to appear or update jarringly as different interactions triggered component updates. This global control state makes securing a UI against random user interaction more manageable and, applied sparingly and consistently, doesn’t frustrate the user.
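For illustration, the shape of that global state ended up something like the sketch below (the property names are illustrative rather than the project’s exact ones):

  // AppContext.js: a minimal sketch of the 'global' state described above (illustrative names)
  import React from "react";
  import * as THREE from "three";

  export const AppContext = React.createContext(null);

  export function createAppState() {
    return {
      // two renderers created once at startup and re-used across the specimen routes
      reusedRenderers: {
        specimenA: new THREE.WebGLRenderer({ antialias: true }),
        specimenB: new THREE.WebGLRenderer({ antialias: true })
      },
      // a single shared clock so both scenes' shader timings stay in sync
      sharedClock: new THREE.Clock(),
      // configuration, content structure and language variants loaded at startup
      config: null,
      content: null,
      languages: null,
      // the single global flag used to 'debounce' user interaction during transitions
      isAnimating: false
    };
  }

The root App component then provides this object (plus an update function for isAnimating) via an AppContext.Provider so any component can read the renderers or the animation flag.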

Initialisation sequence

  • The 2 (empty) webgl renderers are initialised and added to global AppContext.
  • The app synchronously loads the local JSON files that contain the interactive’s configuration (timeouts, 3D settings, enabled languages etc.), then the content structure and language variants. I used Axios as it has easy support for loading from file://.
  • Next, the two 3D model files (.CTMs) and some supporting shader and sprite files are loaded using ThreeJS’s helper loaders (CTMLoader, NodeMaterialLoader, TextureLoader).
  • The root App component does not render until all of the above has loaded and been validated. Once ready, the render simply returns the AppRouter component, which defaults to the attractor video route (a condensed sketch of this sequence is below).
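A condensed, illustrative sketch of that startup sequence follows; the file paths and helper names are assumptions, and it presumes Three’s CTMLoader from /examples is registered as THREE.CTMLoader (alongside the ProvidePlugin setup shown earlier):

  // loadAll.js: illustrative startup loader, not the project's exact code
  import axios from "axios";
  import * as THREE from "three";

  // wrap the callback-style CTMLoader in a promise (assumed signature: load(url, callback, parameters))
  const loadCTM = url =>
    new Promise(resolve => {
      new THREE.CTMLoader().load(url, geometry => resolve(geometry), { useWorker: false });
    });

  export async function loadAll() {
    // 1. configuration, content structure and language variants (local JSON over file://)
    const [config, content, languages] = await Promise.all([
      axios.get("./config/config.json"),
      axios.get("./config/content.json"),
      axios.get("./config/languages.json")
    ]);

    // 2. the two specimen meshes plus a supporting sprite texture
    const [meshA, meshB] = await Promise.all([
      loadCTM("./assets/models/specimenA.ctm"),
      loadCTM("./assets/models/specimenB.ctm")
    ]);
    const sprite = new THREE.TextureLoader().load("./assets/sprites/particle.png");

    // the root App only renders once this resolves and the data has been validated
    return { config: config.data, content: content.data, languages: languages.data, meshA, meshB, sprite };
  }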

Routing/Transitions

This was a fairly standard React router implementation, but I had to use React’s HashRouter instead of the more typical BrowserRouter as the latter does not work when running the application under the file:// protocol (which is the target environment, Electron).

The code excerpt below shows the react-transition-group implementation which handles the CSS ‘fade’ transition between the routes, and a dynamic ‘specimen’ route. Note the global_setIsAnimating which blocks route switching on navigation links during the route transition.

AppRouter.jsx

<TransitionGroup className="transition-group">
  <CSSTransition
    {...props}
    key={location.pathname}
    timeout={{ enter: 1100, exit: 1100 }}
    classNames="fade"
    onEnter={() => {
      props.global_setIsAnimating(true);
    }}
    onExited={() => {
      props.global_setIsAnimating(false);
    }}
  >
    <section className="route-section">
      <Switch location={location}>
        <Route exact path="/landing" component={Landing} />
        <Route
          exact
          path="/specimen3Droute/:specimen?"
          render={props => (
            <Specimen3DRoute
              key={props.match.params.specimen}
              specimenPath={props.match.params.specimen}
            />
          )}
        />
        <Redirect to="/landing" />
      </Switch>
    </section>
  </CSSTransition>
</TransitionGroup>

‘Fade’ transition scss (the suffixed classes get added by react-transition-group during the transitioned component’s mount/unmount)

$transition-time: 0.8s;
$transition-delay: 0.2s;

#transitionDiv {
  .fade-enter {
    opacity: 0;
  }
  .fade-enter.fade-enter-active {
    opacity: 1;
    transition: opacity $transition-time linear;
    transition-delay: $transition-delay;
  }
  .fade-exit {
    opacity: 1;
  }
  .fade-exit.fade-exit-active {
    opacity: 0;
    transition: opacity $transition-time linear;
    transition-delay: $transition-delay;
  }
}
Fade transition between 2 Webgl renderer routes: a single shared Three.Clock and NodeFrame passed to each renderer via appContext allows each scene’s animation timings (e.g. the caustics shader) to sync up.

The 3D scene route

This route component acts as a controller for the 3D scene component and the content slider panel component, which animates in when a 3D annotation is clicked. Many of the global ‘assets’ (renderers, language data) are passed to this component via the AppContext and are then passed down to the child components as render props. I used this setup as I didn’t want to access the AppContext’s global isAnimating state directly in the child components, preferring to have interaction events pushed up to the top-level route component from the children so that they could be handled in one place or pushed further up globally (again using AppContext variables or functions).
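A rough sketch of that controller pattern is below (component names and file paths are illustrative, and plain props stand in for the render props mentioned above):

  // Specimen3DRoute.jsx: illustrative sketch of the 'push interaction events up' pattern
  import React from "react";
  import { AppContext } from "./AppContext"; // hypothetical context module
  import Scene3DComponent from "./Scene3DComponent"; // paths illustrative
  import ContentSlider from "./ContentSlider";

  class Specimen3DRoute extends React.Component {
    static contextType = AppContext;
    state = { activeAnnotation: null };

    // children call this handler rather than touching global state themselves
    handleAnnotationSelect = annotationId => {
      if (this.context.isAnimating) return; // 'debounce' during transitions
      this.setState({ activeAnnotation: annotationId });
    };

    render() {
      const { reusedRenderers, languages } = this.context;
      return (
        <div className="specimen-route">
          <Scene3DComponent
            reusedRenderer={reusedRenderers[this.props.specimenPath]}
            onAnnotationSelect={this.handleAnnotationSelect}
          />
          <ContentSlider
            annotationId={this.state.activeAnnotation}
            language={languages}
          />
        </div>
      );
    }
  }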

Whilst I think the strategy was sound, it still proved difficult to keep track of individual state timings that acted like a sequence chain, so I couldn’t avoid a state hierarchy, which led to some closely coupled components. Stepping back and diagramming state flow with an aim to simplify it is definitely something I need to do more of.

ThreeJS / 3D scene component

The heart of the application, this rather complex React component initialises and renders the ThreeJS content and the Annotation component, implements the requestAnimationFrame loop that drives the Webgl rendering, and finally disposes of all initialised objects in the Webgl scene when it unmounts (on route change).

A much condensed version of the component’s scene logic is below (warning: incomplete code). Note the reused renderer but (re)initialised scene objects in componentDidMount, and the /examples module location for the standard (but not core) Three OrbitControls.

import React from "react";
import autoBind from "react-autobind";
import * as THREE from "three";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls.js";

class Scene3DComponent extends React.Component {
  constructor(props) {
    super(props);
    autoBind(this);
  }

  componentDidMount() {
    const width = this.threeContainer.clientWidth;
    const height = this.threeContainer.clientHeight;
    this.scene = new THREE.Scene();
    this.renderer = this.props.reusedRenderer; //from global context

    //initialise other scene objects (camera, meshes, shaders, lights etc.)
    this.controls = new OrbitControls(
      this.camera,
      document.querySelector("#ThreeJScomponent")
    );

    this.threeContainer.appendChild(this.renderer.domElement);
    this.start();
  }

  start = () => {
    if (!this.frameId) { //frame ticker
      this.frameId = requestAnimationFrame(this.animate);
    }
  };

  stop = () => {
    cancelAnimationFrame(this.frameId);
  };

  animate = () => {
    this.controls.update();
    this.renderScene(); //perform animation/shader updates here
    this.frameId = window.requestAnimationFrame(this.animate);
  };

  componentWillUnmount() {
    this.stop(); //stops the render loop
    this.sceneDestroy(); //uninitialise camera, lights, meshes etc.
  }

  render() {
    return (
      <div
        id="ThreeJScomponent"
        ref={threeContainer => { this.threeContainer = threeContainer; }}
      />
    );
  }
}

I separated out the ThreeJS ‘actors’ as individual class constructors that are imported into the React component for initialisation and loading into the ThreeJS scene. Some, like the camera and lights, are standard ThreeJS implementations. Others, like the specimen (which automatically positions and scales based on the camera viewport) or the terrain, do more and are passed the camera and scene references in their constructors.

3D scene classes

Additional elements like the particles, caustics and post processing shaders provide some underwater scene atmosphere and were rewarding to add.

Particles

Codepen provided the inspiration for this: thousands of randomly positioned (within the scene’s bounds) THREE.Vector3 objects (each signifying a point in 3D space) are instantiated and added to a parent 3D object. They are rendered with a sprite texture via the THREE.Points particle renderer, in this case a simple white spot .png.

Their Z axis position is updated frame by frame from the main Webgl requestAnimationFrame loop, with some randomness applied to each individual point to simulate drifting particles. Once a particle’s position passes the scene’s boundary, it is reset back to an off-camera starting position so it can loop back into the animation (a better approach than constantly generating new particle objects). Because they are all part of the same single geometry/3D object (an efficient way to render), their numbers had no real effect on the application’s framerate until it got into the millions (2,000 looked about right for the scene).

//particles
this.particleCount = 2000;
let moverGroup = new THREE.Object3D();
scene.add(moverGroup);
this.pGeometry = new THREE.Geometry();

for (let i = 0; i < this.particleCount; i++) {
  let vertex = new THREE.Vector3();
  vertex.x = 4000 * Math.random() - 2000;
  vertex.y = -700 + Math.random() * 700;
  vertex.z = 5000 * Math.random() - 2000;
  this.pGeometry.vertices.push(vertex);
}

let material = new THREE.PointsMaterial({
  size: 4,
  map: sprite,
  transparent: true,
  opacity: 0.5,
  blending: THREE.AdditiveBlending,
  alphaTest: 0.05
});

let particles = new THREE.Points(this.pGeometry, material);
particles.sortParticles = true;
moverGroup.add(particles);

//particles frame animation - called from main render loop
this.particleAnimate = () => {
  for (let i = 0; i < this.particleCount; i++) {
    this.pGeometry.vertices[i].z += 0.1;
    if (this.pGeometry.vertices[i].z > 2700) {
      this.pGeometry.vertices[i].z = -2000;
    }
  }
  this.pGeometry.verticesNeedUpdate = true;
};
200,000 particles: 60fps
2 million particles: 21fps

Shaders — Caustics and Post processing

Achieving the animated water ‘caustics’ effect on the terrain proved to be an exercise in persistence. Like most natural ‘effect’ animations in Webgl, they are generated by low-level vertex or fragment (pixel) shaders. Shaders are algorithms that transform the properties of either a 3D geometry’s vertices or the scene’s pixels. Fragment shaders work with any pixel input, meaning they can be used to create effects like distortion or smoke over other web components such as text and images. Their low-level nature and OpenGL origin means they are written in a C-like syntax, and there’s no easy way to debug them in the browser once they’re running, which can make working with them difficult.

There are several community Webgl shader sites (ShaderFrog, Shadertoy) where you can browse submitted shader code and play around with values to affect their output, a good way of learning some of the concepts (e.g. https://www.shadertoy.com/view/XttyRX). Shaders can be applied to individual materials or across the whole 3D viewport, and are often combined (so that multiple shader algorithms or ‘passes’ affect vertex or pixel properties in a chain each frame) to create more complex effects.

Once Javascript has initialised the shader code, the browser runs the shader directly on the GPU, so it can achieve impressive results even on mobile devices (if it’s a well-designed shader!). It’s a side of Webgl (and closer to general 3D graphics programming) that I hadn’t had exposure to, so I was keen to get a taste of it, as understanding it is the path to more sophisticated Webgl content generation.
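For context, this is roughly what a raw fragment shader looks like when applied as a ThreeJS material; a minimal illustrative example with a single time uniform (not the caustics shader itself):

  // a minimal THREE.ShaderMaterial with a time-driven fragment shader (illustrative)
  const pulseMaterial = new THREE.ShaderMaterial({
    uniforms: {
      uTime: { value: 0.0 }
    },
    vertexShader: `
      varying vec2 vUv;
      void main() {
        vUv = uv;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `,
    fragmentShader: `
      uniform float uTime;
      varying vec2 vUv;
      void main() {
        // simple animated banding, standing in for a 'real' caustics algorithm
        float band = 0.5 + 0.5 * sin(vUv.y * 40.0 + uTime * 2.0);
        gl_FragColor = vec4(vec3(0.2, 0.5, 0.7) * band, 1.0);
      }
    `
  });

  // in the render loop: pulseMaterial.uniforms.uTime.value = clock.getElapsedTime();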

Voronoi ‘fake caustics’ shader material

Whilst I was able to import caustics-style shaders as a material from sites like Shadertoy (skilling up to write my own shader was out of scope unfortunately), they didn’t keep the standard ThreeJS material properties like light reflection, colour and shadow mapping, instead outputting a pitch-black material with just the shader animation, not exactly useful when trying to create some realism. What was happening was that the custom shader was overwriting the default material behaviour (which is itself a shader implemented internally by ThreeJS), and the only solution would have been to write a custom material shader that combined both, again out of scope for me.

However ThreeJS provided the solution with its NodeMaterial functionality. This allows shaders to be combined in a modular, declarative fashion without having to rewrite the shader algorithms yourself. It’s still a work in progress for the Three community, but it has the potential to accelerate Webgl’s popularity among Javascript developers by providing a route to more sophisticated visuals. Webgl’s learning curve has a real jump once past the usual ball-and-light hello worlds, primarily because of the leap into shaders.

Using a ThreeJS example that implemented both NodeMaterial and a caustics-style shader (https://threejs.org/examples/webgl_materials_nodes.html), I combined it with another code example that procedurally generates (using Perlin noise) a random terrain mesh. The two come together in the pleasing gif above.

Lighting / dat.GUI

Setting up lights is one of the first things you encounter when learning ThreeJS. Whilst the concepts are easy to understand (a good overview: http://blog.cjgammon.com/threejs-lights-cameras), the interplay between their properties and the various material properties makes achieving satisfying lighting a real trial-and-error exercise. I ended up with a simple combination of ambient, directional and spot lights, providing overall gamma, diffuse lighting from above and an intense close-range light to pick up the specimen detail. ThreeJS’s scene fog was also added for a more underwater effect.

The trial-and-error process to get the right combination can be frustrating. I’d always seen control panels in ThreeJS examples and finally came across their implementation (it’s surprising how few ThreeJS tutorials mention it). Dat.GUI is a Google helper library that provides out-of-the-box on-screen UI elements (inputs, sliders, colour palettes) that you can bind to any variables in your application for real-time manipulation. This lets you tweak multiple properties that would normally need an application reload, improving the experimentation process.

Helpfully there’s a great React implementation (https://github.com/claus/react-dat-gui) that is straightforward to set up: it’s a React component that you configure with the various child UI types you need. Whenever a dat.GUI value is updated, your prop handler function is called and you can apply the update in your code where needed. Use dat.GUI’s starting state to set the UI elements to your application’s starting values. A simple console.log makes capturing the current state after several tweaks easier, so you can copy the values back into your application’s defaults.
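A minimal usage sketch, assuming react-dat-gui’s documented components (the bound lighting properties and the onLightingChange prop are just examples):

  import React from "react";
  import DatGui, { DatNumber, DatColor, DatBoolean } from "react-dat-gui";

  class LightingDebugPanel extends React.Component {
    state = { data: { spotIntensity: 1.2, spotColor: "#ffffff", fogEnabled: true } };

    handleUpdate = data => {
      this.setState({ data });
      this.props.onLightingChange(data); // push the new values into the ThreeJS scene
      console.log(data); // capture the tweaked values to copy back as defaults
    };

    render() {
      return (
        <DatGui data={this.state.data} onUpdate={this.handleUpdate}>
          <DatNumber path="spotIntensity" label="Spot intensity" min={0} max={5} step={0.05} />
          <DatColor path="spotColor" label="Spot colour" />
          <DatBoolean path="fogEnabled" label="Scene fog" />
        </DatGui>
      );
    }
  }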

‘Debug mode’: spotlight helper and dat.GUI panel to adjust lighting/material properties

Post-processing

The final part of my scene improvements led me to ThreeJS’s post-processing shader implementations. These include effects such as bloom, depth of field (DOF), film grain and vignettes, as well as anti-aliasing. These effects contribute to the overall ‘realism’ of a scene if implemented well, something I was keen to spend a bit of time on as I wasn’t happy with my simple lighting setup, which felt quite flat.

Left: no post-processing, Right: bloom, DOF blur and film grain

Three’s extensive examples made it fairly simple to add these various effects, and other implementations can be found in the community, so it pays to experiment. Each post-processing shader also has a range of input values that modify the effect. Since each effect is an individual shader, ThreeJS provides a mechanism to layer them so their output can be merged (in a similar way to NodeMaterial described earlier). Here they’re called ‘shaderPasses’ and they’re applied to the Webgl renderer via ThreeJS’s EffectComposer.

Each shader pass runs on every frame, and each has varying levels of GPU computational cost which will affect performance. I monitored the framerate with Chrome dev tools as each effect was added. I ended up leaving out the DoF shaders (really blur shaders of configurable intensity) as they were quite expensive and difficult to get right even though I liked the effect.

One thing to note is that when EffectComposer is used to render, you lose Three’s default anti-aliasing implementation. There are FXAA and MSAA (anti-aliasing) shader passes that can be added back to the shader composition, but they need to be manually configured for good results.

Composing shaderPasses:

renderPass = new THREE.RenderPass(scene, camera);
fxaaPass = new THREE.ShaderPass(THREE.FXAAShader);
hblur = new THREE.ShaderPass(THREE.HorizontalTiltShiftShader);
vblur = new THREE.ShaderPass(THREE.VerticalTiltShiftShader);
filmPass = new THREE.ShaderPass(THREE.FilmShader);
bloomPass = new THREE.UnrealBloomPass(
  new THREE.Vector2(window.innerWidth, window.innerHeight),
  1.5,
  0.4,
  0.97
);

let dpr = renderer.getPixelRatio();
let uniforms = fxaaPass.material.uniforms;
uniforms["resolution"].value.set(
  0.5 / (window.innerWidth * dpr),
  0.5 / (window.innerHeight * dpr)
);
fxaaPass.renderToScreen = true;

this.composer = new THREE.EffectComposer(renderer);
this.composer.addPass(renderPass);
this.composer.addPass(hblur);
this.composer.addPass(vblur); //? fps
this.composer.addPass(filmPass); //? fps
this.composer.addPass(bloomPass); //4-5fps
this.composer.addPass(fxaaPass); //4-5fps cost

Don’t forget to render the composer output in your render loop!

renderScene = () => {
  this.shader.composer.render(0.1);
};

3D Annotations

The last part of the ThreeJS functionality I’ll explore here is how the React annotation components keep their 3D position on the specimen as the scene camera moves. Being able to connect normal DOM elements with the scene camera’s position really throws up some great possibilities for interesting web interactions.

React annotations

I started off by using Three’s Raycaster to obtain the 3D scene position of each of the pins that the labels would attach to. The Raycaster can be thought of as an intersect line going from your 2D mouse position into the 3D space; it can output the 3D position where it intersects a scene element (in this case the specimen model). This meant I now had a set of 3D positions (points) saved in a configuration file for each annotation.
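Capturing those points looks roughly like the sketch below (illustrative names; the logged points are what get copied into the annotation configuration):

  // on click/tap, cast a ray from the pointer into the scene and log the hit point (illustrative)
  const raycaster = new THREE.Raycaster();
  const pointer = new THREE.Vector2();

  function capturePinPosition(event, camera, specimenMesh, domElement) {
    const rect = domElement.getBoundingClientRect();
    // convert pixel coordinates to normalised device coordinates (-1 to +1)
    pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
    pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

    raycaster.setFromCamera(pointer, camera);
    const intersects = raycaster.intersectObject(specimenMesh);
    if (intersects.length > 0) {
      // the 3D point on the specimen surface, saved into the annotation config
      console.log(intersects[0].point);
    }
  }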

The annotations themselves are React components that render the pins and text boxes as absolutely positioned divs (offset from each other), with the linking line drawn as a computed SVG path calculated from the pin’s and text box’s CSS values. However, there’s nothing yet in the CSS that links them to the 3D space, so they simply appear at the bottom of the page when loaded.

The clever bit is something I can again thank a Codepen contributor for, where I found an example that maps a CSS absolute position to a 3D vector point. With the annotation pins’ world positions already known, a function (called on every frame in the main render loop) projects these positions into the camera’s normalised screen space (the x/y ‘screen’ position). This is then used to derive absolute-position CSS values which are applied to each annotation element as simple inline CSS style updates.

This felt like it would be a very expensive browser operation (multiple CSS style rules being updated per frame), but the performance held up fine, on PC hardware at least. A necessary addition was the flip effect that can be seen in the GIF above whenever an annotation needs to move to the other side of the screen to prevent overlaps. This was achieved by applying a CSS class (left/right) that toggles a CSS offset any time an annotation’s world position crosses the Z-axis; the SVG connecting line needs to be recalculated at this point.

Annotation position update called from main render loop

updateScreenPosition = offset => {
  //loop through array of interaction points and update position
  let styleString;
  for (let i = 0; i < this.props.annotationPositions.length; i++) {
    let poi = new THREE.Vector3(
      this.props.annotationPositions[i].position3D.x,
      this.props.annotationPositions[i].position3D.y + offset,
      this.props.annotationPositions[i].position3D.z
    );
    poi.project(this.camera);
    poi.x = Math.round((0.5 + poi.x / 2) * this.renderer.domElement.width);
    poi.y = Math.round((0.5 - poi.y / 2) * this.renderer.domElement.height);

    styleString = "top: " + poi.y + "px; left: " + poi.x + "px;";
    this.annotations[i].setAttribute("style", styleString);
  }

  //z-axis test to reverse position of horizontal annotations (ic1,ic4)
  if (this.camera.position.z < 0 && !this.state.leftSideRotation) {
    this.setState({ leftSideRotation: true });
  } else if (this.camera.position.z >= 0 && this.state.leftSideRotation) {
    this.setState({ leftSideRotation: false });
  }
};

Three’s camera.position.distanceTo function could also be used to determine if a pin’s 3D world position goes behind the specimen mesh relative to the camera, allowing the opacity of the whole annotation element to be updated so it fades out when ‘obscured’. In the end though, our user testing showed this added confusion to the labels and wasn’t really necessary.
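For reference, that (ultimately unused) occlusion fade is essentially a per-frame distance comparison, along these lines (pinWorldPosition and specimenMesh are assumed variables):

  // fade an annotation when its pin is further from the camera than the specimen's centre (illustrative)
  const pinDistance = this.camera.position.distanceTo(pinWorldPosition);
  const specimenDistance = this.camera.position.distanceTo(specimenMesh.position);
  this.annotations[i].style.opacity = pinDistance > specimenDistance ? 0.2 : 1;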

Finally, the annotations (React components, remember) also had their own click handlers and CSS transition effects, as they needed to act as links to reveal the related content.

Content Slider

Content slider mount/unmount transition

We’re back in React world now, with each annotation button’s onClick event calling a prop handler to display its relevant content panel. We still wanted to keep the 3D scene central and visible, which meant this content would need to animate in as a child component (instead of a full page route/transition). React’s transition group was again used to control the animation as this component was mounted and unmounted with new content props, leaving enough time (by using the appContext isAnimating ‘debounce’ property to block interactions during transitions) to swap in the content for the selected topic.

Within the content panel there were different types of content for each topic; image and video components were easy enough, but the ‘quiz’ component had some extra state to handle the question and response reveal.

With the JSON content (organised by annotation ID) already loaded into appContext, the content panel receives the triggered annotation ID as a prop so it can render the correct content off-screen before its slide-in animation begins. A small CSS animation delay of 0.2 secs is something I typically use to give the component time to render before animating.

Language variants

A ‘current language’ JSON object in appContext was the data source for all components that rendered any content. When the language selection popup was used to change the language, the current language object was simply swapped and all components re-rendered automatically with the new text variant, as it was all passed through props.
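The switch itself is little more than swapping that object; a sketch with assumed names (here the handler lives on whichever component owns the AppContext value):

  // called from the language selection popup (illustrative names)
  setCurrentLanguage = languageCode => {
    const variant = this.state.languageVariants[languageCode]; // loaded from the language JSON at startup
    this.setState({
      currentLanguage: variant,
      isRTL: variant.rtl === true // right-to-left flag from the language configuration (see below)
    });
    // every component reading currentLanguage via props re-renders with the new text
  };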

sample content structure JSON

{
  "ID": "ic5",
  "type": "text",
  "caption": "How did they breathe?",
  "boxOffsetX": "400",
  "boxOffsetY": "200",
  "contentPanel": {
    "assetType": "video",
    "assetSrc": "./assets/video/ic5.mp4",
    "vidHeight": "540",
    "text1": "Ichthyosaurs couldn’t breathe underwater. Each dive into the deep sea could only last as long as a single breath.",
    "text2": ""
  }
}

Accompanying language variant JSON

{
  "caption": "تنفس الهواء",
  "contentPanel": {
    "text1": "تمامًا مثل الزواحف اليوم ، لم تستطع الإكثيوصورات التنفس تحت الماء. كل رحلة إلى أعماق البحار لا يمكن أن تستمر إلا في نفس الوقت.",
    "text2": ""
  }
}

With Arabic being a required language, the language configuration file also included a right-to-left flag. This was used to apply a right-to-left CSS class to text elements, as well as a few layout changes to accommodate the RTL layout.
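Applying it is then just a matter of toggling a class on the wrapper around the routed content (a sketch assuming an isRTL flag derived from that configuration), with the SCSS below handling the text direction:

  // wrapper around the routed content; 'isRTL' comes from the language configuration flag (illustrative)
  <div id="transitionDiv" className={isRTL ? "router RTL" : "router"}>
    <AppRouter />
  </div>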

.router.RTL {
  h1, h2, h3, p, span {
    direction: rtl;
    unicode-bidi: embed;
  }
}

Chrome bug #1

As can be seen in the content slider gif above, video elements are mounted and unmounted as the user navigates through the content (and this interactive would be running constantly all day). During my periodic memory monitoring and code cleanup sessions (using Selenium and Chrome webdriver to loop test interactions), I noticed the memory was slowly but surely increasing, and Chrome dev tools was reporting detached HTML elements as the source of the leak.

Being still fairly new to React, I was convinced it was something in my implementation (a previous AngularJS project also featured a lot of video element swapping without issue), but nothing I tried fixed it. Eventually I found a recent bug report on React’s Github, https://github.com/facebook/react/issues/15583 (unmounting components with HTML video elements was leaving detached HTML elements), which pointed the finger at a Chromium bug: https://bugs.chromium.org/p/chromium/issues/detail?id=969049. This was all happening as I was coming up to the project’s deadlines.

Someone had posted that loading the videos in an iframe could get round the bug. I definitely balked at the idea but had no choice but to try it. I modified the React video component to render out the iframe document and styled it responsively as best I could. The videos were different aspect ratios but I knew the dimensions, so I could include the height in the content’s JSON and size the iframe container to suit. Thankfully this rather hacky fix did resolve the memory leak.

componentWillMount() {
  this.vidHeight = "100%";
  if (this.props.vidHeight) {
    this.vidHeight = this.props.vidHeight + "px";
  }

  this.iframeHTML = "<style>body { margin: 0; border: none; overflow: hidden;}</style>";
  this.videoHTML = "<video style='width:100%;height:100%;' src=" + this.props.src + " id='" + this.props.id + "' loop='loop' preload='auto' autoplay muted></video>";
  this.iframeHTML = this.iframeHTML + this.videoHTML;
}

render() {
  return (
    <iframe
      frameBorder="0"
      seamless
      width="100%"
      height={this.vidHeight}
      srcDoc={this.iframeHTML}
    ></iframe>
  );
}

Chrome bug #2 (or was it Windows 10..)

Another bug surfaced whilst I was dealing with the above. Alongside memory testing I tend to do lots of manual touchscreen testing (I’ve not been able to easily replicate constant random multi-touch input through Selenium webdriver scripting, or the ‘school kids test’ as we call it). The interactive was crashing back to the desktop after a period of touch input (in both the Chrome and Electron environments), with no errors logged on either platform. Great.

I’ve not had a reason to look into a different way of debouncing and monitoring event inputs, as issues can usually be traced back to mistakes in code, but here I was stumped, particularly as I was still chasing bug #1. Luckily that got fixed, but this problem remained.

I could usually replicate the crash with lots of touch input on the 3D scene (basically mashing the screen with lots of fingers for a few minutes), so naturally I started looking at the version of ThreeJS’s camera controls I was using, trying a few different versions of the controls module.

I then moved on to using an older version of Electron (to roll back to an older version of Chromium), but still no luck. Finally (and luckily) I came across another Chrome bug report (I found it because I searched for ‘Elo’, the touchscreen brand we were using, and someone had helpfully replied with the term): https://bugs.chromium.org/p/chromium/issues/detail?id=874948.

It pointed to (pun partially intended) an error in Chrome’s touch move handling on Windows 10. All I needed was the fix posted: adding '--disable-features=PointerEventsForTouch' to the Chrome flags, something I could easily do in Electron’s configuration. Not long after (but after the project’s delivery date), the bug was fixed officially in a Chrome update.
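In Electron the same flag can be applied from the main process before the app is ready, for example:

  // Electron main process: apply the Chromium flag that works around the Windows 10 touch crash
  const { app } = require("electron");
  app.commandLine.appendSwitch("disable-features", "PointerEventsForTouch");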

Final thoughts

The main thing I took away from this project is that the web landscape is a difficult target for stable interactive software, with Chrome and Windows 10 being particularly troublesome recently. It still makes sense for us to stick with a web stack for interactive work purely from a resource/shared-skills perspective, but a locked-down Linux and Electron configuration is likely a better target environment to explore.

The basics of a templated interactive 3D viewer were achieved as well, with content entirely defined by structure and language JSON files. Dynamic routes and camera positioning meant different models and content could be swapped in relatively easily.

For React, the complex state management I ended up with was manageable but far from ideal. I’m currently working on a NextJS website build (Typescript and hooks) so it’s good to have the hooks comparison and lessons learnt.

Finally, with Webgl I’m always keen to push what I can do. Mesh animation, physics, more shaders (including volumetric lighting, something I really wanted for a proper ‘underwater’ feel) and better lighting and material setups are the new challenges I’d set myself where the opportunity arises.
