3D Data Visualization with React and Three.js
--
At the end of 2019, we had a hack week at Cortico where all of the technical staff got together to explore new ideas without any requirements or limitations. I ended up in a small team with NLP expert Doug Beeferman and one of our technical fellows, Aneesh Naik. They were going to experiment with new approaches to analyzing our Local Voices Network conversation data, including applying BERT for semantic clustering and using DeepMoji to explore emotional content. My role was to build a fun UI to explore the data, which is what we'll walk through today.
Here’s what I ended up making:
In this post, I’ll go over how to create something similar to what I did using React, Three.js, and react-three-fiber as the magical library connecting the two.
Table of Contents
- Step 1 — Getting Started (CodeSandbox)
- Step 2 — Rendering the Data Points (CodeSandbox)
- Step 3 — Using InstancedMesh for 100,000 points (CodeSandbox)
- Step 4 — Computing Different Layouts (CodeSandbox)
- Step 5 — Animating Positions (CodeSandbox)
- Step 6 — Adding Interactivity: Selecting a Point (CodeSandbox)
- Step 7 — Bloom: Adding a Glow Effect (CodeSandbox)
- Step 8 — Finishing Touch: Resetting the Camera (CodeSandbox)
- Please Say This is The End
Step 1 — Getting Started
There’s nothing special you have to do to get started that you wouldn’t do for any other React project. For me, that means using create-react-app.
$ npx create-react-app r3f-demo
$ cd r3f-demo
$ npm install --save react-three-fiber three
The first thing to do is draw a simple cylinder with some lights and a trackball controller to move the camera around. This will get all the basic scaffolding we need to start playing with Three.js in React-land.
Just like any other React app, we’re going to be making components to do this. Let’s start with two:
- ThreePointVis — The root 3D component with the Canvas
- Controls — The component that allows us to control the camera with the mouse or trackpad
First, we’ll update App to include our new ThreePointVis component.
Note: I’ll be using screenshots of code here but will include CodeSandboxes along the way with full working code at each step.
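Here's a rough sketch of what the updated App looks like (the surrounding markup and class names are placeholders, not the original code):

```jsx
// App.js: render our new 3D component (markup and class names are placeholders)
import React from "react";
import ThreePointVis from "./ThreePointVis";
import "./App.css";

export default function App() {
  return (
    <div className="App">
      <div className="vis-container">
        <ThreePointVis />
      </div>
    </div>
  );
}
```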
Our initial ThreePointVis will render a cylinder and some lights while also including Controls to enable us to move the camera around.
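In sketch form it looks something like this (the light colors and intensities are just values to start tweaking from):

```jsx
// ThreePointVis.js: a cylinder, some lights, and camera controls
import React from "react";
import { Canvas } from "react-three-fiber";
import Controls from "./Controls";

const ThreePointVis = () => {
  return (
    <Canvas camera={{ position: [0, 0, 5] }}>
      <Controls />
      {/* we need lights or we won't see anything */}
      <ambientLight color="#ffffff" intensity={0.1} />
      <hemisphereLight color="#ffffff" groundColor="#080820" intensity={1.0} />
      {/* position and rotation go on the mesh, not the geometry */}
      <mesh position={[0, 0, 0]} rotation={[Math.PI / 2, 0, 0]}>
        <cylinderBufferGeometry attach="geometry" args={[0.5, 0.5, 0.15, 32]} />
        <meshStandardMaterial attach="material" color="#fff" />
      </mesh>
    </Canvas>
  );
};

export default ThreePointVis;
```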
Let’s break down what we’re seeing here before moving on to Controls.
First, we have a Canvas component, which is the base component we need to use react-three-fiber. It takes a few props, but we'll only specify the camera here to keep it simple. You'll note that since we're in 3D, all positions take a three-dimensional tuple: [x, y, z] (ooOOooOoo z, fancy), so to place the camera away from our cylinder, we can place it at (0, 0, 5), or 5 "units" away from the origin on the Z-axis. These units don't correspond to pixels or other standard units of measure; they just specify a position in the arbitrary coordinate space we'll be building our vis in. The middle of the screen, by default, is at (0, 0, 0).
Next, we add a few lights with simple props specifying their colors and how bright they are. You can read about AmbientLight and HemisphereLight in the Three documentation. Be careful to note that the initial letter in these tags is lowercase: <hemisphereLight> not <HemisphereLight>— they won’t work otherwise (and same for all Three components when using react-three-fiber). In the world of 3D, we need lights or we won’t be able to see anything, so don’t forget them!
Finally we create a cylinder object in our scene through the use of a <mesh> containing a geometry and a material. You’ll see that we apply positional transformations at the <mesh> level, not to the geometry itself, something I always mix up. As with the lights, you can find details about all available parameters in the Three docs: Mesh, CylinderBufferGeometry, MeshStandardMaterial. I’ll point out two important parts:
- The args prop corresponds to what is provided to the constructor of these objects. For CylinderBufferGeometry, this means we are specifying radiusTop (0.5), radiusBottom (0.5), height (0.15), and radialSegments (32).
- The attach prop specifies how these children components should be connected to their parent (the mesh in this case). See the react-three-fiber docs for more info.
Lastly, we add Controls to get some mouse control of the visualization. Most commonly this is OrbitControls, but here I’ve opted for TrackballControls, which I liked better for this UI after trying both. (For an example of a similar data vis using OrbitControls see my NBA 3D experimental app).
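Here's a sketch of the Controls component. The dynamicDampingFactor value is just a starting point, and the mouseButtons remap only works on Three versions where TrackballControls supports it, so treat that part as optional:

```jsx
// Controls.js: wrap TrackballControls from the Three examples directory
import React, { useRef } from "react";
import * as THREE from "three";
import { extend, useThree, useFrame } from "react-three-fiber";
import { TrackballControls } from "three/examples/jsm/controls/TrackballControls";

// make <trackballControls /> available as a JSX element
extend({ TrackballControls });

const Controls = () => {
  const controlsRef = useRef();
  const { camera, gl } = useThree();

  // the controls need to update every frame for the camera to actually move
  useFrame(() => controlsRef.current && controlsRef.current.update());

  // remap left-click to pan so it feels like a slippy map (two-finger or
  // alt/option drag rotates). NOTE: this assumes a Three version where
  // TrackballControls has mouseButtons; remove it if yours doesn't.
  const mouseButtons = {
    LEFT: THREE.MOUSE.PAN,
    MIDDLE: THREE.MOUSE.DOLLY,
    RIGHT: THREE.MOUSE.ROTATE,
  };

  return (
    <trackballControls
      ref={controlsRef}
      args={[camera, gl.domElement]}
      dynamicDampingFactor={0.1}
      mouseButtons={mouseButtons}
    />
  );
};

export default Controls;
```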
A couple things to note about this code:
- We are using TrackballControls, which lives in the examples directory in Three, so we have to extend() Three to include it for it to work with react-three-fiber.
- We need to use the useFrame() hook to have the camera update every frame based on the state of the controls.
- I've overridden the default behavior to have left-click pan instead of rotate since I want it to act like a slippy map. Two-finger drag or alt/option drag will rotate on a trackpad.
- The dynamicDampingFactor gives the controls some momentum and makes it feel a bit more natural.
With all these components set up, we have a little playground with a 3D cylinder ready to build out. Let’s check it out!
Step 2 — Rendering the Data Points
Now that we have the basic structure of our app in place, let’s generate some data and render it on screen. Our goal will be to have a little cylinder for each data point.
First, we generate the data (or you could load your own!) and pass it as a prop to our vis component. We’ll start with 1000 data points, of the form
[{ id: 0 }, { id: 1 }, { id: 2 }, ..., { id: 999 }]
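Generating that in App is a one-liner:

```js
// App.js: generate placeholder data and pass it down as a prop
const data = new Array(1000).fill(0).map((d, id) => ({ id }));

// ... later, in the render:
// <ThreePointVis data={data} />
```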
Nothing too fancy there, just normal React things. Now, over in the ThreePointVis component, we need to render a cylinder for each point, so we’ll map over the data prop and return a mesh for each entry. We’ll also move the camera back to 40 on the Z-axis so we can see the data.
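In sketch form, ThreePointVis now looks something like this (the spacing constant is just something reasonable to start with):

```jsx
// ThreePointVis.js: one cylinder mesh per data point, wrapping every 30 items
const ThreePointVis = ({ data }) => {
  return (
    <Canvas camera={{ position: [0, 0, 40] }}>
      <Controls />
      <ambientLight color="#ffffff" intensity={0.1} />
      <hemisphereLight color="#ffffff" groundColor="#080820" intensity={1.0} />
      {data.map((datum, i) => (
        <mesh
          key={datum.id}
          position={[(i % 30) * 1.05, Math.floor(i / 30) * 1.05, 0]}
          rotation={[Math.PI / 2, 0, 0]}
        >
          <cylinderBufferGeometry attach="geometry" args={[0.5, 0.5, 0.15, 32]} />
          <meshStandardMaterial attach="material" color="#fff" />
        </mesh>
      ))}
    </Canvas>
  );
};
```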
Here we set an arbitrary number of items to wrap at (30), which gives us a grid of cylinders. Looks just like normal React, right? This is the beauty of react-three-fiber at work.
This is working pretty well here at 1000 data points, but what about 10,000 or 100,000? How far can we go? On my machine, 100k basically kills my browser with this approach. But we’re living in the wonderful world of GPUs, so we should definitely be able to get up there, we’ll just have to modify our approach.
To get 100,000 points with high performance, we need to switch from Mesh to InstancedMesh. As with all things performance-related, this will get a bit more complex as we head down this path, but in the end it's really not too bad. Let's see how it all shakes out next.
Step 3 — Using InstancedMesh for 100,000 points
We need more power! And it turns out we’ve had it inside us all along.
The main difference you need to know about Mesh vs InstancedMesh is that with InstancedMesh we have one big root object that contains a transformation matrix for each instance (or data point in our case) that we want to render, whereas with Mesh we have individual objects each with their own matrices. Behind the scenes InstancedMesh lets the GPU be smarter about how it renders the geometry and as a result reduces the number of (slow) draw calls we need to make to see our scene.
So instead of setting our positions and rotations directly on Mesh objects, we need to update matrices that represent these positions and rotations. Luckily, Three provides a number of mechanisms that make this relatively painless once you’ve got an example to follow.
The first step is to tear out our familiar {data.map(...)} React code and replace our <mesh> with an <instancedMesh>.
At this point, however, I prefer to extract the rendering of the points into their own component before our ThreePointVis component gets too complex. Let’s call it InstancedPoints. We can update ThreePointVis to use it, just like any normal React component:
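The ThreePointVis render then becomes roughly:

```jsx
// ThreePointVis.js: swap the data.map(...) meshes for the new component
<Canvas camera={{ position: [0, 0, 40] }}>
  <Controls />
  <ambientLight color="#ffffff" intensity={0.1} />
  <hemisphereLight color="#ffffff" groundColor="#080820" intensity={1.0} />
  <InstancedPoints data={data} />
</Canvas>
```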
Now, let’s see the complete InstancedPoints component and break it down.
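Here's a sketch of what it looks like:

```jsx
// InstancedPoints.js: render all points with a single InstancedMesh
import React, { useRef, useEffect } from "react";
import * as THREE from "three";

// a single re-usable Object3D that does the matrix math for each instance
const scratchObject3D = new THREE.Object3D();

const InstancedPoints = ({ data }) => {
  const meshRef = useRef();
  const numPoints = data.length;

  // set the transform matrix for each instance when the number of points changes
  useEffect(() => {
    const mesh = meshRef.current;
    if (!mesh) return;

    for (let i = 0; i < numPoints; ++i) {
      // re-use the 30-column layout from earlier
      const x = (i % 30) * 1.05;
      const y = Math.floor(i / 30) * 1.05;
      const z = 0;

      scratchObject3D.position.set(x, y, z);
      scratchObject3D.rotation.set(0.5 * Math.PI, 0, 0); // cylinders face the camera
      scratchObject3D.updateMatrix();
      mesh.setMatrixAt(i, scratchObject3D.matrix);
    }

    mesh.instanceMatrix.needsUpdate = true;
  }, [numPoints]);

  return (
    <instancedMesh
      ref={meshRef}
      args={[null, null, numPoints]}
      frustumCulled={false}
    >
      <cylinderBufferGeometry attach="geometry" args={[0.5, 0.5, 0.15, 32]} />
      <meshStandardMaterial attach="material" color="#fff" />
    </instancedMesh>
  );
};

export default InstancedPoints;
```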
Instead of mapping over the data points, we tell the InstancedMesh how many instances there are (via the args prop) then set the individual positions and rotations for each instance in the useEffect hook. When the number of data points changes, we re-compute the layout for them (in this case, re-using the 30 column layout from earlier). We make use of scratchObject3D to do the hard matrix math for us instead of setting prop values.
A couple of things to note:
- You must set the needsUpdate flag to true with mesh.instanceMatrix.needsUpdate = true in order for your changes to be rendered on screen.
- I sneakily added frustumCulled={false} to the InstancedMesh. Without it, it seems Three will remove all instances from the screen when [0, 0, 0] (more or less) is off the screen. There may be other, smarter ways around this, but this is what worked for me.
With that, we’re all set up to use InstancedMesh! Let’s see how it does at 100,000 points by updating our data generation in App.js.
const data = new Array(100000).fill(0).map((d, id) => ({ id }));
Note it’s laggy for me when nested in CodeSandbox but works fine when popped out on its own. I’ll use 10,000 for the sandboxes here, but try it out on your own!
All right, we can efficiently render a ton of points as fancy 3D meshes now, so let's try putting them in different layouts besides the 30-column layout we've been using thus far.
Step 4 — Computing Different Layouts
In this step, we’re going to add support for rendering the points in two different layouts: a square grid and an Archimedean spiral. Since you may end up with a lot of different layouts, I find it convenient to put the layout code into its own file: layouts.js.
We’ll borrow some practices from layouts used by d3 where the actual positions of each data point are stored on the data itself. There are other ways, but this has worked well for me and is efficient. This differs from how we have done it up until this point where we just computed the (x, y, z) tuples for the instances right where we updated the instance matrices.
Grid Layout
We'll start by writing the function that assigns the (x, y, z) values to each data point:
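A sketch of the grid layout (the function name here is just what I'm calling it):

```js
// layouts.js: assign (x, y, z) to each point to form a square grid centered on the origin
export function gridLayout(data) {
  const numPoints = data.length;
  const numCols = Math.ceil(Math.sqrt(numPoints));
  const numRows = numCols;

  for (let i = 0; i < numPoints; ++i) {
    const col = (i % numCols) - numCols / 2;
    const row = Math.floor(i / numCols) - numRows / 2;

    // * 1.05 adds a modest amount of spacing between points
    data[i].x = col * 1.05;
    data[i].y = row * 1.05;
    data[i].z = 0;
  }
}
```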
Note we use * 1.05 as a simple means of providing a modest amount of spacing between the points. We offset by -numCols / 2 and -numRows / 2 to center the grid at the origin (0, 0).
Next, we’ll create a useLayout hook that will apply this layout when the data changes.
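Something like this works (the file name and lookup structure are my own choices):

```js
// useLayout.js: run the layout function for the current layout whenever data changes
import { useEffect } from "react";
import { gridLayout } from "./layouts";

const LAYOUT_FNS = { grid: gridLayout };

export function useLayout({ data, layout = "grid" }) {
  useEffect(() => {
    const layoutFn = LAYOUT_FNS[layout];
    if (layoutFn) {
      // mutates the data, writing x, y, z onto each datum
      layoutFn(data);
    }
  }, [data, layout]);
}
```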
As we add more layouts, the hook will grow, but this structure will keep it reasonably easy to work with.
Lastly, we need to call this hook from our InstancedPoints component and update the code that sets the instance matrix values to read from the data instead of computing positions directly.
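Roughly:

```jsx
// InstancedPoints.js: positions now come from the layout hook via the data
useLayout({ data, layout: "grid" });

useEffect(() => {
  const mesh = meshRef.current;
  if (!mesh) return;

  for (let i = 0; i < data.length; ++i) {
    const { x, y, z } = data[i];

    scratchObject3D.position.set(x, y, z);
    scratchObject3D.rotation.set(0.5 * Math.PI, 0, 0);
    scratchObject3D.updateMatrix();
    mesh.setMatrixAt(i, scratchObject3D.matrix);
  }

  mesh.instanceMatrix.needsUpdate = true;
}, [data]);
```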
The main change here is calling useLayout and reading the results from the data: const { x, y, z } = data[i]. With that, we've got a square grid working.
Spiral Layout
Let's try something a bit more fun: a spiral layout. Everyone loves spirals, even if they're not particularly useful in this context. It's okay to have fun every now and then, right?
Here’s the algorithm I use for the spiral which puts points at equal distances along the way.
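Here's a sketch of that approach; the exact constants may differ from what I used, but the factors mentioned below are left in place:

```js
// layouts.js: place points at (roughly) equal distances along a spiral
export function spiralLayout(data) {
  let theta = 0;

  for (let i = 0; i < data.length; ++i) {
    // radius grows with sqrt(i) so the rings don't bunch up
    const radius = Math.max(1, Math.sqrt(i + 1)) * 1; // play with this factor
    // step theta so consecutive points end up about the same distance apart
    theta += Math.asin(1 / radius) * 0.8; // play with this factor too

    data[i].x = radius * Math.cos(theta);
    data[i].y = radius * Math.sin(theta);
    data[i].z = 0; // try i * 0.05 for a weird tunnel effect
  }
}
```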
I'd be lying if I said I totally understood everything that was happening in here, but thanks to the wonders of the internet, you too can get equidistant points on a spiral without knowing exactly how it works. I left in some factors on radius (* 1) and theta (* 0.8) that you can play with to adjust how tightly the spiral is wound and how close together the points are.
I recommend adding something fun to the z position to get weird tunnel-like effects (try datum.z = i * 0.05).
To use it, we update our useLayout hook to take the value “spiral”:
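All that takes is registering the new layout function in the hook's lookup:

```js
// useLayout.js: add the spiral layout to the lookup
import { gridLayout, spiralLayout } from "./layouts";

const LAYOUT_FNS = { grid: gridLayout, spiral: spiralLayout };
```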
Then set the layout in InstancedPoints accordingly:
useLayout({ data, layout: 'spiral' });
So we've got two layouts; how about swapping between them?
Toggling Between Layouts
This is another part where react-three-fiber really shines: we can do the normal types of interactivity and declarative programming we love while working with 3D components. Let’s add a toggle that switches between our two layout options.
First, we’ll add some state to App and pass in layout as a prop to ThreePointVis.
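A sketch of that (the button markup is just a simple take on a toggle):

```jsx
// App.js: keep the current layout in state and pass it down
const [layout, setLayout] = useState("grid");

return (
  <div className="App">
    <div className="controls">
      <button onClick={() => setLayout("grid")}>Grid</button>
      <button onClick={() => setLayout("spiral")}>Spiral</button>
    </div>
    <ThreePointVis data={data} layout={layout} />
  </div>
);
```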
Now we just need to pass the layout prop through ThreePointVis to InstancedPoints and update our instance-matrix-updating useEffect hook to have layout as a dependency.
And with that, we are now able to swap between layouts. Check it out below!
Swapping is cool and all, but you know what’s cooler? Animation. Let’s tackle that next.
Step 5 — Animating Positions
Inspired by projects like Microsoft SandDance and Google Facets, our next step is to get these cylinders flying around the screen. To do so, we're going to use react-spring, which gives us organic-looking animations while also letting us do most of the animating outside of React, giving us better performance. On my machine, I'm able to get smooth animations at 100,000 points with this approach.
Before we begin, I’ve got to give a shout-out to Paul Henschel. He created both react-spring and react-three-fiber. It’s definitely worth following him on Twitter as he’s always doing something cool.
Remembering Source and Target Positions
Ok so let’s get started! The first thing we need to know to animate is where we’re going (the target position) and where we’re coming from (the source position). To accomplish this, we’re just going to jam more properties on to the data objects themselves because I’m lazy and it works. You could also do something involving associative lists without tarnishing your data, but sometimes it just feels good to be bad.
There are a lot of ways of doing this. We’re going to wrap it up in a small hook called useSourceTargetLayout.
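Here's a sketch of that hook:

```js
// useSourceTargetLayout.js: remember where points start (source) and where
// the new layout puts them (target)
import { useEffect } from "react";
import { useLayout } from "./useLayout";

export function useSourceTargetLayout({ data, layout }) {
  // before the layout runs, save the current position (or the origin) as the source
  useEffect(() => {
    for (const datum of data) {
      datum.sourceX = datum.x || 0;
      datum.sourceY = datum.y || 0;
      datum.sourceZ = datum.z || 0;
    }
  }, [data, layout]);

  // run the layout, which writes new x, y, z values onto each datum
  useLayout({ data, layout });

  // after the layout runs, save its output as the target position
  useEffect(() => {
    for (const datum of data) {
      datum.targetX = datum.x;
      datum.targetY = datum.y;
      datum.targetZ = datum.z;
    }
  }, [data, layout]);
}
```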
Note: It might be better to have just a single useEffect here and refactor parts of useLayout for reuse, but this seems to work too.
All we’re doing is storing the current position, or (0, 0, 0) if there isn’t one, as the source position. Then we run the layout which will set (x, y, z) on each data point, which is finally copied as the target position.
Interpolation Between Source and Target
Next up, we need the ability to interpolate between these two positions. The basic idea is: given a progress value between 0 and 1, we want 0 to be the source position, 1 to be the target position, and 0.5 to be halfway between them. This is generally solved by the formula (1 - t) * source + t * target, where t is the progress value. In many libraries you'll see this called "mix" or "lerp" (for linear interpolation).
Our interpolation function interpolateSourceTarget looks as follows:
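It's just the lerp formula applied to every point:

```js
// layouts.js: linearly interpolate each point between its source and target
// positions based on progress (0 = source, 1 = target)
export function interpolateSourceTarget(data, progress) {
  for (const datum of data) {
    datum.x = (1 - progress) * datum.sourceX + progress * datum.targetX;
    datum.y = (1 - progress) * datum.sourceY + progress * datum.targetY;
    datum.z = (1 - progress) * datum.sourceZ + progress * datum.targetZ;
  }
}
```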
We iterate through all data points linearly interpolating the (x, y, z) values to be between source and target based on the progress value (0 ≤ progress ≤ 1).
Animating with a Spring
Now we need to actually call our interpolator and get these cylinders flying around. First, we’ll need to add react-spring as a dependency:
$ npm install --save react-spring
Then we’ll create a useAnimatedLayout hook that will use a spring to interpolate between source and target positions when the layout changes. Here’s the code:
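Here's a sketch of the hook (the spring config numbers and import paths are my own, so adjust to taste):

```js
// useAnimatedLayout.js: animate points from their source to target positions
import { useRef } from "react";
import { useSpring } from "react-spring/three";
import { useSourceTargetLayout } from "./useSourceTargetLayout";
import { interpolateSourceTarget } from "./layouts";

export function useAnimatedLayout({ data, layout, onFrame }) {
  // compute source + target positions for the current layout
  useSourceTargetLayout({ data, layout });

  // keep track of the previously seen layout so we know when to reset the spring
  const prevLayout = useRef(layout);
  const layoutChanged = prevLayout.current !== layout;
  prevLayout.current = layout;

  // animate animationProgress from 0 to 1, restarting whenever the layout changes
  const animProps = useSpring({
    animationProgress: 1,
    from: { animationProgress: 0 },
    reset: layoutChanged,
    config: { mass: 1, tension: 120, friction: 50 },
    onFrame: ({ animationProgress }) => {
      // interpolate positions outside of the React reconciler for performance
      interpolateSourceTarget(data, animationProgress);

      // let the caller update the instanced mesh matrices
      onFrame({ animationProgress });
    },
  });

  return animProps;
}
```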
Let’s break it down.
- We import from 'react-spring/three', not just react-spring. I think in this case it doesn't matter, but if you end up using the animated or a imports (e.g. <a.mesh ... />) then you'll likely need the /three version.
- We run our useSourceTargetLayout hook, which will run our layout and remember source and target positions as described above.
- The useSpring hook animates animationProgress from 0 to 1.
- We use a ref to keep track of the previously seen layout value so we can compare it with the current value and “reset” the spring animation when it changes, forcing it to re-run interpolating animationProgress from 0 to 1. I’m no react-spring expert by any means, so there may be better ways, but this worked for me.
- We use useSpring::onFrame to get a callback for each tick of the animation where we can update our positions by calling the interpolateSourceTarget function. We then call the user-provided onFrame callback to indicate the data has been updated. Note this happens without the React reconciler getting involved, keeping the performance high.
With all of this, we now have some numbers changing, but we still haven't integrated it with our InstancedMesh to actually see the animation happening, so let's do that next.
Update the InstancedMesh Matrices
The final piece of getting animation to work is hooking up our useAnimatedLayout hook to our InstancedPoints component. Recall that previously we called useLayout({ data, layout }). We'll replace that with useAnimatedLayout({ data, layout }), but we'll also have to provide an onFrame argument that handles updating the instance matrices. Let's first extract the code we had in the useEffect hook in InstancedPoints into a helper function, updateInstancedMeshMatrices:
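Pulled out, it looks roughly like this:

```js
// InstancedPoints.js: write each datum's current (x, y, z) into the instance matrices
const scratchObject3D = new THREE.Object3D();

function updateInstancedMeshMatrices({ mesh, data }) {
  if (!mesh) return;

  for (let i = 0; i < data.length; ++i) {
    const { x, y, z } = data[i];

    scratchObject3D.position.set(x, y, z);
    scratchObject3D.rotation.set(0.5 * Math.PI, 0, 0); // cylinders face the camera
    scratchObject3D.updateMatrix();
    mesh.setMatrixAt(i, scratchObject3D.matrix);
  }

  mesh.instanceMatrix.needsUpdate = true;
}
```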
Warning! If you’re re-sorting your data array, this code will have unexpected behavior. The mesh instances are always in the same order and we’re calling mesh.setMatrixAt(index, matrix). It’s possible the index in the mesh array is different than the current index in the data array if you’ve re-sorted, so make sure you keep track of it if you’re re-sorting. On events, the mesh index is called instanceId, so it can be handy to store that on the data objects to use here.
Anyway, back to animating. Now, we can call this helper function in our onFrame callback to get our mesh to update as the spring animation is happening.
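In InstancedPoints, that's just:

```js
// InstancedPoints.js: re-run the matrix update on every tick of the spring
useAnimatedLayout({
  data,
  layout,
  onFrame: () => {
    updateInstancedMeshMatrices({ mesh: meshRef.current, data });
  },
});
```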
And with that, we’ve got animation! For me when I pop the CodeSandbox into its own window I get smooth animation with 100,000 points on my machine. Give it a shot and see how it works for you.
Using the GPU for better performance
After a certain threshold (perhaps over 100,000 items depending on your machine), animating always becomes a bit tricky since we need to turn to using the GPU instead of the CPU in order to keep our animations smooth. Using the GPU means using shaders, and this post is long enough as it is without talking about them, but for those that want to learn more, check out these resources:
- Three Buffer Animation System (BAS) — An extension for Three that makes it relatively easy to animate via the vertex shader. During hack week I wrote my own custom material based on MeshStandardMaterial to accomplish this, but only because I had never heard of Three BAS until Paul Henschel tweeted about it!
- An older post of mine about doing basic point animation in a shader with regl.
So we’ve got animation now, how about some interactivity?
Step 6 — Adding Interactivity: Selecting a Point
One of the great parts about using InstancedMesh is that it allows mouse events to work as we expect with very little effort on our part.
Setting up selectedPoint State
We'll get started by putting in some scaffolding to show the selected point in other parts of the app besides the ThreePointVis component. This is handy for having panels that show details about the data you're selecting. We'll just do a very basic message that shows the selected ID, but the same idea applies for more sophisticated applications.
Back in good ol’ App.js, let’s add some state for the selected point and display its ID if we have one.
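Something along these lines (the markup around it is a placeholder):

```jsx
// App.js: track the selected point and show a little message about it
const [selectedPoint, setSelectedPoint] = useState(null);

return (
  <div className="App">
    <ThreePointVis
      data={data}
      layout={layout}
      selectedPoint={selectedPoint}
      onSelectPoint={setSelectedPoint}
    />
    <div className="controls">
      {/* layout toggle buttons from earlier go here too */}
      {selectedPoint && <div>You selected point with id = {selectedPoint.id}</div>}
    </div>
  </div>
);
```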
Nothing too exciting here, we create some state, display it, and pass it as a prop to ThreePointVis. Could’ve brought a little pizazz to this code with some nullish coalescing but it seems it’s a bit too soon still. Soon though, soon.
Ok so, how about we make use of those fancy new props in ThreePointVis? It turns out to be just like normal React event handling. 3D? More like 3Z! (threeasy. I’m sorry.)
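After threading selectedPoint and onSelectPoint down to InstancedPoints, a basic click handler looks something like this (the drag check comes in a moment):

```jsx
// InstancedPoints.js: toggle selection when an instance is clicked
const handleClick = (evt) => {
  const { instanceId } = evt;
  const point = data[instanceId]; // works as long as we never re-sort data

  // clicking the already-selected point deselects it
  onSelectPoint(point === selectedPoint ? null : point);
};

return (
  <instancedMesh
    ref={meshRef}
    args={[null, null, numPoints]}
    frustumCulled={false}
    onClick={handleClick}
  >
    {/* geometry + material as before */}
  </instancedMesh>
);
```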
As mentioned previously, we can recover the data item based on the instanceId provided in the event object. If we never re-sort the data, this corresponds directly to the index. If we do re-sort the data, we will have to do some more work to find the point based on the instanceId.
As shown above, we just provide an onClick prop to the <instancedMesh> then toggle the selected point as we normally would in React via our handleClick function.
With that, we have basic mouse interaction: we can click the cylinders and get a little message showing that an item was selected, but let’s take it a step further and color the selected item too.
Coloring the Selected Point
To get coloring for our selected point, we need to do a few things. First, we need to update our material to use vertexColors, which lets the instances within our InstancedMesh have different colors. Then we need to provide an InstancedBufferAttribute which specifies a color for each of the instances. Lastly, we need to change the instance colors based on which point is selected.
The first two steps can be mostly accomplished through JSX:
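Here's roughly what that JSX looks like. colorAttrib and colorArray come from a hook we'll write next, and depending on your versions of Three and react-three-fiber, vertexColors may just be a boolean and the attach syntax may differ:

```jsx
// InstancedPoints.js: per-instance colors via an InstancedBufferAttribute
<instancedMesh
  ref={meshRef}
  args={[null, null, numPoints]}
  frustumCulled={false}
  onClick={handleClick}
>
  <cylinderBufferGeometry attach="geometry" args={[0.5, 0.5, 0.15, 32]}>
    {/* 3 values (RGB) per instance, attached as attributes.color on the geometry */}
    <instancedBufferAttribute
      ref={colorAttrib}
      attachObject={["attributes", "color"]}
      args={[colorArray, 3]}
    />
  </cylinderBufferGeometry>
  {/* vertexColors tells the material to read our color attribute */}
  <meshStandardMaterial attach="material" vertexColors={THREE.VertexColors} />
</instancedMesh>
```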
Here we’ve added an <instancedBufferAttribute> child of <cylinderBufferGeometry> that specifies our color attribute. Our colors will be RGB values (3 values per instance), so that’s why the number 3 appears in a few locations.
We’ve also added a prop vertexColors to <meshStandardMaterial> which informs it to look for a color attribute when rendering the meshes. Note that in our <instancedBufferAttribute> we attach it as attributes.color to the geometry, which is how the two get connected.
Now, the code above doesn’t provide any actual colors (colorArray doesn’t have reasonable values), so you wouldn’t be able to see anything if you used it as is. To handle the actual coloring, let’s create a usePointColors hook that will set the colors based on data and the selected point.
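A sketch of the hook (the selected color is just a placeholder):

```js
// InstancedPoints.js (or its own file): fill a Float32Array with an RGB color per instance
const scratchColor = new THREE.Color();

const DEFAULT_COLOR = "#fff";
const SELECTED_COLOR = "#6f6"; // placeholder highlight color

function usePointColors({ data, selectedPoint }) {
  const numPoints = data.length;
  const colorAttrib = useRef();
  const colorArray = useMemo(() => new Float32Array(numPoints * 3), [numPoints]);

  useEffect(() => {
    for (let i = 0; i < numPoints; ++i) {
      scratchColor.set(data[i] === selectedPoint ? SELECTED_COLOR : DEFAULT_COLOR);
      scratchColor.toArray(colorArray, i * 3);
    }

    // tell Three the attribute contents changed
    if (colorAttrib.current) {
      colorAttrib.current.needsUpdate = true;
    }
  }, [data, selectedPoint, colorArray, numPoints]);

  return { colorAttrib, colorArray };
}
```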
So what’s happening here?
- It creates colorArray, an array to store all the colors for each instance. We fill it with values by using scratchColor.toArray().
- It creates a ref, colorAttrib, which will hold a reference to the InstancedBufferAttribute.
- When the selected point or data changes, we recompute the color for all points and notify Three that the color attribute needs to be re-interpreted. This isn’t the most efficient way, but it works well enough.
We then use the results of this hook as the props for the <instancedBufferAttribute> in InstancedPoints
And with that, we’ve got coloring!
Niceeeee, but if you try clicking and dragging around, you’ll see that it keeps changing the selection when you release the mouse. Dang, just when I thought I was out, they pull me back in.
Preventing Selection on Drag
To prevent selecting new items when dragging across the vis, we just need to check if the mouse moved too far between mouse down and mouse up. A threshold of 5 pixels works reasonably well, so that’s what we’ll use.
To do this, we’ll need to add a handler for onPointerDown (react-three-fiber uses pointer events and you may need to polyfill them), and modify our onClick handler to check if the mouse moved too far to be a click. We’ll combine the logic for these two into a custom hook called useMousePointInteraction that will return the two handlers.
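A sketch of the hook:

```js
// useMousePointInteraction: ignore "clicks" that were really drags
function useMousePointInteraction({ data, selectedPoint, onSelectPoint }) {
  // track where the pointer went down so we can measure how far it moved
  const mouseDownRef = useRef([0, 0]);

  const handlePointerDown = (evt) => {
    mouseDownRef.current[0] = evt.clientX;
    mouseDownRef.current[1] = evt.clientY;
  };

  const handleClick = (evt) => {
    const { instanceId, clientX, clientY } = evt;
    const downDistance = Math.sqrt(
      Math.pow(mouseDownRef.current[0] - clientX, 2) +
        Math.pow(mouseDownRef.current[1] - clientY, 2)
    );

    // if the mouse moved more than ~5px between down and up, treat it as a drag
    if (downDistance > 5) {
      evt.stopPropagation();
      return;
    }

    // otherwise toggle selection as before
    const point = data[instanceId];
    onSelectPoint(point === selectedPoint ? null : point);
  };

  return { handlePointerDown, handleClick };
}
```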
We store the last seen mouse down position in a ref in our pointer down handler and then compare the mouse click position against it to see if we dragged or not. Now we just need to use these handlers in our InstancedPoints component and we’ll be all set.
Ahhhhhh the glorious feeling of mouse behavior matching expectations.
Step 7 — Bloom: Adding a Glow Effect
Ok, we’ve got interaction and layout animation but we’re living in the wonderful world of 3D, let’s get some sweet effects in there. The first thing that comes to mind when I hear “bad ass 3D effects” is making things glow, otherwise known as “bloom”. Let’s see how we turn on a bloom effect and then spruce things up with some additional lights.
Note: Adding in these effects may slow down animation performance.
Adding Effects via <Effects />
Turns out react-three-fiber has a bunch of amazing examples of how to add effects to your 3D work and the library makes it really easy. We’ll create an Effects component that we just add as a child to our <Canvas> and that’ll be that. Really. We’ll add two effects: UnrealBloomPass for glow and FXAA for antialiasing.
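Here's a sketch of the Effects component, closely following the react-three-fiber postprocessing examples (the bloom strength/radius/threshold values are just a starting point):

```jsx
// Effects.js: render the scene, add bloom, then antialias with FXAA
import React, { useRef, useEffect } from "react";
import { extend, useThree, useFrame } from "react-three-fiber";
import { EffectComposer } from "three/examples/jsm/postprocessing/EffectComposer";
import { RenderPass } from "three/examples/jsm/postprocessing/RenderPass";
import { UnrealBloomPass } from "three/examples/jsm/postprocessing/UnrealBloomPass";
import { ShaderPass } from "three/examples/jsm/postprocessing/ShaderPass";
import { FXAAShader } from "three/examples/jsm/shaders/FXAAShader";

extend({ EffectComposer, RenderPass, UnrealBloomPass, ShaderPass });

const Effects = () => {
  const composer = useRef();
  const { scene, gl, size, camera } = useThree();

  // keep the composer sized to the canvas
  useEffect(() => void composer.current.setSize(size.width, size.height), [size]);

  // take over rendering: render through the composer instead of the default loop
  useFrame(() => composer.current.render(), 1);

  return (
    <effectComposer ref={composer} args={[gl]}>
      <renderPass attachArray="passes" args={[scene, camera]} />
      {/* UnrealBloomPass args: resolution, strength, radius, threshold */}
      <unrealBloomPass attachArray="passes" args={[undefined, 1.5, 1, 0]} />
      <shaderPass
        attachArray="passes"
        args={[FXAAShader]}
        material-uniforms-resolution-value={[1 / size.width, 1 / size.height]}
        renderToScreen
      />
    </effectComposer>
  );
};

export default Effects;
```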
There's a bunch of stuff going on in here that, happily, you don't really need to understand to get it working. Or at least I didn't need to know much about it anyway. The general idea is that post-processing effects work by doing multiple passes through the scene and adjusting it each time. Here we do three passes:
- First we render the actual scene with a <renderPass />
- Then we add the glow by inserting an <unrealBloomPass />. You can play with the bloom arguments to adjust how intense the glow is.
- Lastly, we use a <shaderPass /> with an FXAA shader that antialiases the scene and, importantly, renders it to the screen (note the renderToScreen prop)
For the most part, the only thing I play with here is the bloom arguments. You can insert other effects in the composer too, check out the Three docs for more ideas.
Now we just need to add the Effects component as a child of ThreePointVis and we’ll be good to go.
So what does it look like at this point?
Dang, that's some seeeeerious glow. Some may argue too much, some not enough. But for our purposes, I'd like to tone it down. The problem here is that all of the cylinders are #FFF white, the brightest possible color, and bloom works by making anything over a certain brightness level glow. For the most part I only want the selected items to glow, so my solution is to reduce the default unselected color from #FFF to something darker like #888.
Note: there is such a thing as “selective bloom” that may work here as well, but I’ve never used it.
To adjust the colors of the non-selected points, we just update the DEFAULT_COLOR const in InstancedPoints.js.
const DEFAULT_COLOR = "#888";
Ok so that tones things way down, and we’ve got some mild glow going on. We could play around with the bloom settings to make the glow more intense on the selected item, or we could choose a brighter color for it, but instead I’d like to add in some lights.
Light It Up
First, let’s try adding a light directly above the selected point and see how that looks. All we have to do is modify our InstancedPoints component to add a <pointLight> when we have a selectedPoint.
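Inside the return of InstancedPoints, that looks something like this (the light settings are just values to start playing with):

```jsx
// InstancedPoints.js: light up the selected point from just above it
{selectedPoint && (
  <group position={[selectedPoint.x, selectedPoint.y, selectedPoint.z]}>
    <pointLight position={[0, 0, 0.3]} distance={3} intensity={2} decay={2} />
  </group>
)}
```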
We take the position of the selectedPoint and add 0.3 to the z position to get the light slightly above it. Note we use a <group> since we know we’ll be adding another light momentarily. And with that, we get some extra glowiness to our selected point:
Ooo not bad, not bad. But we can go further! And go further we shall. Let’s add a light around the point that lights up those adjacent to it. To do this, we add another <pointLight> with a bigger radius and place it right in the middle of the object.
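The group now contains both lights (again, the settings here are placeholders to tweak):

```jsx
<group position={[selectedPoint.x, selectedPoint.y, selectedPoint.z]}>
  <pointLight position={[0, 0, 0.3]} distance={3} intensity={2} decay={2} />
  {/* a wider light centered on the point to illuminate its neighbors */}
  <pointLight position={[0, 0, 0]} distance={6} intensity={1.5} decay={2} />
</group>
```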
Again, I just played around until I found some settings I liked. Here’s the result:
Now we’re talking! But if you try switching layouts, you’ll see the lights don’t animate with the point. Oops. Nothing free in this world. Since our point animation is kind of happening behind the scenes, we’ll have to do some trickery to get the lights to update with the point.
First, we update useAnimatedLayout to return the spring animation props.
Next, we need to make our light group animate along with the points.
To do this, we switch from using <group> to <a.group> with the a from react-spring. Then we just have it re-read the current (x, y, z) position of the selected point as the animation runs since we know our spring will be recomputing it each tick.
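In sketch form (animProps is what useAnimatedLayout now returns):

```jsx
// InstancedPoints.js: make the selection lights follow the animating point
import { a } from "react-spring/three";

// ... inside the return:
{selectedPoint && (
  <a.group
    position={animProps.animationProgress.interpolate(() => [
      selectedPoint.x,
      selectedPoint.y,
      selectedPoint.z,
    ])}
  >
    <pointLight position={[0, 0, 0.3]} distance={3} intensity={2} decay={2} />
    <pointLight position={[0, 0, 0]} distance={6} intensity={1.5} decay={2} />
  </a.group>
)}
```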
That’s it! Here’s the demo:
One final touch and we’ll be done.
Step 8 — Finishing Touch: Resetting the Camera
This is kind of a bonus step, but I find it really helps make the UI less painful to use. When navigating around with the TrackballControls or OrbitControls, it can get annoying to get back to a base position. Sometimes you just end up in some wonky rotation and can't get out, so I like to add a button that resets the camera position. There's probably a better way to do this than the way I'm going to show you, but I didn't know any better, so here it is. We're going to put a button outside of our ThreePointVis component that changes the camera position via refs.
First, let’s add the button to the UI via App.
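Roughly:

```jsx
// App.js: a ref to the vis plus a button that asks it to reset the camera
const visRef = useRef();

const handleResetCamera = () => {
  if (visRef.current) {
    visRef.current.resetCamera();
  }
};

return (
  <div className="App">
    <ThreePointVis ref={visRef} data={data} layout={layout} />
    <div className="controls">
      <button onClick={handleResetCamera}>Reset Camera</button>
    </div>
  </div>
);
```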
We add a ref (visRef) to our ThreePointVis component, a button, and a click handler that tells visRef to reset the camera.
Now we just have to create this resetCamera handler within ThreePointVis. My solution here was to use useImperativeHandle and forwardRef, which pretty much always feels bad, but heyo it works in this case.
Let’s get those refs going down the line. Here we go!
First we update ThreePointVis to have a resetCamera function that it passes through to the Controls component via controlsRef.
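A sketch of that:

```jsx
// ThreePointVis.js: expose resetCamera() to the parent via useImperativeHandle
import React, { useRef, useImperativeHandle, forwardRef } from "react";

const ThreePointVis = forwardRef((props, ref) => {
  const { data, layout, selectedPoint, onSelectPoint } = props;
  const controlsRef = useRef();

  // forward resetCamera() calls down to the Controls component
  useImperativeHandle(ref, () => ({
    resetCamera: () => {
      if (controlsRef.current) {
        controlsRef.current.resetCamera();
      }
    },
  }));

  return (
    <Canvas camera={{ position: [0, 0, 80] }}>
      <Controls ref={controlsRef} />
      <Effects />
      {/* lights + InstancedPoints as before */}
    </Canvas>
  );
});
```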
We do similar things in the Controls component, except this is where we have access to the camera and controls objects to do the actual resetting (see the sketch after this list). A few important things to note:
- We set the target of controls back to the origin (0, 0, 0). This is where the camera will orbit around and is the center of the screen.
- We reset the camera position back to our original (0, 0, 80) coordinates. We could use a prop for this, but this is good enough for our purposes.
- Since we’re using TrackballControls, we need to also reset the up vector of the camera. Conveniently, this is stored as up0 on controls, so we copy it over.
- Lastly, we indirectly call controls.update via our useFrame hook which updates the view.
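Here's a sketch of the updated Controls component:

```jsx
// Controls.js: expose a resetCamera() that puts the camera back to its start
import React, { useRef, useImperativeHandle, forwardRef } from "react";
import { extend, useThree, useFrame } from "react-three-fiber";
import { TrackballControls } from "three/examples/jsm/controls/TrackballControls";

extend({ TrackballControls });

const Controls = forwardRef((props, ref) => {
  const controlsRef = useRef();
  const { camera, gl } = useThree();

  useImperativeHandle(ref, () => ({
    resetCamera: () => {
      const controls = controlsRef.current;
      if (!controls) return;

      // orbit around the origin again
      controls.target.set(0, 0, 0);

      // put the camera back where it started
      camera.position.set(0, 0, 80);

      // TrackballControls also rotates the camera's up vector, so restore it
      // from the initial value the controls stored as up0
      camera.up.copy(controls.up0);

      // controls.update() runs on the next frame via useFrame below
    },
  }));

  // update the controls (and thus the camera) every frame
  useFrame(() => controlsRef.current && controlsRef.current.update());

  return (
    <trackballControls
      ref={controlsRef}
      args={[camera, gl.domElement]}
      dynamicDampingFactor={0.1}
    />
  );
});

export default Controls;
```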
In my hack-week project I used animation to tween back to the original position, but I’ll leave that as an exercise for the reader. This was the last step in our tour through the world of 3D. Hallelujah.
Please Say This is The End
Holy smokes, we did it!! Thanks for reading, and congratulations for making it this far. If you’ve got any questions or comments about this post please feel free to reach out to me on Twitter @pbesh any time.
Best of luck in the wonderful world of 3D!