LiDAR in WebGL: Hello, World

Brett Renfer · Published in Bluecadet · 6 min read · Feb 5, 2021

We’ve seen a lot of awesome stuff with the new(ish) iPhone 12 Pro’s LiDAR scanner, like this great post at Sketchfab. Naturally, we had to give the process a try ourselves — with an eye towards using this data in creative tech applications.

TLDR: use an off-the-shelf app to get started faster, and familiarize yourself with MeshLab if you’d like to render these very detailed captures in code.

WebGL + LiDAR scan + noise, noise, noise

Dipping Toes: Sketching in Unity

We do a lot of our AR prototyping in Unity, so that’s where I started. It gives you a lot of features for free, including making meshes of real-world objects and allowing those meshes to occlude objects.

First prototypes with LiDAR meshing in Unity

There’s more to be shared there… stay tuned.

A big downside of Unity’s implementation is that you don’t get access to the raw LiDAR data, which includes points, distances, and colors. To play more with capturing the real world, you need to either a) roll your own native point cloud code or b) use an off-the-shelf app.

SiteScape’s point-cloud-based capture

I chose the latter, following Sketchfab’s lead and downloading 3D Scanner App and SiteScape.

Off-the-Shelf Apps: Capturing & Cleanup

These apps are great, mostly! Most importantly for me, SiteScape natively exports point clouds as PLY files, which are supported by modeling software as well as Three.js, our WebGL framework of choice. 3D Scanner is more interested in making 3D meshes; cool, but not my ideal case here.

Anyone who has experimented with depth camera data knows: there’s pretty much always some processing you need to do before showing off your shiny model in code. SiteScape is mostly meant for architectural use so it a) captures a ton of points and b) keeps the origin of the model based on where you started scanning. I needed to edit the density and position of my point cloud to make it workable on the web.

Our go-to quick modeler/exporter/cleaner-up-er is usually Blender. It remains great, but sadly does not support PLY point clouds. It can view ‘em, but cannot export them. On to another tool!

MeshLab is another open-source tool, focused specifically on simplifying 3D scans for other uses (e.g. 3D printing).

Perhaps you’ll notice my mesh has almost 2 million vertices…!

I followed three steps here:

1. Reset origin

  • Using the Transform tool (found in Filters → Normals, Curvature and Orientation), I reset the origin to “Center on Scene Box”
  • I checked my work by turning on the axes via the lil’ button at the top of the screen. Do that, so you don’t lose your mind.

Sweet, those axes are centered.

2. Translate

  • Using that same tool, I moved my y-axis down to place the origin — and our future user — on the ground
  • Make sure you select “Preview” before you apply!

3. Reduce

  • I used the catchy “Simplification: Clustering Decimation” tool under Filters → Remeshing, Simplification, and Reconstruction to simplify my mesh

My settings. Copy if you dare.
  • I played with a few percentage-based settings here — settling on ~2%.
  • Note: many guides will recommend using Quadric Edge Collapse Decimation to simplify. That is slightly easier, but we have no faces to collapse since we’re a point cloud. :(
Before… 1.7m vertices
After, 180k vertices! It looks very sparse, but I assure you it still works.
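If you want to double-check a decimation pass without reopening MeshLab, the vertex count sits right in the PLY header. A quick sanity check in plain Node (plyVertexCount is a hypothetical helper, not a library function):

```javascript
// Pull the vertex count out of a PLY header's "element vertex N" line.
function plyVertexCount(headerText) {
  var match = headerText.match(/element\s+vertex\s+(\d+)/);
  return match ? parseInt(match[1], 10) : -1;
}

// A minimal PLY header like the ones SiteScape/MeshLab write:
var header = [
  'ply',
  'format binary_little_endian 1.0',
  'element vertex 180000',
  'property float x',
  'end_header'
].join('\n');

plyVertexCount(header); // → 180000
```

Handy for confirming that the “after” file really shrank before you ship it to the browser.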

Bringing it into Code: Point Clouds in Three.js

Full disclosure: this mesh could and should be simplified further! But I was keen to try it out in its (near) full glory.

Loading the mesh

Loading and displaying a PLY file in Three is fairly simple, once you’ve set up your file and added the PLYLoader external file.

var mesh, material, geo; // these become important later

// setup your loader + load your PLY
var loader = new THREE.PLYLoader();
loader.load( '../3d/arches.ply', function ( geometry ) {
  geo = geometry;
  material = new THREE.PointsMaterial( {
    vertexColors: THREE.VertexColors,
    size: 0.05,
    sizeAttenuation: true
  } );
  mesh = new THREE.Points( geo, material );
  // my meshes were still oriented wrong for THREE
  // and needed to be scaled; you may not need these
  mesh.rotation.z = - Math.PI / 2;
  mesh.scale.multiplyScalar( 0.5 );
  scene.add( mesh );
} );

The options in the material proved crucial:

  • vertexColors : load color data from your vertices
  • size: rendered size of points; I played with this a lot for effect… it’s very small because my mesh is in meters
  • sizeAttenuation: change point size based on distance from camera
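Incidentally, the MeshLab recentering step could also be done in code once the geometry is loaded. Centering a flat [x, y, z, x, y, z, …] position buffer on its bounding box is plain array math; here’s a sketch (centerOnBox is a hypothetical helper, not a Three.js API):

```javascript
// Recenter a flat position buffer around its bounding-box center,
// similar in spirit to MeshLab's "Center on Scene Box".
function centerOnBox(positions) {
  var min = [Infinity, Infinity, Infinity];
  var max = [-Infinity, -Infinity, -Infinity];
  // first pass: find the bounding box
  for (var i = 0; i < positions.length; i += 3) {
    for (var a = 0; a < 3; a++) {
      min[a] = Math.min(min[a], positions[i + a]);
      max[a] = Math.max(max[a], positions[i + a]);
    }
  }
  // second pass: shift every point by the box center
  for (var j = 0; j < positions.length; j += 3) {
    for (var b = 0; b < 3; b++) {
      positions[j + b] -= (min[b] + max[b]) / 2;
    }
  }
  return positions;
}

var pts = new Float32Array([0, 0, 0, 4, 2, 6]); // two points
centerOnBox(pts); // → [-2, -1, -3, 2, 1, 3]
```

Doing it in MeshLab keeps the exported file clean, but this is useful when you can’t control the incoming scan.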

Making it Move with Some Noise

I used the very simple SimplexNoise.js to create a “chaos” state for my points that I could tween to.

1. Setup Points: this code lives in my “loaded” function

... // cached original points
pointsGood = geo.attributes.position.array;
// noise points: array based on # of points in mesh
pointsNoise = new Float32Array(pointsGood.length);
// points rendered at any given frame
pointsLive = new Float32Array(pointsGood.length);

var index = 0;
// step through the buffer one point (3 floats) at a time
for (var i = 0; i < pointsLive.length; i += 3){
  var x = pointsGood[index];
  var y = pointsGood[index+1];
  var z = pointsGood[index+2];
  // my use of x,y,z here is arbitrary!
  pointsNoise[index++] = this.simplex.noise3D(x, y, z); // x
  pointsNoise[index++] = this.simplex.noise3D(z, x, y); // y
  pointsNoise[index++] = this.simplex.noise3D(y, z, x); // z
}

2. Tween between the two, once we’re loaded. In your animate() function…

// keep a running increment from 0 - 1
// my "pos" var started at 1 in setup()
pos = pos * 0.9 + 0.1;

// loop through and set "live" points
var index = 0;
for (var i = 0; i < pointsLive.length; i += 3){
  var x = pointsGood[index];
  var y = pointsGood[index+1];
  var z = pointsGood[index+2];
  // lerp between "noise" and "good" points
  pointsLive[index] = THREE.Math.lerp( pointsNoise[index], x, pos ); index++;
  pointsLive[index] = THREE.Math.lerp( pointsNoise[index], y, pos ); index++;
  pointsLive[index] = THREE.Math.lerp( pointsNoise[index], z, pos ); index++;
}

// tell Three your points are updated
geo.addAttribute( "position", new THREE.BufferAttribute( pointsLive, 3 ) );
geo.attributes.position.needsUpdate = true;
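If the buffer bookkeeping makes the tween hard to see, here’s the same idea in isolation: plain JS, no Three.js, with hypothetical helper names (lerp, blendPoints):

```javascript
// Linear interpolation: t = 0 gives a, t = 1 gives b.
function lerp(a, b, t) { return a + (b - a) * t; }

// Blend every element of two equal-length buffers into "live".
// pos = 0 is pure noise; pos = 1 is the original scan.
function blendPoints(good, noise, live, pos) {
  for (var i = 0; i < live.length; i++) {
    live[i] = lerp(noise[i], good[i], pos);
  }
}

var good  = new Float32Array([1, 2, 3]);
var noise = new Float32Array([0, 0, 0]);
var live  = new Float32Array(3);
blendPoints(good, noise, live, 0.5);
// live is now [0.5, 1, 1.5], halfway between the two states
```

Same math, without the interleaved x/y/z index juggling.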

And voilà! A fast-ish demo of a big LiDAR mesh in Three.js.

Check it out here. (Yes, it still takes a long time to load!) And if you’d like, look at my sketchy code here.

What’s Next?

Now that we have our pipeline down, there’s a lot left to try:

  • New types of scans: I’d love to scan some smaller objects, and play with simplification settings to find the balance between “good” and “portable”.
  • More efficient code: The above is a prototype, so it’s lazy. Moving manipulation of the points into a shader would make it dramatically faster, and open up possibilities of tweening between different scans, having the points drift every frame, etc.
  • Rolling our own scanner? SiteScape is awesome because it’s free and super full-featured. But, it smartly culls a lot of data for you, which led to some underwhelming scans when I tried to capture statues in Grand Army Plaza. Since we might be more focused on museum-style use cases, we’ll eventually need to try out our own, special toolset.
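The shader route mentioned above could look roughly like this: store the noise positions as a second attribute and let the GPU do the mix, driven by one uniform per frame instead of a 180k-iteration JS loop. This is an untested sketch, and the attribute/uniform names (noisePosition, pos, pointSize) are made up for illustration:

```javascript
// Vertex shader: per-point tween on the GPU.
// "position", modelViewMatrix, and projectionMatrix are supplied by Three.js;
// "noisePosition" would be a custom BufferAttribute holding the noise state.
var vertexShader = [
  'attribute vec3 noisePosition;',
  'uniform float pos;',        // 0 = chaos, 1 = original scan
  'uniform float pointSize;',
  'void main() {',
  '  vec3 p = mix(noisePosition, position, pos);',
  '  vec4 mvPosition = modelViewMatrix * vec4(p, 1.0);',
  '  gl_PointSize = pointSize / -mvPosition.z;', // rough size attenuation
  '  gl_Position = projectionMatrix * mvPosition;',
  '}'
].join('\n');

var fragmentShader = [
  'void main() {',
  '  gl_FragColor = vec4(1.0);', // flat white; vertex colors left out for brevity
  '}'
].join('\n');

// Usage (in Three.js):
// material = new THREE.ShaderMaterial({
//   uniforms: { pos: { value: 0 }, pointSize: { value: 10.0 } },
//   vertexShader: vertexShader,
//   fragmentShader: fragmentShader
// });
// ...then each frame, just update material.uniforms.pos.value.
```

The per-frame CPU work collapses to a single uniform update, and the Float32Array copies go away entirely.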

Until next time, ✌️
