An exploded assembly in the part viewer

Geometry Drives Everything: The origins of our 3D part viewer

Dana Wensberg
Paperless Parts Tech Blog
15 min read · Apr 22, 2024

--

In 2017, I had the privilege of building the first version of the Paperless Parts 3D part viewer as a summer intern. I actually suggested the idea for the project as well…but not for the reasons you may think.

My primary motivation for creating it was actually to have a debugging tool. I was working on feature detection algorithms, and I simply had no way to verify whether the faces I was identifying as a drilled hole or a sheet metal bend were actually correct. So, I figured I'd build a way to visualize it.

A sheet metal part in the Paperless Parts viewer with sheet metal bends highlighted in green

Our part viewer has since evolved into a core piece of our application, and I am so proud of what it has become. It was one of the most fun coding projects I’ve ever done, and I am excited to share how it all started.

Setting the stage

In the summer of 2017, I was an intern at Paperless Parts. I was a rising senior at Trinity College, where I was studying engineering and physics. I was subletting an apartment in Fenway for the summer.

We were an eight-person company at the time:

  • Jason (CEO), Scott (CTO, my boss), and myself in a Boston co-working space at CIC Boston
  • Jay (chairman), Matt (CMO), and Steve (biz-dev) in Nashua, NH
  • Two offshore engineers, Igor and Alex, in Minsk

When it comes to professional software development, I was as green as it gets. I had taken just one coding class on Java and Python, and had no experience with frontend code. I did not even know how to use git. It was honestly a miracle that my boss took a chance on me (thanks, Scott). So take it easy on me…some of the JS snippets in this article are pretty brutal.

But despite my lack of experience, Scott and Jason threw everything they had at me, and I’m grateful.

How it started

For my first set of projects, I was tasked with building algorithms to extract cost-driving features from 3D models. We offer a quoting tool with a low-code pricing language that allows users to reference geometric attributes in simple pricing formulas, such as part.volume, part.area, and part.size_z. A lot of formulas ended up having lines of code like:

mat_cost_per_volume = 10  # $ per unit volume
COST = part.volume * mat_cost_per_volume * part.quantity

Getting these high-level attributes was trivial…the mesh and CAD libraries we were using had built-in properties for these values. However, for more advanced aspects of cost estimation, such as estimating the time to machine a drilled hole or bend a sheet metal part, the formulas required feature-specific references from the geometry. Unfortunately, there was no built-in part.hole_count or part.bend_count. I needed to figure out how to determine that information algorithmically from the 3D model. We called these algorithms “interrogations”.

Holes in a machined part increase its cost to manufacture.

After racing through our additive (3D-printing) interrogation, I moved on to our laser cutting and machining interrogations, starting with circular hole detection. Having all holes detected would be useful for estimating cost and for building downstream algorithms across several manufacturing methods (machining, laser cutting, punching, injection molding, etc.).

Almost immediately, it was apparent that I did not have a good way to verify if the faces and edges I was identifying on the model as holes were correct. I was using a headless CAD software package, with no convenient way to visualize the algorithm outputs. Frankly, I was struggling to even explain the algorithms to my boss without a visualization of the results. I needed to figure something out to demonstrate progress.

For laser cut parts, the length of the contours you need to cut directly correlates to cost.

Enter three.js and pitching the viewer project

After some digging, I stumbled upon three.js, a JavaScript library used to create, display, and animate 3D objects in a web browser. I was immediately enamored. The three.js docs were awesome and packed with examples. It almost seemed impossible how easy it was to render scenes and shapes in the browser.

I eventually found an example of how to render an STL in a basic scene. After a few hours of teaching myself how to get a local JS application running, I got the example running on my machine. After another couple of hours, with a few edits, I was able to render one of my own STL files!

It was at that moment, at about midnight on a Monday night, alone in our co-working space, that I knew I had to run with this idea.

The next morning, I pulled Jason and Scott aside and showed them my demo. I explained how I needed to visualize the outputs of my interrogations, and how our users would need the same thing if we ever wanted to deploy them in production. So despite knowing almost nothing about CSS, JS, WebGL, React, git, or graphics rendering, I pitched the idea of building a 3D viewer using three.js. After Jason and Scott shot each other a quick smile, I got a very pragmatic response from Scott:

“Let’s see what you can come up with by the end of the week, and we will go from there.”

Breaking down the problem

There were two CAD file formats we worked with at the time, STL and STEP. STL files are easy to work with since they are just a flat list of triangles. STEP files require a separate pipeline to get into a format that three.js can interpret. So I decided to first focus on building out a complete demo for viewing STL files. If I could make it cool enough, I figured I would get the green light from Jason and Scott to keep going with STEP files.

Here were the requirements I laid out for myself:

  • Be able to load in an ASCII format (plain text) or binary format STL file and render it in the browser
  • Allow the user to navigate the scene by zooming in and out with the scroll wheel, rotating with the left mouse, and panning with the right mouse
  • Render a simple coordinate system (X-Y-Z) in the bottom left that would tell the user what orientation they were currently viewing
  • Allow the user to visualize the part on top of a build plate as if it were being 3D printed
  • Allow the user to adjust the orientation of the part relative to the build plate as you would when planning out a build

Let’s talk about how I built it.

Loading the STL geometry

The first step was loading the geometry from an STL file, ASCII or binary, into THREE.Geometry objects. To keep it simple, I placed the files in the viewer application’s directory and read the data directly from the file on mount. It was gross, but it was good enough.
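
The “read on mount” step looked something like the snippet below. This is a sketch rather than the original code, and 'part.stl' is a hypothetical filename; the key detail is requesting the response as an ArrayBuffer so the binary parser (shown below) can read raw bytes:

// Sketch: fetch the raw STL bytes on mount. 'part.stl' is a
// hypothetical file sitting next to the viewer app.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'part.stl', true);
xhr.responseType = 'arraybuffer'; // we want raw bytes, not text
xhr.onload = function () {
  if (xhr.status === 200) {
    parseBinaryStl(xhr.response); // parser shown below
  }
};
xhr.send();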

The built-in STLLoader example I started with did not support loading binary STLs, so I created my own pipeline.

I wrote a parsing script for binary and ASCII format STL files and loaded them directly into three.js objects. Here is what the binary parsing script looked like:

var mesh;
var part_color = new THREE.Color("#ED6C24");
var mesh_material = new THREE.MeshPhysicalMaterial({ color: part_color });

function parseBinaryStl(stl) {
  var geo = new THREE.Geometry();
  var dv = new DataView(stl, 80); // skip the unused 80-byte header
  var isLittleEndian = true;
  var triangles = dv.getUint32(0, isLittleEndian);

  var offset = 4;
  for (var i = 0; i < triangles; i++) {
    // Get the normal for this triangle
    var normal = new THREE.Vector3(
      dv.getFloat32(offset, isLittleEndian),
      dv.getFloat32(offset + 4, isLittleEndian),
      dv.getFloat32(offset + 8, isLittleEndian)
    );
    offset += 12;

    // Get all 3 vertices for this triangle
    for (var j = 0; j < 3; j++) {
      geo.vertices.push(
        new THREE.Vector3(
          dv.getFloat32(offset, isLittleEndian),
          dv.getFloat32(offset + 4, isLittleEndian),
          dv.getFloat32(offset + 8, isLittleEndian)
        )
      );
      offset += 12;
    }

    // There's also a Uint16 "attribute byte count" that we
    // don't need; it should always be zero.
    offset += 2;

    // Create a new face from the vertices and the normal
    geo.faces.push(new THREE.Face3(i * 3, i * 3 + 1, i * 3 + 2, normal));
  }
  geo.computeFaceNormals();

  mesh = new THREE.Mesh(geo, mesh_material);
  stl = null;
}

With an equivalent parseAsciiStl script, we had a pipeline to hydrate any STL into three.js objects. I combined this loading approach with the existing scene, lighting, and camera code I already had, and now I was rendering any STL in the browser!
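
In case you are curious how the two formats can be told apart before picking a parser, here is a sketch (not the original code): a well-formed binary STL is exactly 84 + 50 × triangle-count bytes long, which an ASCII file essentially never is by coincidence.

// Sketch: route a raw STL buffer to the right parser based on
// the binary format's fixed size structure.
function parseStl(buffer) {
  if (buffer.byteLength >= 84) {
    var dv = new DataView(buffer, 80);
    var expectedSize = 84 + 50 * dv.getUint32(0, true);
    if (buffer.byteLength === expectedSize) {
      return parseBinaryStl(buffer);
    }
  }
  // Otherwise decode the bytes as text and use the ASCII parser
  return parseAsciiStl(new TextDecoder().decode(buffer));
}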

I had this all working by late Tuesday night, probably around 11:30 pm. I was even able to catch the last Green Line train leaving Park St. station back to Fenway.

Navigating the scene

Now that I had models rendering in the browser, the next step was to let the user explore the model. I needed to figure out how to use the mouse and scroll wheel to move the model around in the scene. Luckily, I found an example of a trackball control that achieved exactly the effect I wanted.
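
I don't have the original integration handy, but the wiring looks roughly like this sketch, assuming the renderer and camera from the existing scene setup (the speed values are illustrative):

// Sketch: TrackballControls from the three.js examples gives you
// rotate (left mouse), zoom (scroll wheel), and pan (right mouse).
var controls = new THREE.TrackballControls(camera, renderer.domElement);
controls.rotateSpeed = 3.0;
controls.zoomSpeed = 1.2;
controls.panSpeed = 0.8;

function animate() {
  requestAnimationFrame(animate);
  controls.update(); // applies accumulated mouse input to the camera
  renderer.render(scene, camera);
}
animate();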

It took some work to pull it apart and integrate it into my code, but by early evening on Wednesday I had it working. I was on a roll, so I kept going and moved on to building the axes. I wanted to build a very basic version of what CAD tools like SolidWorks and Onshape offered:

Mouse navigation + axes experience from Onshape. This is what I was trying to re-create.

I couldn’t find any three.js examples for rendering separate axes, so I had to build it from scratch. It took a ton of tweaking and way longer than I wanted, but I ended up with the following code to build a crude set of X-Y-Z axes, rendered in a separate scene in the bottom left of the screen:

//create axis guide renderer, put it somewhere nice
ax_renderer = new THREE.WebGLRenderer();
ax_renderer.setSize(window.innerWidth / 5, window.innerWidth / 5);
ax_renderer.setClearColor("#ffffff", 1);
ax_renderer.domElement.style.position = 'absolute';
ax_renderer.domElement.style.bottom = '20px';
ax_renderer.domElement.style.left = '20px';
document.body.appendChild(ax_renderer.domElement);

//scene
ax_scene = new THREE.Scene();

//camera (camera here is the main scene's camera)
ax_camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 10000);
ax_camera.up = camera.up;
ax_scene.add(ax_camera);

//light
ax_scene.add(new THREE.AmbientLight('#FFFFFF', 1));

//font loader for the axis labels
var loader = new THREE.FontLoader();

function drawAxes() {
  var l = 10;       // arrow length
  var origin = new THREE.Vector3(0, 0, -3);
  var hl = l / 2.5; // arrowhead length
  var hw = l / 3;   // arrowhead width
  var xarr = new THREE.ArrowHelper(new THREE.Vector3(1, 0, 0), origin, l, '#ff0000', hl, hw);
  var yarr = new THREE.ArrowHelper(new THREE.Vector3(0, 1, 0), origin, l, '#00ff00', hl, hw);
  var zarr = new THREE.ArrowHelper(new THREE.Vector3(0, 0, 1), origin, l, '#0000ff', hl, hw);
  ax_scene.add(xarr);
  ax_scene.add(yarr);
  ax_scene.add(zarr);

  //x label
  loader.load('helvetiker_bold.typeface.json', function (font) {
    var geometry = new THREE.TextGeometry('X', {
      font: font,
      size: 2,
      height: 0.3,
      bevelEnabled: false
    });
    var mat = new THREE.MeshPhysicalMaterial({ color: '#ff0000' });
    var m = new THREE.Mesh(geometry, mat);
    m.position.addVectors(origin, new THREE.Vector3(l + hl - 3, 0, 1));
    m.rotateX(-Math.PI / 2);
    ax_scene.add(m);
  });

  //y label
  loader.load('helvetiker_bold.typeface.json', function (font) {
    var geometry = new THREE.TextGeometry('Y', {
      font: font,
      size: 2,
      height: 0.3,
      bevelEnabled: false
    });
    var mat = new THREE.MeshPhysicalMaterial({ color: '#00ff00' });
    var m = new THREE.Mesh(geometry, mat);
    m.position.addVectors(origin, new THREE.Vector3(0, l + hl - 3, -1));
    m.rotateX(Math.PI / 2);
    m.rotateY(Math.PI / 2);
    ax_scene.add(m);
  });

  //z label
  loader.load('helvetiker_bold.typeface.json', function (font) {
    var geometry = new THREE.TextGeometry('Z', {
      font: font,
      size: 2,
      height: 0.3,
      bevelEnabled: false
    });
    var mat = new THREE.MeshPhysicalMaterial({ color: '#0000ff' });
    var m = new THREE.Mesh(geometry, mat);
    m.position.addVectors(origin, new THREE.Vector3(0, 0, l + hl - 1));
    m.rotateX(-Math.PI / 2);
    m.rotateY(Math.PI / 4);
    ax_scene.add(m);
  });
}

//axes for little renderer
drawAxes();
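
One detail not shown above: for the little axes to be useful, their camera has to mirror the main camera's orientation every frame. The sync step in the render loop looked roughly like this sketch, assuming the trackball controls' target is the orbit point:

// Sketch: keep the axes camera looking from the same direction as
// the main camera, but at a fixed distance so the triad never scales.
function renderAxes() {
  ax_camera.position.copy(camera.position);
  ax_camera.position.sub(controls.target); // direction from the orbit point
  ax_camera.position.setLength(30);        // illustrative fixed distance
  ax_camera.lookAt(ax_scene.position);
  ax_renderer.render(ax_scene, ax_camera);
}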

When I finally got this working, I let out a loud “Let’s f***ing go!” I peeked at the clock, and it was 2:00 am. I was once again completely alone in the co-working space.

The last train home to Fenway was long gone. So it looked like I was sleeping in the office. I went up to the zen room on the 18th floor, grabbed a yoga mat, my sweatshirt for a pillow, and went to bed.

Pretty epic Wednesday.

The build plate and manipulating the model

Up next was adding features specific to additive manufacturing: visualizing the build plate and manipulating the orientation of the model.

It didn’t take me long to set up the grid:

var in2mm = 25.4;
var size = 24; //inches
var divisions = 24; //1 square inch divisions
if (units === "metric") { // units is set elsewhere in the app
  size = Math.floor(size * in2mm / 10) * 10; //round down to a whole number of cm
  divisions = size / 10; //1 square cm divisions
}
gr = new THREE.GridHelper(size, divisions);
gr.geometry.rotateX(Math.PI / 2); //GridHelper defaults to the X-Z plane; our up-axis is Z
scene.add(gr);

And then I set up some basic functions to flip, rotate, and snap the mesh to the grid:

function reorient(cup, dup) { // cup = current up-axis, dup = desired up-axis
  var r = Math.PI / 2;
  if (cup === "z") {
    if (dup === "y") {
      mesh.rotateX(r);
    } else if (dup === "x") {
      mesh.rotateY(-r);
    }
  } else if (cup === "y") {
    if (dup === "z") {
      mesh.rotateX(-r);
    } else if (dup === "x") {
      mesh.rotateZ(r);
    }
  } else {
    if (dup === "y") {
      mesh.rotateZ(-r);
    } else if (dup === "z") {
      mesh.rotateY(r);
    }
  }
  //then set cup to dup (current up to desired up)
}

function flip() {
  if (part_desired_up === "z") {
    mesh.rotateX(Math.PI);
  } else if (part_desired_up === "y") {
    mesh.rotateZ(Math.PI);
  } else if (part_desired_up === "x") {
    mesh.rotateY(Math.PI);
  }
  //then set desire to flip to false
}

function snapMeshToGrid() {
  // bb is the mesh's bounding box and center is the grid center,
  // both computed elsewhere
  if (part_desired_up.localeCompare("z") === 0) {
    mesh.position.setZ(center.z - bb.min.z);
  } else if (part_desired_up.localeCompare("y") === 0) {
    mesh.position.setZ(center.z - bb.min.y);
  } else if (part_desired_up.localeCompare("x") === 0) {
    mesh.position.setZ(center.z - bb.min.x);
  }
}
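
The comments in those functions hint at bookkeeping that lived elsewhere. Roughly, the per-frame glue looked like the sketch below, where updateOrientation and part_current_up are hypothetical names:

// Sketch: apply any pending orientation change, then re-seat the
// part on the build plate.
function updateOrientation() {
  if (part_current_up !== part_desired_up) {
    reorient(part_current_up, part_desired_up);
    part_current_up = part_desired_up;
  }
  if (user_wants_flip) {
    flip();
    user_wants_flip = false;
  }
  mesh.geometry.computeBoundingBox();
  bb = mesh.geometry.boundingBox; // used by snapMeshToGrid
  snapMeshToGrid();
}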

I then found the transform controls example, which gave me a convenient way to toggle the original axes of the part as it was being manipulated. After wiring the transform controls in, I bound these actions to key events:

window.addEventListener('keydown', function (event) {
  switch (event.keyCode) {
    case 79: // O, show the orientation of the original file
      control.attach(mesh);
      break;
    case 88: // X, to switch orientation
      part_desired_up = "x";
      break;
    case 89: // Y, to switch orientation
      part_desired_up = "y";
      break;
    case 90: // Z, to switch orientation
      part_desired_up = "z";
      break;
    case 70: // F, to flip orientation
      user_wants_flip = true;
      break;
  }
});

window.addEventListener('keyup', function (event) {
  switch (event.keyCode) {
    case 79: // O released, hide the original orientation axes
      control.detach(mesh);
      break;
  }
});

I was able to pull all of this together pretty quickly; by this point I was getting comfortable with the codebase.

By early evening on Thursday, I called things done. I made a copy of all the files to another location (oh yeah…I wasn’t using git this whole time…didn’t have time to learn it), zipped it up, and uploaded it to Drive.

Demo day

Jason and Scott gathered around my desk at about 8:30 AM, and I showed everything off. This is what it looked like:

Pretty crude looking compared to professional viewer products, but I was so damn proud of it. Not bad for no frontend experience and 4 days.

Both Scott and Jason were fired up, and the path forward was obvious. I got the green light to continue working on the viewer to build out support for STEP files. They wanted me to demo it at the company meeting in two weeks.

The expanded scope for the demo we agreed on was:

  • Given a STEP file, render the part in the browser with the ability to display individual faces in specific colors (to show off detected features)
  • Allow the user to hover over specific faces to get information on them for debugging (like the index and type of the face)
  • Demonstrate how the user could toggle on/off highlighting specific features in a part

Backend pipeline

The hardest part of making the viewer work with STEP files was translating the file into a format that three.js could interpret. From messing around with the three.js examples, I figured out a few key things:

  1. All you needed to create a mesh was a triangle (tessellated) representation of the shape, which I could store as JSON
  2. The color of a mesh could be changed dynamically in response to user interaction
  3. You could use a THREE.Raycaster object to detect mouse interactions with meshes in the scene, and then update their appearance on hover (example)

This was the path I took:

  • For STL files, I ditched my original pipeline, and instead I used trimesh to write the triangles and connectivity map to a JSON file.
  • For STEP files, I used an open-source CAD tool to tessellate individual faces on the model, and then wrote those tessellations to JSON. Having tessellations for individual faces made it easy to update colors and work with the THREE.Raycaster.
  • I wrote these JSON files to the viewer application directory, and loaded them directly from the frontend code into three.js geometries

It took me about a week to get this pipeline working, and now I had STEP files rendering in the browser!
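
The frontend half of that pipeline looked roughly like the sketch below. The JSON schema here is hypothetical (the real one is long gone), but the important idea is one mesh per tessellated face, with the face index stored on the mesh's name so hover events can be mapped back to faces:

// Sketch: hydrate a per-face tessellation JSON into one THREE.Mesh
// per face, collected under a single group in the main scene.
// Assumed schema: { faces: [ { face_index, face_type,
//   vertices: [[x,y,z], ...], triangles: [[i,j,k], ...] } ] }
function loadPartJson(data) {
  var group = new THREE.Group();
  data.faces.forEach(function (face) {
    var geo = new THREE.Geometry();
    face.vertices.forEach(function (v) {
      geo.vertices.push(new THREE.Vector3(v[0], v[1], v[2]));
    });
    face.triangles.forEach(function (t) {
      geo.faces.push(new THREE.Face3(t[0], t[1], t[2]));
    });
    geo.computeFaceNormals();
    var mat = new THREE.MeshPhysicalMaterial({ color: '#ED6C24' });
    var m = new THREE.Mesh(geo, mat);
    m.name = String(face.face_index); // lets the raycaster map hits to faces
    group.add(m);
  });
  scene.add(group);
  return group;
}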

Interacting with the model

Up next was allowing the user to interact with faces on the model. As mentioned above, there was an awesome example using the THREE.Raycaster to update the color of a mesh when hovering. I deconstructed the example and was able to integrate similar functionality into my viewer in about half a day’s time.

On hover, I wanted to be able to render information about the face, such as its face type (cylinder, plane, cone, etc.) and index. This would be really useful for debugging. It took a long time to get working, primarily due to my weakness with JS, CSS, and HTML. Eventually I found a way to plumb information from the hovered entity up to a simple div element beneath the 3D scene.

var raycaster = new THREE.Raycaster();
var INTERSECTED;

// this was a class I created to manage STEP loading
var part = new STEP(path_to_json_file);

function updateHighlight() {
  // mouse is a normalized Vector2 tracked by a mousemove handler elsewhere
  raycaster.setFromCamera(mouse, camera);
  var intersects = raycaster.intersectObjects(part.group.children, true);
  if (intersects.length > 0) {
    // check that the object's name maps to one of the part's face objects
    if (!(intersects[0].object.name in part.objs)) {
      return;
    }
    if (INTERSECTED != intersects[0].object) {
      if (INTERSECTED) {
        part.objs[INTERSECTED.name].switch_highlight();
      }
      INTERSECTED = intersects[0].object;
      part.objs[INTERSECTED.name].switch_highlight();
      if (is_enabled.info_hover) {
        // highlight_text is a dom element
        highlight_text.innerHTML = construct_display_string(part.objs[INTERSECTED.name]);
      }
    }
  } else {
    if (INTERSECTED) {
      part.objs[INTERSECTED.name].switch_highlight();
    }
    INTERSECTED = null;
    if (is_enabled.info_hover) {
      highlight_text.innerHTML = scene_defaults.part_text;
    }
  }
}

With hover interaction done, all I had left to do was build a simple UI to demonstrate how the user could toggle detected features. I integrated the basic hole detection algorithm I was building at the start into the script that converted the STEP file faces to JSON. It called out certain faces with a darker blue color if they were identified as a hole. I then added a simple toggle to change the colors of these faces.
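
The toggle itself was simple. Here is a sketch of the idea, assuming a hypothetical is_hole flag on each face object coming from the interrogation output:

// Sketch: swap hole faces between the base color and a darker blue.
var base_color = new THREE.Color('#ED6C24');
var hole_color = new THREE.Color('#1F4E8C');

function setHoleHighlight(enabled) {
  part.group.children.forEach(function (m) {
    var face = part.objs[m.name];
    if (face && face.is_hole) {
      m.material.color.copy(enabled ? hole_color : base_color);
    }
  });
}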

And with a couple days to spare, I was ready for the demo.

Demo day 2

The Nashua and Boston folks sat down in a conference room, the Minsk fellas dialed in via Zoom, and I had the stage. Here is what the demo looked like:

The demo was a hit. The Nashua folks were blown away. I’ll never forget the look on Jason’s face when he exclaimed to the group:

So yeah…this is the epic shit we have been working on.

Here is a picture of us at Trapology Boston later that day for a team building exercise:

Closing out the summer

In late August, after enhancing the viewer and using it all summer to debug interrogations, I started the process of packaging up the viewer code to get it deployed to production with Scott’s help. We finally got it into version control and began retrofitting it to work within a React application. By mid September, it was in production.

Before I returned to Trinity for my senior year, I made one more contribution to the viewer. I wanted to leave an easter egg that could be activated by a special set of keystrokes. Selfishly, I also wanted to leave my mark. So when holding the keys SHIFT + D + A + W (my initials), the part would disappear, and “Geometry Drives Everything” would appear:

Geometry drives everything!

Note: This easter egg still exists in the viewer today!
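
For the curious, detecting a chord like that is straightforward. This isn't the original code, but a sketch: track which keys are held and check the set on each keydown (showEasterEgg is a hypothetical function):

// Sketch: 16 = SHIFT, 68 = D, 65 = A, 87 = W
var held = {};
window.addEventListener('keydown', function (event) {
  held[event.keyCode] = true;
  if (held[16] && held[68] && held[65] && held[87]) {
    showEasterEgg(); // hide the part, show "Geometry Drives Everything"
  }
});
window.addEventListener('keyup', function (event) {
  delete held[event.keyCode];
});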

Wow. What a summer.

The viewer today

The viewer today lives on as a fundamental piece of our product. We have invested thousands of hours of work, and many thousands of lines of code, to get it to where it is today. Other than some three.js calls, the viewer code is unrecognizable from my first version. Here is what it looks like currently:

The viewer today boasts dozens of features, including full assembly support, a collaboration tool, and a measurement tool.

Working on the viewer has been one of the biggest privileges of my career so far. I am very grateful to have had this opportunity and look forward to making it even better in the future.

Till next time. Stay gritty out there.
