Color My Mesh

An exploration of mesh coloring techniques with 6D.ai

Josh Welber
Patched Reality
Sep 12, 2019 · 6 min read


At Patched Reality, we have been working with the 6d.ai SDK for almost a year now, and have followed its rapid evolution closely. Although most of the work we’ve been doing with it has been for corporate R&D departments, we can’t help but think about how it might be applied to games. One fantasy we’ve had is to use 6D to scan a space, and then texture the resulting mesh based on rules in order to create a miniature terrain that blends with the real world objects. Imagine a kind of model railroad world in any room, with tiny forests in some areas, snow along the top of the bed, and so on. With that idea as a jumping off point, we began digging deeper. While we still haven’t achieved that fantasy vision, we thought we’d share our process and findings so far. Hopefully what follows is thought-provoking to other developers and will start some interesting conversations.

We had previously set up a pipeline to export mesh scans for use in our development environment (Unity) in order to accelerate development and to share and debug remotely. Without texture, however, it’s often hard to understand the mesh, and it’s just not as pretty (1). At the same time, it occurred to us that maybe we didn’t need a fully UV-mapped, textured scan; perhaps just coloring vertices or faces would do the job, and might also open up some fun on-device possibilities. So we set out to see if we could use 6D’s mesh and camera stream to sample colors, render them at runtime, and export colored meshes for later use.

Overall Flow

What we needed to do was pretty clear: 1) as the mesh is updated by 6D, sample colors from the appropriate screen point; 2) apply that color to the face or vertices of the mesh in some way; and 3) when exporting the mesh (for which we use a third-party plugin), save out, per vertex, either a color or a UV coordinate from a palette.
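In rough outline (a C# sketch; the class and method names are ours, not the 6D SDK’s):

```
using UnityEngine;

public class MeshColorizer : MonoBehaviour
{
    // Called whenever a tracked mesh block is updated.
    public void OnMeshBlockUpdated(MeshFilter block)
    {
        var mesh = block.mesh;
        var vertices = mesh.vertices;
        var colors = new Color32[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 worldPos = block.transform.TransformPoint(vertices[i]);
            colors[i] = SampleColorAt(worldPos);   // step 1: sample from the camera feed
        }

        mesh.colors32 = colors;                     // step 2: apply to the mesh
        // Step 3: hand vertices + colors to an OBJ exporter (we used a third-party plugin).
    }

    // Fleshed out in the next section; a placeholder here.
    protected virtual Color32 SampleColorAt(Vector3 worldPos) => new Color32(255, 255, 255, 255);
}
```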

Capturing Color

The first issues we encountered centered on the video texture 6D uses. In Unity, 6D assigns a native texture pointer to a RawImage UI component, and then renders it in a screen-space camera canvas (in order to layer properly behind virtual content). So the first experiment was simply to sample pixels from that texture asynchronously as needed: convert a given vertex or face center to viewport space, do some range checking, and call GetPixel. This led to problem #1: you can’t read the pixels of a native texture in Unity this way. Bummer. No problem; we didn’t want such a high-resolution sample space anyway (the video feed is 1080p), so we would keep our own texture, copy the entire feed into it periodically, and then downscale and sample out of that. Problem #2: although the Unity API call succeeds, the copy procedure doesn’t actually copy anything.
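The sampling itself is simple once you have a CPU-readable copy of the feed (getting that copy turned out to be the hard part, as described below). Roughly:

```
using UnityEngine;

public static class FeedSampler
{
    // Sample the color under a world-space point from a CPU-readable copy of the feed.
    // 'feedCopy' is our own downscaled Texture2D, not the native texture 6D hands to the UI.
    public static bool TrySample(Camera cam, Texture2D feedCopy, Vector3 worldPos, out Color32 color)
    {
        color = default;

        Vector3 vp = cam.WorldToViewportPoint(worldPos);

        // Range check: the point must be in front of the camera and inside the viewport.
        if (vp.z <= 0f || vp.x < 0f || vp.x > 1f || vp.y < 0f || vp.y > 1f)
            return false;

        int x = Mathf.Clamp((int)(vp.x * feedCopy.width), 0, feedCopy.width - 1);
        int y = Mathf.Clamp((int)(vp.y * feedCopy.height), 0, feedCopy.height - 1);

        color = feedCopy.GetPixel(x, y);
        return true;
    }
}
```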

This led us to a kind of wonky (yet charming) solution: re-render the video feed with another camera into a separate render texture of our own, then copy pixels out of that and sample. But that led to problem #3: we couldn’t render another camera after the 6D screen-space canvas (or at least, we couldn’t figure out how to do it without capturing the screen, which we didn’t want). This meant converting the 6D canvas to a world-space canvas, sizing it properly relative to the 6D back camera, setting up our own camera to render what that back camera sees into a render texture (at a smaller size), and copying the render texture out to a Texture2D. Success! We could now sample a vertex from this new texture whenever it was in view.
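In outline, the copy step looks something like this (sizes and names are ours; feedCamera is the extra camera described above, which renders only the world-space video canvas):

```
using UnityEngine;

public class FeedCopier : MonoBehaviour
{
    public Camera feedCamera;   // extra camera that only sees the world-space video canvas
    public int width = 480;
    public int height = 270;

    RenderTexture _rt;
    Texture2D _cpuCopy;

    void Start()
    {
        _rt = new RenderTexture(width, height, 0);
        feedCamera.targetTexture = _rt;
        _cpuCopy = new Texture2D(width, height, TextureFormat.RGB24, false);
    }

    // Call this ~10 times a second; ReadPixels is not free.
    public Texture2D CopyFeed()
    {
        var prev = RenderTexture.active;
        RenderTexture.active = _rt;
        _cpuCopy.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        _cpuCopy.Apply();
        RenderTexture.active = prev;
        return _cpuCopy;
    }
}
```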

Rendering and Storing Color

Now that we could capture data from a texture of our own, we had a number of new issues to solve: 1) vertex coloring by default interpolates color across each face (see video below); and 2) the mesh blocks change frequently, as whole blocks, so parts of a block are often updated while out of view of the camera, and when that happens their color data would be lost.

We solved the second problem first. Instead of capturing only on demand, we cache the captured data, keyed by a Vector3Int, effectively voxelizing the real-world space (we found that a 5 cm voxel produced the best results). With each voxel we store a Color32 and a quality number, where quality is based on the distance from the camera at which the voxel’s color was captured. In the new flow, every time a mesh block changes it triggers a version change in our coloring component (attached to the mesh block), which in turn triggers requests for samples. Each request keeps (and stores, if need be) the better of the current color sample and the value already in the voxel dictionary. The result was that our mesh always (or almost always) had colors for its verts, even when they were off screen. Since we update our sample space only 10 times a second, it was possible to miss something, but it usually fixed itself within a few moments.

A visualization of an idealized color cache. On device, the cache would also contain voxels at non-mesh points left over from older versions of the mesh, but this gives you the idea.
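A minimal sketch of that voxel cache (the quality formula here is illustrative; the real point is “keep the better of the two samples”):

```
using System.Collections.Generic;
using UnityEngine;

public class VoxelColorCache
{
    const float VoxelSize = 0.05f;   // 5 cm voxels worked best for us

    struct Entry
    {
        public Color32 Color;
        public float Quality;        // higher is better; based on capture distance
    }

    readonly Dictionary<Vector3Int, Entry> _cache = new Dictionary<Vector3Int, Entry>();

    static Vector3Int Key(Vector3 worldPos) => new Vector3Int(
        Mathf.FloorToInt(worldPos.x / VoxelSize),
        Mathf.FloorToInt(worldPos.y / VoxelSize),
        Mathf.FloorToInt(worldPos.z / VoxelSize));

    // Store a sample only if it beats what we already have for this voxel.
    public void Store(Vector3 worldPos, Color32 color, float distanceToCamera)
    {
        float quality = 1f / (1f + distanceToCamera);
        var key = Key(worldPos);
        if (!_cache.TryGetValue(key, out var existing) || quality > existing.Quality)
            _cache[key] = new Entry { Color = color, Quality = quality };
    }

    public bool TryGet(Vector3 worldPos, out Color32 color)
    {
        if (_cache.TryGetValue(Key(worldPos), out var entry))
        {
            color = entry.Color;
            return true;
        }
        color = default;
        return false;
    }
}
```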

Problem #1 proved more challenging. First we (rather naively) tried duplicating the verts so that every vertex was unique (i.e., none shared between triangles). While this produced solid-colored faces, the performance was unacceptable even for a skunkworks demo. We then came up with two alternate rendering modes that produced interesting and useful results: 1) use a shader that turns off interpolation and simply uses the color of the first vertex of each face, or 2) use a particle system, where each particle is positioned at a vertex and takes its color from that vertex.

Initial on-device video. Color is interpolated per face.
Getting unique face colors by “unique-ifying” the verts for each triangle. Very poor performance.
Using a single vertex color for each face, with no interpolation (also trying out billboarded particles).
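For the particle variant, the recipe is simply one particle per vertex, colored from that vertex. Something along these lines (a sketch; it assumes the particle system’s emission is disabled and its max particle count is high enough):

```
using UnityEngine;

[RequireComponent(typeof(ParticleSystem))]
public class VertexParticles : MonoBehaviour
{
    public float particleSize = 0.03f;

    // Rebuild the particles whenever a colored mesh block changes.
    public void Rebuild(MeshFilter block)
    {
        var mesh = block.sharedMesh;
        var vertices = mesh.vertices;
        var colors = mesh.colors32;

        var particles = new ParticleSystem.Particle[vertices.Length];
        for (int i = 0; i < vertices.Length; i++)
        {
            particles[i].position = block.transform.TransformPoint(vertices[i]);
            particles[i].startColor = (colors.Length == vertices.Length) ? colors[i] : (Color32)Color.white;
            particles[i].startSize = particleSize;
            particles[i].startLifetime = float.MaxValue;     // keep particles alive;
            particles[i].remainingLifetime = float.MaxValue; // we manage them manually
        }

        var ps = GetComponent<ParticleSystem>();
        var main = ps.main;
        main.simulationSpace = ParticleSystemSimulationSpace.World; // we supply world-space positions
        ps.SetParticles(particles, particles.Length);
    }
}
```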

Exporting for Use Offline

One of our goals was to produce a mesh export that could be brought back into Unity for review and used as a test mesh for behavior (crucial in an app like Babble Rabbit, where the bunny has to find its way around). Having real scans to test against in the editor is incredibly useful; having real scans with real colors would be fun.

While the OBJ format supports storing vertex colors, Unity’s default importer does not appear to (in any case, it was ignoring our colors). This led to a quick modification to support color importing; most of the work was already there in the third-party plugin we were using from Octo-Dev. With that done, we wrote a little script that could import a folder’s worth of OBJ files and then either use their colors directly or as the source for other rendering options, such as particles.

Imported into Unity, using the single-vertex, no-interpolation approach for coloring.
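For reference, the vertex-color flavor of OBJ simply appends RGB values to each vertex line (“v x y z r g b”), which is roughly what the importer change boils down to. A minimal reading sketch (not the Octo-Dev plugin’s actual code):

```
using System.Collections.Generic;
using System.Globalization;
using UnityEngine;

public static class ObjColorReader
{
    // Read positions and (optional) vertex colors from an OBJ file that uses the
    // common "v x y z r g b" extension. Faces, normals, and UVs are ignored here for brevity.
    public static void ReadVertices(string[] objLines, List<Vector3> positions, List<Color> colors)
    {
        foreach (var line in objLines)
        {
            if (!line.StartsWith("v ")) continue;

            var parts = line.Split((char[])null, System.StringSplitOptions.RemoveEmptyEntries);
            float P(int i) => float.Parse(parts[i], CultureInfo.InvariantCulture);

            positions.Add(new Vector3(P(1), P(2), P(3)));

            // 7+ tokens means "v" + xyz + rgb; otherwise fall back to white.
            colors.Add(parts.Length >= 7 ? new Color(P(4), P(5), P(6)) : Color.white);
        }
    }
}
```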

Next Steps

We’ve put a pin in our current work while we ideate over where to go from here. One thing we observed was that in most cases, understanding the export is a very different experience if you made the scan yourself and know the space already. See for yourself — do the Unity videos below look like triangle salad or a real space? What if you had never seen the on device videos?

On device, we would like to experiment with turning the scans into play spaces in which different faces or particles have different meanings, or possibly where the player can modify their color or spatial properties. Our primary interest is not necessarily realism, but rather how to make the virtual mesh meaningful to users. One possibility is to use color/position information to create more radical virtual environments (i.e. the red carpet becomes lava, the blue books turn into ice, etc.). Our intuition is that using color information in this way will produce more evocative results for a player than completely arbitrary transformations.

We’ll forge ahead with these experiments and report on our findings in the future. In the meantime, do you have ideas or feedback for us? We’d love to hear from you.

Not surprisingly, turning off rendering of the main feed creates a very different experience on device.
Fooling around with the exhibits in the Roman wing.

Footnotes:

  1. We were inspired in part by the 6D demo showing the use of photo textures: https://www.youtube.com/watch?v=AwwU14gllS0
