History, Volume I

Alex Hornstein
17 min read · Jul 15, 2018


Whoever said that necessity is the mother of invention wasn’t trying to make a volumetric display. No, panic was the mother of this invention, panic and a deep desire to see a science-fiction fantasy become reality. And this is the story of the invention and of the last two years of designing, prototyping, testing, and occasionally panicking.

Flashback to three years ago: Shawn Frayne and I are running independent invention labs in Hong Kong and Manila. We're working together on machines that make low-cost solar panels for rural, unelectrified areas. The work is fascinating, but it doesn't exactly pay the bills, so we spend a third of our time doing contract invention for a Fortune 500 company, and that pays for everyone's time. And it's not just us — Shawn has four other people working in his lab, and I have two in mine. This arrangement had been working well for a couple of years, and gave us a stable way to work well into the futur…oh, what's that? The Fortune 500 company is having financial problems and just gave us a month's notice that they're cutting off our contract? Cue the panic!

Panic gets a bad rap — most people think of panic as a disorderly, emotional mess. But panic over the future, looking down the gun barrel of onrushing financial disaster, losing an extremely talented engineering team you've worked with for years, and facing the increasingly real possibility that you're going to have to go back home to live in your parents' basement, or worse, get a *job* — that kind of panic is like a laser beam for the mind. Our reaction to panic is to try to invent our way out — maybe a clever new tech will change the _______________

There's a step in making solar panels where you bond a protective layer of glass to the fragile silicon solar cells. The sturdy glass protects the silicon, but it also raises a problem — how do you keep sunlight from bouncing off of the glass and missing the solar cells? The trick is called index-matching, and it turned out to matter for a lot more than solar panels.


Part 1, Volumetric Printing

Three years ago, we started experimenting with a new idea. We thought we could bring scenes from a virtual world into the physical world by slicing up the 3D scene like an egg in an egg slicer, and then printing each slice onto a sheet of thin, clear plastic. By stacking up the sheets of plastic, we could reconstruct the 3D scene in the real world.

Getting these prints to work was tough, and it took us the better part of a year to perfect them. Half of the problem was the software. Slicing like this takes smooth, curved surfaces like a person's face and turns them into discrete slices, and if we didn't compensate for that effect, the prints would look like they were made of Legos.

A big part of making prints look good was learning to blend between different layers. Over months of experimenting with software techniques and prints, we were able to bring some information from the neighboring slices onto a given slice, and to "feather" the edges of the neighboring information by making geometry farther from the slicing plane more transparent, and geometry closer to the slicing plane more opaque. This blending smoothed out the Lego-like appearance, making surfaces look curved and realistic in the prints.
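The core of that feathering idea fits in a few lines. Here's a minimal sketch in Python — not our actual slicing code (which is linked below), and the linear falloff is just illustrative:

    # A minimal sketch of the feathering idea: points within one
    # slice-spacing of a slicing plane contribute to that slice,
    # with opacity falling off linearly with distance.

    def feathered_alpha(point_z, slice_z, slice_spacing):
        """Opacity of a point's contribution to the slice at slice_z."""
        distance = abs(point_z - slice_z)
        if distance >= slice_spacing:
            return 0.0                           # too far from this plane
        return 1.0 - distance / slice_spacing    # closer -> more opaque

    # A point halfway between two slicing planes contributes 50%
    # opacity to each, so curved surfaces blend smoothly.
    print(feathered_alpha(point_z=1.5, slice_z=1.0, slice_spacing=1.0))  # 0.5
    print(feathered_alpha(point_z=1.5, slice_z=2.0, slice_spacing=1.0))  # 0.5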

For anyone interested in the ins and outs of slicing color 3D content, our original slicing software is open-source and free for download here on GitHub.

The other big part of the trick is a technique called index-matching — imagine stacking up twenty overhead transparencies. If you look at them, they don't look clear; they actually only let a tiny bit of light through, and they look milky and mostly opaque. This is because every time a ray of light crosses a boundary between two materials with different indices of refraction, some of that light is reflected back, and the rest is refracted, passing through the boundary at a slightly different angle. Even if you only lose 5% of the light at every boundary, after going through 20 sheets (with 40 boundaries between plastic and air), less than 13% of the light will still pass through. Index matching means that we fill in all the air gaps with a material whose index of refraction matches the plastic's, so as far as the light is concerned, it's just one big block of clear plastic. As luck would have it, one of the clearest plastics available is acrylic, and its index of refraction is very close to that of mineral oil, or as your local pharmacy calls it, baby oil.
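The compounding math behind that milkiness is easy to check, assuming the 5% per-boundary loss above:

    # Why 20 un-matched sheets look milky: light loss compounds at
    # every plastic/air boundary.
    loss_per_boundary = 0.05
    boundaries = 2 * 20            # 20 sheets, each with a front and back face
    transmission = (1 - loss_per_boundary) ** boundaries
    print(f"{transmission:.1%}")   # ~12.9% of the light makes it through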

Part 2, Moving Pictures

The prints were good, but they weren’t a great business, and more importantly, they were static, and that meant that they weren’t the sci-fi hologram of our dreams. After a year of making prints, we decided to scrape together all the cash we could spare and try to make a dynamic volumetric display.

A volumetric display is basically what we think of when we think of a hologram in a sci-fi movie. It’s a true, three-dimensional volume that’s full of pixels, or as us 3D nerds love to call ’em, voxels. The concept is just like how we recreated a 3D scene in the prints by stacking up a bunch of slices, but instead of dots of ink in a volume, we’d need to make points of light in a volume.

Take 1, Swept-Volume Displays

We did the first thing that everyone does when they try to make their first volumetric display: we got a high-speed projector (we used one of TI's LightCrafter development kits, which can show 1-bit color depth images at 1,200 Hz) and we wobbled a plane back and forth in front of the projector. At every position of the plane, we'd project a different image, like a CAT scan in reverse. The general term for this type of display is a swept-volume display, because you're "sweeping" an imaging surface through the display volume, drawing voxels as you go.
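For a flavor of the bookkeeping involved, here's a hedged sketch — our actual sync code differed, and NUM_SLICES is a made-up figure — mapping the oscillating plane's phase to the slice the projector should show at that instant:

    import math

    NUM_SLICES = 60  # hypothetical; set by projector frame rate / sweep rate

    def slice_for_phase(phase):
        """Map the plane's sinusoidal sweep phase (radians) to a slice index."""
        # Plane position swings between 0 (nearest) and 1 (farthest).
        position = (math.sin(phase) + 1.0) / 2.0
        return min(int(position * NUM_SLICES), NUM_SLICES - 1)

    # As the plane sweeps forward and back, each slice gets drawn
    # twice per cycle, once on each pass.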

This worked! Here's our very first volumetric display prototype, showing a smiley face, with no syncing at all between the motor and the projector.

That jackhammer sound is a feature, not a bug. It turns out that it's fairly loud to move a large object back and forth several inches, 20 times a second. But worse than that, the high-speed projectors were super expensive. We firmly believed that if we were going to succeed, we had to make a display that was cheap enough for people to get and tinker with. That meant that we couldn't use a $1,000 projector as a component in our product.

We started looking for the highest-framerate displays we could find. There really weren't any cheap displays above 1,000 frames per second, so we started playing with making them ourselves. We made a quick prototype that replaced the high-res, expensive projector with a low-res, cheap 30x30 pixel LED screen.

This also worked, but all the circuitry for a high refresh-rate LED screen was big and bulky, and hard to make small, cheap, and fast, and we didn't want to re-engineer existing 2D displays if we could avoid it. While pondering this can of technological worms, we had another idea: what if we could take an off-the-shelf high-res, low frame-rate projector and add a mechanism to turn it into a low-res, high frame-rate projector?

iiintroducing lightfolding!

This idea felt really powerful. It meant that we could avoid the expense of a fancy high-speed projector and the engineering slog of making a custom high-speed 2D projector or screen. We called this idea lightfolding, and that idea of “gearing” between 2D resolution and depth resolution became a common theme in our thinking.

The first lightfolding prototype was made of toilet paper tubes, mirrors, and hot glue. We made a kind of rotating periscope that would spin around in a circle, "grab" images that a projector cast along the circumference of the circle, and spit them out at the same spot in the center of the circle. Those images along the circumference were high-speed frames, and this mechanism was the first simple add-on that could turn a regular, el-cheapo projector into the high-speed projector we needed to make a swept-volume display work.

The first design was just a proof of concept, though. To make a working display, we couldn't have the mirrors move past the projected image — that would give the high-speed frames motion blur. We decided instead to make a system with twenty pairs of mirrors that would catch twenty high-speed frames within a single projected image. We called this monster the Stargate.

https://vimeo.com/189591626

In the Stargate display, a regular projector shines an image with twenty slices arranged in a circle. Those twenty projected slices fall onto twenty pairs of mirrors, which redirect each slice towards a unique plane at the center of the circle. A giant subwoofer shakes a piece of vellum back and forth a couple of inches, scattering the light from each projected plane. And finally, there's a spinning shutter that only allows light from one slice through at a time. I wrote some software that controlled the shutter's position and generated an audio signal for the subwoofer to keep the two in phase. This whole idea was exactly as complicated as it sounds, and the machine was a huge beast with a tiny 1"x1"x1" display volume, but it was our first full-color dynamic volumetric display. It was awesome! We looked at this pile of wires and hot glue and occasionally smoldering electronics and we saw the future. And that made us unique.
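To give a flavor of that sync software — this is a toy reconstruction, not the original code, and the names and rates are assumptions — the core job was generating a subwoofer drive signal phase-locked to the measured shutter position:

    import math

    SAMPLE_RATE = 44100   # audio samples per second
    SWEEP_HZ = 20.0       # vellum oscillations (and shutter revolutions) per second

    def subwoofer_samples(shutter_phase_offset, n_samples):
        """Generate audio that keeps the vellum in phase with the shutter.

        shutter_phase_offset: measured phase of the shutter (radians),
        used to shift the drive signal so the two stay locked.
        """
        samples = []
        for i in range(n_samples):
            t = i / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * SWEEP_HZ * t + shutter_phase_offset))
        return samples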

The complexity of the Stargate scared us, though — we figured we would need a couple of years to improve the system to the point where it was reliable enough to turn into a product. Fast, mechanically oscillating planes are really tricky, and electrically synchronizing the phase of two high-speed mechanical oscillators (the display plane and the shutter) is even trickier. If we were going to make this system work, we'd have to get rid of some of the moving parts. We'd need that time to iterate through hardware designs, and to build a software pipeline that could take a world in a virtual 3D environment, slice it up, and draw it in our display.

Shawn with Stargate at 4am in the Hong Kong lab

The prospect of two years of difficult invention sounded awesome, but one thing stuck in our craw — what if we spent the next two years on this idea, but we were off the mark? What if we misread the holographic zeitgeist and made something that didn’t capture the sci-fi dream? Two years is a lot of time to waste. We wanted a way to prove to ourselves as much as anyone that there were people out there who were interested in what we were doing.

It made us take another look at an idea we had discarded earlier: LED cubes. LED cubes are a type of volumetric display with an LED at each voxel. They're great because they're relatively easy to build by hand and have no moving parts, and they'd been popular among makers and electronics hobbyists for years. LED cubes have a fundamental flaw, though — because they have an LED at every voxel, the cost scales with the cube of the resolution. Even if you could get a color LED at the insanely cheap price of $.01, a modest resolution of 100 x 100 x 100 voxels would cost you $10,000 in LEDs alone (the arithmetic is sketched just below)!

However, we thought, maybe we were being too insistent on high resolution. People on the web and at Maker Faires loved LED cubes — they'd spend hundreds of hours building and programming their cubes, and would share them on YouTube or their personal sites. At the time, everyone made LED cubes by taking through-hole LEDs, bending the leads, and soldering them to neighboring LEDs to form a cubic structure. Everybody engineered their cube differently, which was awesome, but it also meant that nobody could share code between cubes, because they all had different hardware. And more than anything else, we were impressed by the hundreds of people we kept finding who were so into LED cubes that they would put in this huge effort to make them, because it was a kind of 3D display that they could have and play with right now.
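That law-of-cubes arithmetic, with purely illustrative prices:

    # LED cost for a cube with an LED at every voxel.
    led_price = 0.01                     # dollars: an insanely optimistic price
    for n in (8, 16, 32, 100):
        print(f"{n}^3 cube: ${n**3 * led_price:,.0f} in LEDs")
    # 8^3: $5    16^3: $41    32^3: $328    100^3: $10,000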

Our lead electrical engineer Samtim picked up a pencil and sketched out a design that would dramatically simplify the construction of an LED cube. Instead of manually soldering the leads of LEDs, we could use traditional automated pick-and-place assembly to put a series of LEDs onto thin, reed-like circuit boards, and then plug a bunch of those reeds into a motherboard to make a 3D array of LEDs. Instead of using "dumb" LEDs and separate driver circuits, we'd use a new, popular variety of "smart" LEDs with integrated drivers, so we could just pass them an RGB color value and they would take care of displaying that color, saving us a huge mess of circuitry and wiring. Robots would do all the soldering, and instead of 200 hours of assembly, it would take half an hour to put a cube together. Samtim opened up Eagle, designed a circuit board, and a week later, we were testing our first LED cube.
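For a sense of how simple smart LEDs make the electronics, here's a sketch of the kind of coordinate-to-chain-index mapping involved — the actual L3D wiring order is an assumption here:

    # Smart LED chains just need each voxel's position converted to an
    # index along the serial data chain. Hypothetical layout: each reed
    # is a vertical strip of LEDs, and reeds are chained across the
    # motherboard.

    SIZE = 8  # 8x8x8 cube

    def led_index(x, y, z):
        """Map a voxel coordinate to its index along the LED chain."""
        reed = y * SIZE + x        # which reed on the motherboard
        return reed * SIZE + z     # which LED up that reed

    frame = [(0, 0, 0)] * SIZE**3            # one RGB tuple per LED
    frame[led_index(3, 4, 7)] = (255, 0, 0)  # light one voxel red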

L3D Cube Assembly

This was my first time working with these smart LEDs, and I was a little bit dumb about it. We had designed an 8x8x8 LED cube, with 512 LEDs that could each draw 60mA at 5V, so I multiplied those numbers out and decided that I needed a power supply that could source an insane 30 amps at 5 volts. Foolishly, I bought one and hooked it up to our first prototype cube. I wrote some code to turn all the LEDs a solid color, programmed it, and…yikes! 150W of LED light is like having a couple of streetlights on your desk on full blast, shining at your face. We all immediately fled to the other side of the room, fashioned little eskimo-style snow goggles out of crap lying around the lab, and squinted at this weird concoction through pinholes. After a moment, I turned the power supply off and on again, and the massive surge in current caused the supply's voltage to oscillate, which promptly blew out the entire cube in one fell swoop. On to prototype number two!

Later, we got a bit smarter and realized that we could current-limit the LEDs without them complaining. We made seven revisions of this design, but they were all very similar to the first cube we made.
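Here's a sketch of what software current-limiting can look like — our exact scheme isn't documented here, so treat the numbers and names as assumptions:

    # Dim an entire frame uniformly so the cube never asks for more
    # current than the supply can deliver.

    MAX_SUPPLY_AMPS = 10.0
    AMPS_PER_FULL_WHITE_LED = 0.060  # ~60 mA for an RGB LED at full white

    def limit_frame(frame):
        """frame: list of (r, g, b) tuples, 0-255 per channel."""
        # Estimate draw: each channel contributes a third of the 60 mA.
        amps = sum(r + g + b for r, g, b in frame) / (3 * 255) * AMPS_PER_FULL_WHITE_LED
        if amps <= MAX_SUPPLY_AMPS:
            return frame
        scale = MAX_SUPPLY_AMPS / amps
        return [(int(r * scale), int(g * scale), int(b * scale)) for r, g, b in frame]

Scaling the whole frame at once keeps the colors' relative balance intact — everything just gets uniformly dimmer when the frame would otherwise exceed the supply.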

Shawn had a smart idea at this point. Rather than pushing for months to perfect the cube, make a video, and launch a Kickstarter, we should just take some photos and put the cube up on Amazon to see if anyone would buy it. At the time, we had exactly one cube. We listed it, and about thirty minutes later, someone bought it! We were stunned — how on earth did they even find us? Surprised and excited, we put our one cube in a box, mailed it to the customer, and quickly started making several more.

Within a couple weeks, we had sold twenty cubes and we were working as hard as we could to keep producing the cubes. We figured that the experiment was complete — there were plenty of people out there who were interested in LED cubes. We pulled the cubes off Amazon and started preparing for a more complete launch on Kickstarter. A couple months later, we launched our Kickstarter campaign, and for our modest company, it was a smash hit.

The L3D cube was great as a hobbyist kit, and we started seeing some really talented people building and sharing some awesome programs for the cube. It was great to see so much enthusiasm over volumetrics, but the 512-voxel L3D cube was still far too low-resolution to feel like the sci-fi dream of holograms. An interesting side effect of a successful launch was that we started making money from selling kits, enough to keep pushing on R&D to make the dream real.

Part 3, More Voxels

Putting a single LED at every voxel is a dead-end for a cheap volumetric display — the law of cubes is a harsh master. We started a hunt for cheaper 2D pixels that we could map into 3D space. The cheapest 2D pixels in the world are on an LCD screen — thanks to mobile devices, we can get millions of pixels in a small screen for tens of dollars. After LCDs, a common source of cheap pixels is pico projectors, small handheld projectors that can project millions of pixels. They’re not as cheap as LCDs, but the projection is very handy when it comes to pulling pixels up into the third dimension.

We started experimenting with an idea that we called Nata de Coco, after the tropical drink with small chunks of translucent coconut flesh floating inside. We made a static Looking Glass print with an array of milky blocks carefully scattered throughout the volume of the print and we pointed a pico projector at the print. Each pixel of the pico projector lit up a certain block, mapping that pixel to a location in 3D space. By turning on a certain pixel in the projector, we could light up that 3D voxel in the print.

This technique was promising — we could use our unique printing technology to precisely control the arrangement and level of scattering of the blocks in our volume. We were quickly able to get to a level of resolution comparable to that of our largest LED cube — 16x16x16 voxels.

We had LEDs on the brain, so our thoughts started moving towards a territory favored by cheapskates and motor junkies: swept-volume persistence of vision displays.

The idea here is that LEDs are really bright and can turn on and off really fast. If you can somehow move an LED around quickly — faster than the eye can perceive — you can paint multiple points in space with a single LED, saving you money. Moving LEDs back and forth is tricky, mechanically speaking, but spinning LEDs around in a circle was just simple enough that it might work. We hatched an idea to take just 256 LEDs — half the number in the L3D cube — and spin them around really quickly, getting more than a hundred times more voxels out of the same LEDs. We called this concept the Hypertube.

Our first design had over 50,000 voxels. We had a number of challenges to get it working: we had to get power to a circuit board spinning more than 20 times a second, and we had to synchronize the LED display with the position of the LEDs. We had to build a whole host of new mechanical and electrical hardware, firmware, and software, and run through a bunch of tests to get things calibrated. But once we did, well, the results were pretty awesome.
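Here's where that voxel count comes from (the 4,000 Hz driver refresh rate is explained a little further below):

    # Voxels in a spinning persistence-of-vision display.
    led_refresh_hz = 4000    # LPD8806 update rate
    volume_hz = 20           # full volumes drawn per second
    radial_slices = led_refresh_hz // volume_hz   # 200 angular positions
    leds = 256
    print(radial_slices * leds)  # 51,200 voxels from just 256 LEDs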

The challenging thing with spinning prototypes is getting video data onto the display while it spins. There isn't an easy way to do this, so in the first prototype, we pre-loaded volumetric data onto an SD card, and an on-board processor read the data and sent it out to the LEDs. Getting live volumetric data from a computer or another source would need a new architecture.

While we were talking about a new prototype, I wrote a simple volumetric simulator to look at a volumetric scene at various voxel resolutions. After playing around with the simulator, we came up with a test that we called the Sheen test: we took a 3D scan of Charlie Sheen, pulled it into the simulator, and showed it to a stranger. If they said, "oh man! That's Charlie Sheen!" then the resolution passed the Sheen test. If not, it wasn't high enough to recognize a person. By trial and error, we found that the minimum voxel resolution to recognize a person's face was around 250,000 voxels.
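The simulator's core operation amounts to voxelizing a scan at a chosen resolution. Here's a minimal sketch of that step (ours had rendering on top of this, and the names are just illustrative):

    # Quantize a 3D scan's points into a voxel grid of a given
    # resolution, so you can eyeball whether Charlie is still
    # recognizable.

    def voxelize(points, resolution):
        """points: (x, y, z) tuples normalized to [0, 1). Returns occupied voxels."""
        occupied = set()
        for x, y, z in points:
            voxel = (int(x * resolution), int(y * resolution), int(z * resolution))
            occupied.add(voxel)
        return occupied

    # Sweep the resolution and re-render; ~63 voxels per side
    # (63^3 ≈ 250,000) is roughly the threshold from the Sheen test.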

The thing keeping our resolution low was the LEDs. In the first prototype, we used LPD8806 LED drivers, which have a 4,000 Hz refresh rate. That's only enough to show 200 radial slices at a 20 Hz volumetric refresh rate, which capped our angular resolution. The LPD8806 driver chips were also physically large, which limited how tightly we could pack the LEDs. After a bunch of searching, we decided that the way to get the fastest, tightest-pitch LEDs was to make our own LED screen using the smallest RGB LEDs in the world and some custom LED drivers. We built the fastest LED screen in the world, with 64x64 LED pixels that could refresh at 12,000 Hz at 1-bit color depth.

To get real-time data to the LEDs, we had to look at high-bandwidth wireless links. A local wifi network has the bandwidth to get the full volumetric data across, if we apply lossless compression to the data. To handle receiving wifi data and sending it to the screen, we put a BeagleBone Black, a small Linux computer with a wifi interface, onto the back of the LED screen. The sheer weight of the components that we had to spin around twenty times a second was starting to terrify us, so we put the whole thing into a shatterproof polycarbonate housing that would theoretically protect us if the proverbial shit hit the not-so-proverbial fan.
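For a sense of scale on that bandwidth problem, here's a back-of-the-envelope — the slice count and color depth are assumptions, just to show magnitudes:

    # Raw voxel color data per second for the spinning screen.
    x, y = 64, 64              # LED screen resolution
    radial_slices = 200        # angular positions per revolution (assumed)
    volume_hz = 20             # volumes per second
    bytes_per_voxel = 3        # 8-bit RGB, before dithering down to 1-bit

    raw_bps = x * y * radial_slices * volume_hz * bytes_per_voxel * 8
    print(f"{raw_bps / 1e6:.0f} Mbit/s")  # ~393 Mbit/s raw

    # That's more than a typical local wifi link sustains, which is
    # why lossless compression of the voxel stream mattered.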

Our first test with this new display looked pretty good, even at 3am on a cold and rainy morning in Zhong Mu Tou, but mannnn it was hard to get working. All the infrastructure in this version of the Hypertube was extremely difficult — we had even more layers of technology: software, DMA-level code on the BeagleBone, and firmware on the LED driving chips, and all of it had to play together at very high speeds, and of course, while spinning around twenty times a second.

The complexity of this design path was its undoing. At the time, we only had two people on the team who could write software or firmware: myself and Samtim. Even with the help of some extremely talented contractors, we found ourselves looking at a task on par with re-engineering a graphics card: we had to break down a 3D scene into raw voxel data, compress it, send it over wifi to our tiny spinning linux computer, decompress it, send the voxel data out to our LED driving microprocessors, and clock that data out to the LEDs. Yikes!

One thing that often shows up in the evolution of species is that slow, complicated species get out-competed by simpler, more adaptable ones. The same thing happens in invention. While we were slogging through engineering the complex dinosaur of the Hypertube, one of the other engineers on the team, Alvin, quietly did a little test that turned out to be a mammal. He called it the Hypercube.

Part
