50,000 LEDs — The QX50 Process

mcdopsa
Nov 29 · 20 min read

When I first became aware of the QX50 project in early May of 2018, the project was in its preliminary concept stage. Infiniti wanted to create an interactive installation to promote their upcoming QX50 SUV, one that would use the full-size vehicle itself as a canvas and be displayed to public audiences at arts festivals and trade shows.

These limited constraints gave us plenty of freedom to experiment with different techniques, technologies and creative solutions, allowing us to think holistically about the experience when making important technical and creative decisions.

As one of the tech-focused members on the team, I tasked myself with understanding the technical operation from end to end, in order to suggest workflow and technology optimizations, bridge communication gaps across a largely interdisciplinary team, and create effective prototypes that satisfied as many objectives as possible, reducing iteration and development time.

I ended up with several big questions to ask myself, each sparking its own follow-up questions:

How are we going to turn the vehicle into a canvas?

  • What display medium (projection mapping, lasers, LED)?
  • What visualizer / software will control the graphics?
  • What computer hardware, cabling and power will be required?

How are people going to interact with the visualization?

  • What cameras or sensors can detect motion at a high resolution in realtime?
  • How many cameras are required to cover the entire car in 360°?
  • How will viewers know what to do / if they are contributing?

Building the First Scale Model

Interactive installations are very complicated sets of systems with many parts, which can be tricky to discuss without good visualizations. In my experience, SCALE is the easiest thing to get wrong, and the most helpful to have visual aids for. This is mostly because scale is a relative measurement, tainted by both our perspective and our perceptions. 'How big is 1m? Well, that depends… how far away is it?' To alleviate this issue, I hopped straight into Unity3D, my favourite way to visualize 3D spaces, to build a scale model of the upcoming installation. Using the ProBuilder toolkit, a free 3D model of an SUV, and a Mixamo avatar as a person for size, I blocked out the basic installation room, fitted with wall-mounted projectors (an assumption) and 6 car-mounted cameras.

Any time you are building a model, prototype, or 'slice' of a final project, it is extremely important to contextualize what particular aspect it is representing, and what it is not. This is not intended to be a 1:1 scale reference for accurate measurement, it does not feature the genuine 3D model of our target SUV, and it is not a demonstration of what the visual output will look like. It is a rough spatial model of the installation, intended to provide the designers with an approximate size comparison between room, person, SUV, and walking space. This allowed the team to make critical design and workflow decisions, like expected sightlines, without having to build anything physical. Build it in 3D, keep everything to scale, put on a VR headset & call it a day. No carpentry required!

Choosing a Display Medium

With any large-scale digital installation, imaginations are bound to run wild with the seemingly infinite creative possibilities. One of the best ways to constrain this firehose of ideas is to break down the objective differences and affordances of the possible display technologies — the proverbial, "Yeah, but how are you going to actually see the thing in person?"

We needed to put digital visualizations (effectively a video stream) onto the entire surface of the SUV, for an active audience 1–2 feet away. Projection mapping initially seemed like a good fit: it allows full surface coverage of the vehicle, various projection mapping tools are available and could be confidently prototyped, and projector brightness should be adequate for a close-up audience.

That said, projection mapping’s requirements make it more challenging to work with creatively, especially when considering occlusion. As illustrated in the scale model, projection mapping at such a close range means that the individual themselves may get in the way of the projection, causing a huge shadow and ruining the effect.

As you can see, the problem gets worse the lower the projector is to the ground, and the closer the viewer is to the SUV. The only way to avoid occlusion entirely is to mount all projectors to the ceiling and live without images on the sides of the SUV. We knew that this was unacceptable, and determined that projection mapping would not suit our needs for this project.

Instead, we just needed a display that is self-illuminating, low latency, and the exact shape of an Infiniti QX50… Right.

With those stipulations in mind, the most promising display technology was Pixel LED Tape. Instead of illuminating the vehicle with a 360° array of ceiling-mounted projectors, we would layer the entire surface with adhesive strips fitted with individually controllable pixel LEDs, effectively turning the QX50 itself into the screen.

Pixel LED Tape:

  • Adhesive tape on one side, electronic PCB on the other
  • Very bright, designed for large signs in daylight (Times Square, Vegas, Y&D)
  • Each pixel contains full RGB, millions of colours
  • Each pixel can be controlled individually
  • Mapping software allows for custom layouts, any size and shape

Confident in our selected display technology, we were met with a new set of technical considerations:

Resolution and Pixel Density

  • How many pixels per inch (ppi) are needed to resolve an image, from 1m away?
  • What is the maximum pixel density, given they are fixed to a strip?
  • How much tape do we need to cover the entire car?
  • What is the total rendering resolution of this ‘screen?’

Power Consumption

  • Every few hundred Pixels, an additional power supply is required. How many total?
  • How much power do all those fully loaded power supplies consume?
  • How much heat do they generate?
  • Where can they safely be stored, transported and operated?

Maths & Measuring in 3D

Pixel LED tape is available in 3 main densities (30 px/m, 60 px/m and 90 px/m), so we were quickly able to calculate layout and coverage options with accurate pixel density and pricing figures. First, we calculated the SUV's surface area by boxing out a scale model in Blender.

All of our purchasing and layout calculations would be based on these measurements, and once committed, it is not a trivial task to rework components or order quantities. It was imperative that we have an accurate, if slightly liberal, estimation of surface area. Underestimating would be disastrous, leaving us without enough LEDs to cover the entire surface or extra rolls to account for any reasonably expected defects, damage and maintenance needs. Irresponsibly overestimating could push the cost beyond a reasonable level. We built our measurements to be ever-so-slightly larger than we expected would be necessary, ensuring we were over-prepared for any major defects, damage or technical surprises.

Our rough surface area measurements of the QX50 came out to ~25,000,000 mm² (about 25 m²), which allowed us to calculate maximum pixel densities, assuming little to no space between the LED strips.

We learned that things can get unreasonably expensive and overwhelming, very quickly. I built a spreadsheet calculator that broke down the different parameters we could adjust (px/m, distance between strips, price) and let us quickly iterate through different combinations.
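To give a sense of the kind of arithmetic that calculator performed, here is a minimal Python sketch under assumed figures; the strip spacing, price per metre, watts per pixel and pixels per power supply below are illustrative placeholders, not our actual numbers.

```python
import math

def led_coverage(surface_area_m2, px_per_m, strip_spacing_m,
                 price_per_m, watts_per_px, px_per_supply):
    """Rough coverage, cost and power estimate for wrapping an area in LED tape."""
    # Total metres of tape, assuming strips run in parallel rows
    # spaced strip_spacing_m apart (centre to centre).
    tape_m = surface_area_m2 / strip_spacing_m
    pixels = int(tape_m * px_per_m)
    cost = tape_m * price_per_m
    power_w = pixels * watts_per_px              # worst case: every pixel at full white
    supplies = math.ceil(pixels / px_per_supply)
    return {"tape_m": round(tape_m, 1), "pixels": pixels,
            "cost": round(cost, 2), "power_w": round(power_w), "supplies": supplies}

# Hypothetical example: ~25 m² of bodywork, 60 px/m tape, strips 30 mm apart,
# $20/m, 0.3 W per pixel at full brightness, 600 pixels per power supply.
print(led_coverage(25.0, 60, 0.03, 20.0, 0.3, 600))
```

Laying out different densities and spacings side by side like this is what made the cost and power tradeoffs obvious at a glance.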

We landed on a mixed-density layout, with tighter coverage on the front and sides, and lighter coverage on the trunk and roof. This tradeoff allowed us to prioritize the most visible portions of the display, packing them with as many pixels as possible, while saving cost, power, and camera coverage on the areas not seen by viewers. Had we used the same number of pixels but kept the density even throughout, there would have been a much larger 'margin' gap between the strips, breaking the illusion of a single wrapped 'screen.'

Choosing the Depth Camera

For another project, 'MASSIVE,' in the same year, Array of Stars used the Xbox Kinect V2 for short-range, full-body tracking — and it performed very well in real time. However, from my early mockups in Unity, I expected to need ~6 independent sensors for full 360° coverage, which presents some challenges when using the Kinect V2.

Notably, the Kinect V2 was originally designed for the Xbox One, and only works ONE AT A TIME on a single computer. This would mean building, powering, and synchronizing 6 computers just to run the cameras. While this is possible, and I've seen impressive demos with 4 Kinects live-streaming shared point cloud data from multiple computers, those setups require very complicated, custom-designed networking systems, and the extra cost of 5 computers would cut into the budget for creative effort and installation hardware.

Instead, we wanted to power everything from a single computer, ensuring all of the depth camera processing, graphics simulation, and LED mapping could be controlled and tested from a single location. Though there were a few other non-Kinect V2 depth cameras on the market, the brand new Intel RealSense D415 was the clear choice. Shaped like the iPhone's notch, the RealSense D415 is specifically designed for multi-camera arrangements, is less than ¼ the size of the Kinect V2, can be powered and controlled with a single USB cable, and processes depth at 720p 30fps.

Strongly believing in this sensor as the best choice, we ordered a handful of units for early testing. Most important was testing the performance of multiple cameras running simultaneously on a single computer. In Intel's own documentation for these devices, they recommend using a unique USB 3.0 controller per sensor for best performance — no multi-port hub dongles. Most computers do not have very many USB controllers, as they are often shared between 2 or more USB ports. To alleviate this issue altogether, we relied on PCIe USB expansion cards, with 1 controller per USB port. This way we would never worry about sharing controllers between sensors, and if we needed extra ports, we could easily add another expansion card.
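As a reference for the multi-camera setup, here is a minimal pyrealsense2 sketch that enumerates every connected camera and starts one depth pipeline per serial number. This is a generic example, not our production code, and assumes the cameras are already on their own USB 3.0 controllers.

```python
import pyrealsense2 as rs

ctx = rs.context()
pipelines = []

# One pipeline per connected camera, keyed by serial number.
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    cfg = rs.config()
    cfg.enable_device(serial)                 # bind this pipeline to one camera
    cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    pipe = rs.pipeline(ctx)
    pipe.start(cfg)
    pipelines.append((serial, pipe))

# Poll a depth frame from each camera to confirm it is streaming.
for serial, pipe in pipelines:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    print(serial, depth.get_width(), depth.get_height())
```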

Each sensor streams a video depth map, identifying the distance from the camera of each pixel in the frame. Points closer to the sensor are represented in Blue, further objects in Red, with Yellow/Green in between. Unlike the Kinect V2 and some other depth cameras, the RealSense D415 does not include any built-in body / skeleton tracking, nor does it have 'intelligent' awareness of objects or motions in the frame. It is a much simpler device, intended to be built into a larger system of components, giving you RAW depth output as fast as it can. Though this may seem limiting, and in some circumstances it could be, it was exactly the kind of tool we were looking for. We did not care to perform skeleton tracking, gestural recognition or any other computationally expensive processing on top of our depth data, because the circumstances of the installation did not require it. We knew where everything in the installation was expected to be placed, and could easily adjust the sensors to ONLY detect within our specified range. This allowed us to employ a very simple particle displacement method that powers the entire visualization.
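The range gating described above can be sketched in a few lines: read a depth frame, convert it to metres, and keep only what falls inside an assumed interaction range, producing a normalized 0–1 'presence' mask. The near and far limits here are illustrative, not our actual calibration.

```python
import numpy as np
import pyrealsense2 as rs

NEAR_M, FAR_M = 0.3, 3.0   # assumed interaction range; tune per installation

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
profile = pipe.start(cfg)
scale = profile.get_device().first_depth_sensor().get_depth_scale()  # metres per raw unit

frames = pipe.wait_for_frames()
depth_m = np.asanyarray(frames.get_depth_frame().get_data()) * scale

# 0 outside the range; 1 at the near limit, fading to 0 at the far limit.
in_range = (depth_m > NEAR_M) & (depth_m < FAR_M)
mask = np.where(in_range, (FAR_M - depth_m) / (FAR_M - NEAR_M), 0.0)
```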

How The Visualization Works

Based on the brief from the client, our visualization had to support 3 different ‘modes’: Ambient, Calm and Fast. Ambient (Blue), is what is shown when nobody is interacting with the installation. Like a body of water it slowly ripples, adding a subtle element of movement to the otherwise static installation. This mode is essential because it means the installation will always be beautiful and inviting to onlookers from afar. Calm (Green) is engaged whenever someone is close (0–3m) to the car. This turns the display into a ‘Mirror’ of sorts, representing the person’s body shape in realtime on the car. When you move, so does your ‘Reflection,’ head, hands, feet and all. Fast (Red) is the closest to a hidden feature / easter egg, only showing when someone moves very quickly in range of the sensors. Most often observed with fast moving arms, or someone running along the installation, it was exciting to see people try to engage Fast mode once they learned of its existence.
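A rough sketch of how mode selection like this can be driven straight from the depth data: compare the nearest detected distance against a proximity threshold, and the frame-to-frame change against a speed threshold. The threshold values are assumptions for illustration, not our tuned settings.

```python
import numpy as np

CALM_RANGE_M = 3.0      # assumed: someone within 3 m engages Calm
FAST_DELTA_M = 0.15     # assumed: per-frame depth change that counts as "fast"

def pick_mode(depth_m, prev_depth_m):
    """depth_m / prev_depth_m: HxW arrays of distances in metres (0 = no reading)."""
    valid = depth_m > 0
    if not valid.any() or depth_m[valid].min() > CALM_RANGE_M:
        return "ambient"                       # nobody in range: slow blue ripple
    motion = np.abs(depth_m - prev_depth_m)[valid & (prev_depth_m > 0)]
    if motion.size and motion.max() > FAST_DELTA_M:
        return "fast"                          # rapid movement: red burst
    return "calm"                              # someone nearby: green mirror

prev = np.zeros((720, 1280))
frame = np.full((720, 1280), 2.0)              # fake frame: a person at 2 m
print(pick_mode(frame, prev))
```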


The entire visualization is computed as a large grid of particles in Unity, in a dynamic system built by one of our digital artists, Gauthier. You can think of this grid like a big sheet resting on top of the car, where the left side of the grid sits on the hood of the car, the bottom of the grid becomes the driver's side, and so on.

Behind the grid, depth maps from the RealSense cameras feed directly into a 'Displacement Map.' If nothing is being detected by the sensor, it reads a pure black image and does not affect the particles at all. When something is detected by the sensor, its figure reads as a white shape, pushing all particles forward and changing their colour to green.

The closer something is to the car / cameras, the more intensified the colour change becomes, giving a sense of depth.

My favourite analogy to describe this effect is a Pin Art toy — where you imagine the pins as particles, and the hand pushing the pins as the depth map from the cameras.
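The real effect lives in Unity, but the underlying idea can be illustrated with a few lines of numpy: treat the particle grid as an array, push each particle forward by the mask value, and blend its colour from ambient blue toward green in proportion to how hard it is pushed. This is a conceptual sketch only, not the production particle system.

```python
import numpy as np

GRID_H, GRID_W = 90, 160                      # particle grid "resolution"
AMBIENT = np.array([0.0, 0.3, 1.0])           # resting blue
ACTIVE  = np.array([0.0, 1.0, 0.3])           # displaced green

def displace(mask, max_offset=0.5):
    """mask: HxW array in [0, 1] from the depth cameras (the 'hand' on the pins)."""
    offset = mask * max_offset                                 # how far each 'pin' is pushed
    colour = AMBIENT + (ACTIVE - AMBIENT) * mask[..., None]    # blue -> green blend
    return offset, colour

# Example: a blank frame with a bright blob where a person stands.
mask = np.zeros((GRID_H, GRID_W))
mask[30:60, 70:90] = 0.8
offset, colour = displace(mask)
print(offset.max(), colour[45, 80])
```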

From there, various parameters were adjusted to tweak the distortion properties, size of particles, speed of movement, etc. These tools were extremely handy, allowing us to quickly iterate through various looks, and respond to creative feedback with precision. These tools also made it a breeze to ‘remix’ the visualizer for another event in 2019, more on that another time.

The Software Pipeline

Unity is doing most of the heavy lifting in this installation, running the particle grid, camera inputs, and video outputs in realtime. This ran uncompiled, in-editor in PLAY mode, so as to stay flexible for any unexpected last-minute changes from the client, or unexpected environmental circumstances. If for some reason the client wanted to change the green to pink on the night of the installation, I wanted to be able to do that without having to recompile the entire project.

From Unity, a single video feed is produced as output, which our LED mapping software, Resolume Arena 6, takes as input. Spout, a clever tool, bridges this gap, allowing us to copy the video feed from Unity directly into Resolume entirely in software — no old-school daisy-chaining of capture hardware!

Resolume was responsible for all things LEDs, organizing all of the strips into ‘Universes,’ aligning them to the video signal, and converting the video into ArtNet colour values for each individual pixel. This is by far the most complicated component of the software pipeline due to the sheer scale and physical complexity of the display. Every digital component in Resolume has a physical counterpart that has to be matched both in location, and signal routing.
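Under the hood, converting video into ArtNet values means packing each universe's pixels (at most 170 RGB pixels, since 512 DMX channels ÷ 3 channels per pixel ≈ 170) into ArtDMX packets sent over UDP. Resolume handles all of this for you, but a hand-rolled sketch shows roughly what goes over the wire; the controller IP below is a placeholder.

```python
import socket

ARTNET_PORT = 6454

def artdmx_packet(universe, rgb_pixels, sequence=0):
    """Build a single ArtDMX packet: up to 170 (r, g, b) tuples per universe."""
    dmx = bytes(c for px in rgb_pixels for c in px)[:512]
    pkt = bytearray(b"Art-Net\x00")                 # packet ID
    pkt += (0x5000).to_bytes(2, "little")           # OpCode: ArtDMX
    pkt += (14).to_bytes(2, "big")                  # protocol version
    pkt += bytes([sequence, 0])                     # sequence, physical port
    pkt += universe.to_bytes(2, "little")           # 15-bit universe address
    pkt += len(dmx).to_bytes(2, "big")              # data length
    pkt += dmx
    return bytes(pkt)

# Hypothetical: turn the first 170 pixels of universe 0 solid green.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(artdmx_packet(0, [(0, 255, 0)] * 170), ("192.168.1.50", ARTNET_PORT))
```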

To function as a digital mirror, the visualization and cameras must be aligned correctly. Approaching the car, one should see their bodyform right in front of them, not off on another side. To achieve this, we first needed a comprehensive understanding of:

  • Where each individual LED is on the car body
  • Where each LED is in relation to the others
  • Which signal router / IP address each LED is associated with
  • In what order the LEDs are arranged within each IP

During the physical installation, our electrical hardware installers Christopher Desousa & Andrew Oliveira (HUGE shoutout to these guys!) kept extensive, detailed notes on the mapping and routing of each individual LED. Even with this guidance, retracing and aligning was not a simple task. Given a much simpler shape, like a flat, rectangular, billboard-style display, the LED mapping could be routed in a neat, orderly fashion, as each strip would be exactly the same length, with the same number of LEDs and a consistent layout. That would make rerouting a breeze, allowing the digital signal to predictably line up 1:1 with its real-world counterpart with no real distortion. We did not have the luxury of a simple OR flat shape.

Mapping the entire surface area of a rounded, ergonomic 3D object means almost nothing is grid-like or straight. Even things you would expect to be consistent and flat, like the windows, are actually not flat at all, and have plenty of variance from edge to edge. Because of this, our organization into Universes would never be perfectly efficient: there would always be hoops to jump through when matching, some Universes filled to the brim with LEDs, and others almost completely empty. Perhaps given infinite time, the electrical team could have arranged things into a more efficient, easier-to-match pattern, but the usefulness of that time spent would be questionable. It was a real pain in the ass, but thankfully something we only had to complete once.

Once physically installed, a digital duplicate of the entire layout is created, routed and aligned within Resolume. Physically, each LED is attached to a strip, strips are attached to IP controllers, and those controllers are connected to the PC. Digitally, pixels are attached to Fixtures, arranged into Universes, and Universes into Lumaverses (one per IP). Each strip (fixture) of unique length and orientation must be created and labelled digitally for use. For example, some LED strips contained all 170 pixels, but most were cut down to fit their particular arrangement. Making fixtures with any number of pixels is quite possible, but in addition, you must know the orientation and direction of that particular strip. Is the strip seen horizontally or vertically from the viewer's perspective? Additionally, where do the pixels 'start' and 'end'? By default all LED strips are labelled in the same order, but often a strip would be ordered left to right, and the strip right below it right to left. This made it more difficult to re-use fixtures in multiple places, meaning almost every strip variation had to be identified, created and labelled ahead of time, or created while mapping. Once prepared, the fixtures are ready to be mapped!
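To get a feel for the bookkeeping, here is a small sketch that packs strips of varying pixel counts into 170-pixel universes, giving each fixture a universe number and a DMX start channel. Our actual routing followed the installers' notes rather than a tidy greedy algorithm, so treat this purely as an illustration.

```python
PIXELS_PER_UNIVERSE = 170        # 512 DMX channels / 3 channels per RGB pixel

def assign_fixtures(strip_lengths):
    """Greedy assignment that never splits a strip across universes."""
    fixtures, universe, used = [], 0, 0
    for i, length in enumerate(strip_lengths):
        if used + length > PIXELS_PER_UNIVERSE:   # strip won't fit, start a new universe
            universe, used = universe + 1, 0
        fixtures.append({"strip": i, "universe": universe,
                         "start_channel": used * 3 + 1, "pixels": length})
        used += length
    return fixtures

# Hypothetical strip lengths cut to fit the bodywork.
for f in assign_fixtures([170, 120, 45, 90, 60, 170, 33]):
    print(f)
```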

Mapping is the process that determines what image / colour will be displayed on each LED. To control this, an ‘Output’ feed is chosen within Resolume. There are all kinds of cool effects, distortions and tools that can be used to manipulate the feed, as you might expect in a program designed to run large stage performances and concerts. Thankfully for our uses, we didn’t need to dive too deep into these tools (aside from a touch of scaling, and later a colour layer for a tint), and left most of Resolume’s settings untouched. Thus Resolume’s ‘Output’ is almost exactly the same as Unity’s output, keeping things consistent and cohesive between applications.

On top of the Output feed is where the Fixtures are organized and aligned. One by one, starting with a single IP controller, fixtures are added to Universes within Lumaverses and addressed to the right location. While you're mapping, you can see the LED output in realtime, making it visible whether you are sending data to the right place, in the right format, or at all. Visible doesn't necessarily mean clear; once a dozen strips are mapped, a single LED changing colour is much more difficult to discern. That said, with careful attention to detail, accurate mapping notes and a pair of sunglasses (LEDs are hella bright), you can align each universe relative to itself. Once the universes are aligned, they need to be arranged in a cohesive manner, such that they all appear to be connected as a single large screen, responding to input from the cameras in the correct location.

Constructing the Prototypes

Having worked through the technical requirements of the installation, and outlined the workflow stated above — it was clear that this was an incredibly complicated, challenging task, one that required new learning and ramp-up on various technologies and software, and where unforeseen complications were to be expected around every corner. We knew we needed to make a prototype of every aspect of the installation, in order to test our confidence and understanding of both the technology and, creatively, the viewer experience.

The first working prototypes we created were completely within Unity, used to simulate the 'rough final look' of the car with LEDs. I stripped down the 3D model sent from the client to its essential components, and layered them with an SUV-shaped textured mesh from Gauthier. He built a great 'simulator view' allowing us to click and rotate the car from any side, while taking input from a realtime RealSense camera, or from placeholder depth maps. This gave us a great visual of what to expect, a guide on how to adjust particle parameters, and a very helpful debugging tool for the final installation.

As great and useful as the simulator view was, it lacked a key component of the true installation: scale. Even if we had blown up the image on a giant projected screen, the output would still be 2D and would not allow our creative team to properly stand amongst the installation. As we tested earlier in the year with MASSIVE, creating an immersive full-scale model of the final installation, to be used during production by the creative team, can be hugely beneficial in catching unforeseen issues and demonstrating the actual experienced impact of the installation.

Using a VR headset, combined with a RealSense camera (the same model used on launch day), I put together a fully immersive viewer for the creative team to explore at scale. For the first time, we were able to preview a full-size version of the installation, walk completely around the QX50, and even see our real bodies on its side in realtime. This was as close as it was going to get to the real thing before launch.

With the help of those software prototypes, the look continued to be iterated on and tinkered with up until launch. The physical prototype is what tied all of the hardware and software together, in a 'vertical slice' of the final installation. Using a car door sourced from a local junkyard, our electrical partners wired up a sample version of the LED strips and an ArtNet controller. With this, my computer and two RealSense cameras, I was finally able to create an end-to-end test of the tools and software we expected to use for the final installation. Along the way I learned to navigate the wonderful world of DMX communications, ArtNet over Ethernet, and the Resolume LED mapping described above. With some networking know-how from our web guru Ben, and far too long spent staring at a grid of super bright LEDs, I got the complete workflow together. It was a magical feeling to see my bodyform rendered in LIGHT, almost like the opposite of a shadow, and a major relief to see the technical workflow operating as expected.

As you can see from the ‘Point of view’ video — the total brightness of the LED array is EXTREME. Notice how dramatically the reflected colour of the ceiling changes, and how bright it is in comparison to the daylight in the far right side of the frame.

Putting it together

Assembling the car for the final installation was an incredible challenge. Not only was it extremely complicated, with many intricate details that needed to be perfectly aligned, but we were also under a major time crunch due to some early delays. For a week straight, including weekends, AOS Creative Director Cole Sullivan and I spent every waking hour with this car, finding every strip that had been installed, routing it to the right location and aligning it digitally to our model. In real time, it felt like watching grass grow, seeing a few lines of pixels added every hour. Slowly but surely, we got to every corner, found every strange or mismatched alignment, and mapped each section to spill cohesively from one to the next.

Looks great! There was only one problem left: the car couldn't travel with the depth cameras attached. Believe it or not, the QX50 with all the LEDs installed was actually still drivable, and rolled itself onto the installation riser. It was nowhere near street legal, and likely wouldn't survive more than a light mist outdoors, so it was never driven on the roads. It also meant that there was no way to safely mount the depth cameras underneath the car before parking it in its final destination. They would have to be mounted, then aligned on-site. The night before the exhibition.

After parking, a plastic tent was built over the installation, shrouding it from the rainfall expected overnight. In case the situation wasn't precarious enough, I now had to remove sections of the protective plastic barrier in order to accurately test and align the cameras. This was critical because the 'walls' of the tent were too close to the cameras, preventing me from aligning them at 'viewer distance.' While potentially dangerous to the hardware, I had no other choice, and would otherwise have been guessing. To operate as safely as I could, I only removed one plastic section at a time, reinstalling it before tearing down the next one. Was this a pain in the ass that made calibration much harder and more time consuming? Yes, absolutely. But the alternative risked leaving the entire installation vulnerable to the weather for an extended period of time, and I wasn't comfortable taking that risk.

Additionally, because of the weather, potential static from the tent, and the complete lack of other people around, I was not able to power on the LEDs overnight. That meant referencing only my portable monitor and the Unity-based simulator to align as best as I could. I was still able to set up the Unity scene, arrange and physically mount all of the cameras, and perform basic technical testing to make sure everything was running as expected. Knowing I couldn't do any more until the rest of the team arrived, I headed home for some rest at 5am.

The next day, I had to really hustle. Running mostly on adrenaline and Monster Energy, I headed back to the installation to finalize camera alignment. Turning things on for the first time, the alignment I had set blind wasn't terrible, but it definitely needed some love. I spent the next few hours getting the cameras as close to perfect as I could stand. Now able to enlist the help of others, I could position people at 'viewer distance' right in between two cameras, exactly where any potential misalignment would occur. This was hugely helpful, allowing me to closely match the cameras' views so that crossing over from one to the next would be imperceptible to the viewer. You could argue this is an impossible goal, and that there will always be some distortion and imperfection (and you'd be right), but the goal was to make viewers as visually oblivious as possible to any errors.

Finally, I got the call to wrap it up. My inner perfectionist clawing for another hour or two — it was time to put the pencils down. All at once it became real. This extremely cool, secret project I’ve spent the last several months on was moments away from its public debut, and thousands of people would be experiencing it for the first time.

Working on this project was an extremely educational, humbling and empowering experience. Starting from basic knowledge of scene building in Unity, to manually mapping 50,000 LEDs on a 3D object, I learned more in these few months than I could have ever hoped to imagine. Building a scale 3D model with a keyboard and mouse, viewing it in virtual reality, then making it physical with light and cameras for the first time is an unbelievably surreal experience. It showed me the true power of using 3D and realtime production workflows to visualize better, iterate faster, and create more meaningful outcomes. It was genuinely heartwarming to see dazzled gazes, blown minds and excited waving in front of the installation. I love the idea that experiences can be built; that in the future we will interact with things and installations the way we interact with each other; that one can create something beautiful that only gets more beautiful when someone else engages with it; and that this wild installation, compounding some of the latest in interactive and spatial technology, is barely scratching the surface.

The future is friendly.
