

Going beyond the screen
Experiments with tvOS, WebGL, Fadecandy, and projection mapping
We’re always on the hunt for capable devices in small form factors; they’re great for installations where space is tight but the graphics still need to be beefy. The new Apple TV and the Siri Remote seemed like ideal hardware for exploring some new interactions.
We’ve been using WebGL a lot, and we were curious to see if WebGL content would run on the new Apple TV. We also wanted to experiment with custom interaction gestures on the new Siri Remote.
Spoiler — WebGL didn’t work out as well as we hoped, but we stumbled upon some alternative use cases that got us thinking about how to extend the user experience beyond the screen.
First experiment — interacting with WebGL via Siri Remote
Like any good experiment, we let our curiosity drive us. Interacting with web tech like WebGL isn’t common for tvOS and there didn’t appear to be many folks in the community writing about it. So we spent some time trying to determine if this was even viable.
tvOS and WebGL
The first step was to get WebGL running in a native tvOS app. Right off the bat, this felt a bit “dirty”, but in the spirit of rapid prototyping we forged on. tvOS doesn’t offer a built-in web browser, which is understandable considering Apple makes several other devices that are way more appropriate for browsing the web than a TV. So we had to dig a little deeper.
We discovered there is a UIWebView control that can be used in a tvOS app, but the class has been marked as prohibited, meaning the app would likely fail the approval process for App Store submission. Since our primary use case would be geared towards a one-off installation, we decided to try it anyway.


A short time later we had successfully repurposed some WebGL content from our homepage in a native tvOS app. This helped us answer a few key questions:
- Does tvOS support WebGL and Three.js?
- Can it render a scene at 60fps without noticeable frame drops?
- Do pixel shaders have a big impact on performance?
Answers: the UIWebView control does support WebGL and Three.js. Performance was okay: not as good as a MacBook Pro, but similar to an iPhone or iPad, which makes sense since they share similar hardware. Pixel shaders caused a pretty big drop in performance, especially when scenes contained a lot of motion.
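For context, the kind of test page we loaded into the webview boils down to a fullscreen Three.js quad driven by a custom fragment shader. The sketch below is illustrative rather than our actual homepage scene, and it assumes three.js is already loaded on the page:

```js
// Minimal Three.js performance probe: a fullscreen quad driven by a custom pixel shader.
// Assumes three.js has been loaded globally (e.g. via a <script> tag) before this runs.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const uniforms = {
  time: { value: 0 },
  resolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
};

const material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: `
    void main() { gl_Position = vec4(position, 1.0); }
  `,
  fragmentShader: `
    uniform float time;
    uniform vec2 resolution;
    void main() {
      vec2 uv = gl_FragCoord.xy / resolution;
      // A cheap animated pattern; heavier per-pixel math here is what hurts the frame rate most
      gl_FragColor = vec4(0.5 + 0.5 * sin(time + uv.x * 10.0), uv.y, 0.8, 1.0);
    }
  `
});

const quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
quad.frustumCulled = false;            // the vertex shader places it in clip space directly
scene.add(quad);

function animate(t) {
  uniforms.time.value = t * 0.001;     // convert ms to seconds
  renderer.render(scene, camera);
  requestAnimationFrame(animate);      // an easy place to eyeball whether the scene holds 60fps
}
requestAnimationFrame(animate);
```

Making the fragment shader more expensive is a quick way to reproduce the kind of frame drops we saw on the Apple TV.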
Interacting with the Siri Remote
The next thing we explored was analyzing the data from the Siri Remote to create custom gestures to control our scene. Getting access to the raw remote data was straightforward. However, controlling the UIWebView directly with the remote was tricky, and in the end it didn’t give us the type of control we were looking for.
So we resorted to WebSockets to pass information between the remote and the web page. Setting up the socket connection was easy: just a couple of lines of code with Socket.io.
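For reference, the whole relay amounts to the sketch below. The port, the ‘remote-data’ event name, and the addresses are our own stand-ins for this sketch; the native app pushes the remote’s data to this server, and the page subscribes to it.

```js
// server.js: a tiny Socket.io relay between the tvOS app and the web page.
// The port and the 'remote-data' event name are arbitrary choices for this sketch.
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  // Whatever one client (the native app) sends gets re-broadcast to every other client (the page)
  socket.on('remote-data', (data) => socket.broadcast.emit('remote-data', data));
});
```

On the page side, subscribing is just as short:

```js
// Inside the page running in the webview (socket.io's client script is already included on the page)
const socket = io('http://192.168.1.20:3000');            // address of the Mac running the relay (illustrative)
socket.on('remote-data', (data) => updateScene(data));    // updateScene is the page's own handler
```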

The data coming from the remote was very smooth and high quality. With minimal massaging we were able to control the WebGL scene with extreme precision and no noticeable lag.
We leveraged the gyroscope, acceleration, and trackpad data to detect a few custom gestures. Handling the data was a similar process to working with the HTML5 Device Orientation APIs. One unique gesture we created was a “fishing” gesture: clicking and holding the trackpad button while “pulling” the remote towards or away from yourself zooms the scene in and out. It took a little tweaking, but in the end it felt very intuitive and easy to use. The video below shows the final output of our first experiment.
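On the page side, the fishing gesture boiled down to integrating the remote’s acceleration into the camera’s distance while the trackpad button is held. A rough sketch, using illustrative names for the socket payload and the page’s existing Three.js camera:

```js
// "Fishing" zoom: click and hold the trackpad, then pull the remote towards or away from you.
// data.buttonDown and data.acceleration are illustrative names for whatever the native app emits.
let distance = 5;                                  // current camera distance from the scene

socket.on('remote-data', (data) => {
  if (!data.buttonDown) return;                    // only zoom while the trackpad is clicked and held

  // Integrate acceleration along the remote's z axis into the zoom level.
  // The sign and the 0.1 gain are the bits that need hand-tuning until the gesture feels natural.
  distance += data.acceleration.z * 0.1;
  distance = Math.min(Math.max(distance, 2), 20);  // clamp so you can't fly through (or away from) the scene
  camera.position.z = distance;
});
```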
Our Learnings
As we ended our first round of experiments, we walked away with a few realizations:
- UIWebView on tvOS isn’t the best option for high-performance rendering. It was slightly slower than we hoped and the render quality didn’t “feel native”.
- Controlling a web page in tvOS through WebSockets is dirty. We realized about halfway through that it would have been better to use native frameworks like SceneKit rather than force the tech we felt more comfortable with.
- WebSockets could be used for off-screen interactions. They weren’t a great option for controlling a webview, but it became apparent they could enable some pretty interesting off-screen scenarios…which led us to our next experiment.
Second experiment — extending the experience off screen
Originally, we didn’t set out to explore off-screen scenarios. It was a byproduct of the WebSocket solution that gave us the idea of exploring how additional hardware could create a more immersive experience.
Going native
One of the first things we decided to do was rewrite the app with native frameworks. The first experiment was written in JavaScript, WebGL, and Three.js. Porting the code to Swift and SceneKit was pretty easy; SceneKit and Three.js have a lot of similarities, which made it easy to connect the dots. Not surprisingly, we saw an improvement in performance and overall render quality when we finished.
Integrating custom hardware
Ambient lighting was the first thing that came to mind when we were thinking of ways to extend the experience beyond the screen. Luckily, we had a couple of Fadecandy controllers and NeoPixel lights hanging around the office from a previous experiment. So we quickly pulled together some components for a custom controller board.


The Fadecandy controller comes in a convenient Node.js flavor that worked perfectly with the WebSockets solution we put together for the first experiment. The only tricky part was figuring out how to map the data from the Siri Remote into a format that played nicely with the Fadecandy.
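Under the hood, Fadecandy’s server (fcserver) listens for Open Pixel Control messages over TCP, so “a format that played nicely” mostly meant packing RGB values into OPC frames. A stripped-down Node.js client looks roughly like this; the host, the pixel count, and fcserver’s default port of 7890 are assumptions for this sketch:

```js
// opc.js: a bare-bones Open Pixel Control client for Fadecandy's fcserver.
// Assumes fcserver is running locally on its default OPC port (7890) and that
// we're driving 64 NeoPixels; adjust HOST and NUM_PIXELS for the real setup.
const net = require('net');

const HOST = '127.0.0.1';
const PORT = 7890;
const NUM_PIXELS = 64;

const socket = net.connect(PORT, HOST);
socket.setNoDelay(true);                 // push frames out immediately for low latency

// Send one "set pixel colors" frame: [channel, command, length hi, length lo, r, g, b, ...]
function writeFrame(pixels) {
  const bytes = NUM_PIXELS * 3;
  const msg = Buffer.alloc(4 + bytes);
  msg[0] = 0;                            // channel 0
  msg[1] = 0;                            // command 0 = set pixel colors
  msg.writeUInt16BE(bytes, 2);           // payload length in bytes
  for (let i = 0; i < NUM_PIXELS; i++) {
    const [r, g, b] = pixels[i] || [0, 0, 0];
    msg[4 + i * 3] = r;
    msg[5 + i * 3] = g;
    msg[6 + i * 3] = b;
  }
  socket.write(msg);
}

// Convenience helper: light every pixel with the same color
function setAll(r, g, b) {
  writeFrame(Array.from({ length: NUM_PIXELS }, () => [r, g, b]));
}

module.exports = { writeFrame, setAll };
```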

Next, we lined a few strips of NeoPixels around the perimeter of the TV and mapped the intensity of the lights to the acceleration values from the remote. The first time we fired it up, we were all surprised at how much some basic lighting added to the experience.
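The mapping itself is nearly a one-liner: take the magnitude of the remote’s acceleration and scale it into a brightness value. A sketch that reuses the OPC helper above and the same illustrative socket payload:

```js
// led-bridge.js: listens to the same Socket.io relay as the web page and drives the NeoPixels.
// The relay URL and the shape of the 'remote-data' payload are illustrative, as before.
const connect = require('socket.io-client');
const opc = require('./opc');                 // the minimal OPC client sketched earlier

const socket = connect('http://localhost:3000');

socket.on('remote-data', (data) => {
  const a = data.acceleration;
  const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z); // overall "energy" of the motion
  const brightness = Math.round(Math.min(magnitude, 1) * 255);    // clamp to 0..1 g, scale to 0..255
  opc.setAll(brightness, brightness, brightness);                 // a white glow that follows the motion
});
```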
Adding projection mapping to the mix
At this point we were pretty happy with how the experiment was evolving. The only obvious thing missing was projection mapping! We were inspired by Microsoft’s IllumiRoom project from a couple of years ago and decided to augment our on-screen visuals with some projected graphics.
Continuing the scene into the viewer’s peripheral vision really took the experiment up a notch! Setting it up was easy: we connected a separate laptop to a projector and ran a full-screen web browser that also communicated via WebSockets.
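The projector page is just one more Socket.io client running full screen. The actual projected graphics extended the on-screen scene, but the wiring is as simple as the sketch below (the canvas drawing and payload field names are illustrative):

```js
// Projector page: a full-screen canvas that reacts to the same events as the TV and the lights.
// Assumes the page contains a single <canvas> element and includes socket.io's client script.
const socket = io('http://192.168.1.20:3000');     // same relay address as before (illustrative)
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;

socket.on('remote-data', (data) => {
  const a = data.acceleration;
  const energy = Math.min(Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z), 1);

  // Fade the peripheral glow in and out with the remote's motion
  ctx.fillStyle = 'rgba(0, 0, 0, 0.2)';            // gentle decay of the previous frame
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = `rgba(60, 120, 255, ${0.1 + energy * 0.9})`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
});
```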
Overall, we are really excited about how this experiment came together. I’m confident the hardware and software solution that came out of this could be leveraged in a variety of installation scenarios.
Wrap up
The latest round of hardware and software frameworks has made stringing together these kinds of prototypes fast and relatively painless. Just a few years ago this kind of project would have required a serious time investment and a lot of disparate skill sets. Thanks to projects like Socket.io and Fadecandy, prototyping with hardware and software keeps getting more accessible.
This experiment certainly sparked our imaginations and left us wondering how alternative hardware integrations and interactions could be used in conjunction with a big screen. I personally would love to see more engaging interactions like this in, say, a museum or gallery setting, where the exhibits tend to offer limited or no interaction.
What’s next?
I often get asked, “So, what’s the point? What are you going to do with this?” And the answer is, we don’t know yet. We’re perpetually exploring how new technologies and design can be used to engage our customers in fresh ways. These experiments result in “building blocks” that give us the domain knowledge to execute client projects faster, with less time spent on the “heavy lifting” and more time spent crafting the unique details.
We hope you enjoyed hearing about our process. If you’re interested in working together on these types of experiences, send us a note at [email protected], give us a shout on Twitter at @truthlabschi, or visit truthlabs.com.