Making an interactive artwork that you paint with your voice
How we designed and developed Miraj, a generative art app for the Apple TV
Last month, we released Miraj, our crazy contribution to the future of home entertainment. Our Apple TV app turns anything you say into a high-definition moving artwork. In this post, we’ll talk about our process of designing and developing Miraj, our new favorite studio side project.
The idea for Miraj grew out of our excitement about developing for Apple’s tvOS. The new platform for creating and distributing Apple TV apps was opened up to developers at the end of last year. We saw an opportunity to use the tvOS platform to bring our type of generative, interactive, and playful art into people’s homes.
Our early ideas converged around the idea of creating beautiful, perpetually changing artworks on the fly. We looked back to a recent project of ours, PixelWeaver, which algorithmically generates a one-of-a-kind garment from any internet search term. How could we turn this idea of remixing image search results into a TV-based artwork with killer graphics that could keep you entertained for hours?
We decided to take advantage of the Apple TV’s voice recognition capabilities: the user speaks any word or phrase into the Apple TV remote and we turn the image search results into a kaleidoscopic mandala of awesomeness.
A Miraj Appears
We started with a few Photoshop studies of the graphics, but very soon moved into designing and iterating in code. We wanted to manipulate the images in a way that made them more abstract and less rectangular, yet still recognizable. David figured out how to apply a sweet image segmentation algorithm that fragments the images into interesting shapes.
Now we needed to figure out how to make these shapes move. We explored several models of emitting the image fragments: random emission versus symmetrical, radial emission. We experimented with how the image pieces would fill the screen. Would they be constantly in motion, independent of each other? Would they interact and push each other out of the way using simulated physics?
We played with a few effects, like blurring images as they reached the edges or leaving trails of color behind them. We even got a little sidetracked playing around with 3D depth, spiraling the image pieces into unicorn horns.
During this prototyping phase, we made graphics in both Swift with Apple’s frameworks and in C++ with Cinder.
Bridging Cinder and Swift
Eventually, we found a look we really loved. We balanced the complexity of the photographic fragments with solid colored pieces, and made the larger pieces move more slowly than the small pieces to create a sense of depth.
Once we had achieved the visuals we liked in Cinder, we needed to figure out how to bring it over to Apple TV. The first hurdle was porting Cinder and OpenCV to tvOS. This proved to be straightforward once we tracked down the specific esoteric flags Apple needs to have set.
The next hurdle was communication between our C++ graphics code and the host Swift application. We needed the two to talk because we wanted to take advantage of Apple's user interface frameworks and Siri voice input. Once we understood how bridging headers (together with a thin Objective-C++ wrapper around the C++ code) make communication possible between C++ and Swift, we were in love. It is the cleanest language interoperation we have ever used, since in both languages it just feels like including another file in your project.
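One common bridging pattern (a hypothetical sketch with made-up names, not necessarily our exact setup) is to hide the C++ renderer behind a C-compatible shim, declare those functions in the bridging header, and call them from Swift as plain free functions:

```cpp
// Hypothetical C++/Swift bridging sketch: the C++ class stays private,
// and a small extern "C" shim is exposed to Swift via the bridging
// header. All names here (MirajRenderer, miraj_*) are illustrative.
#include <string>

// --- C++ side (e.g. the Cinder app) ---
class MirajRenderer {
public:
    void setSearchTerm(const std::string& term) { term_ = term; }
    const std::string& searchTerm() const { return term_; }
private:
    std::string term_;
};

// --- C shim, declared in the bridging header for Swift to call ---
extern "C" {
    void* miraj_create()         { return new MirajRenderer(); }
    void  miraj_destroy(void* r) { delete static_cast<MirajRenderer*>(r); }
    void  miraj_set_term(void* r, const char* term) {
        static_cast<MirajRenderer*>(r)->setSearchTerm(term);
    }
}
```

From Swift, after the shim's declarations are listed in the bridging header, usage would look like `let renderer = miraj_create()` followed by `miraj_set_term(renderer, "clowns")`.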
A Smooth Experience
A few techniques under the hood keep the user experience smooth and loading times low. Whenever you create a new Miraj, we process the incoming image data on a background thread — looking for dominant colors and image segments — while the Miraj plays. As new images are processed, they are added to the existing Miraj. This lets you experience compositions as soon as possible, and also results in a continuously growing kit of parts for the current Miraj to draw on. Additionally, when drawing, we leverage Cinder’s low-level OpenGL access to combine all the image cutouts into a single batch, allowing us to render significant image density on a low-power device like the Apple TV.
A Focus-Driven Interface
To save some development time, we designed the user interface to use styled versions of Apple’s tvOS interface elements. The Apple TV uses a different interaction model than touch-screens or computers. It’s a focus-driven interface, which means something is always selected. When you arrive at a menu, you don’t push a button directly; you use a small touchpad on the remote to shift focus from object to object until you get to the right button. Our interface design uses large buttons that grow when in focus to make it very clear which element is selected.
Our eyeball-in-a-mouth icons bring some weirdness and whimsy to the relatively basic interface. They play off the idea of turning your speech into something visual.
Creating an app icon for tvOS is a lot of fun because of Apple’s parallax layer effect. This effect creates a feeling of depth and tactility as the user focuses on your app icon. The guidelines for creating the app icon files are pretty straightforward, and Apple provides developers with a nifty previewer app so that you can test and tune the app icon design.
Since Miraj’s release in late June, we’ve been enjoying the response. We’ve been breaking out the Apple TV for user testing whenever someone visits the studio. Any debugging session unerringly turns into an hour of passing around the remote, trying to figure out the magic words that will create the coolest Miraj yet (new favorites include “clowns,” “cyndi lauper,” and “mr bean”). The development process got us hooked on Miraj and excited about what else we can do with the tvOS platform.