Always Annexing Pixels

On techniques and infrastructures for cross-device visualization.

Cross-device visualization extends beyond individual devices into distributed device environments. (Photo by Josh Hild on Unsplash.)

Data visualization lives and dies by screen real estate, or, more accurately, the pixels it can leverage. While typical screen resolutions grow only slowly — we’re up to commodity-level 4K displays now — mobile computing is making leaps and bounds in both pixel density and the number of screens we surround ourselves with. Combining these two facts — the hunger for pixels and the proliferation of mobile devices — means that the future of visualization likely involves many separate devices and screens. This kind of cross-device visualization has been the implicit guiding principle for my research for several years now. In this article, I will review my past work on techniques and infrastructures for cross-device visualization, including our most recent Vistrates platform, and discuss my ideas for the future.

Of all the visual channels that we use to convey data in visualization, space itself is perhaps the most important. For example, the French cartographer Jacques Bertin (1918–2010) ranked the spatial position of a visual mark as the most powerful visual channel, and this ranking is backed up by empirical results from countless graphical perception experiments. Given the importance of space, it is thus not surprising that visualizations tend to become more effective as they are assigned more of it. More space means more marks, more detail, and more labels — or even more visualizations packed into the same view! Edward Tufte talks about maximizing the data-ink ratio, and it follows that the more space we have, the more ink, and thus more data, we can display.

But physical space is a finite resource in today’s workplaces. For one thing, typical computer monitors no longer grow in physical size, but only in resolution. Today’s off-the-shelf monitors support up to so-called “4K” or ultra-HD resolution — typically 3840 x 2160 pixels — and even if this number will grow in the future, there is clearly an effect of diminishing returns as resolution increases further. Yes, having individual pixels smaller than the human eye can resolve is helpful (printed paper, for example, has a very high resolution, or dots per inch), but the payoff for data visualization is nevertheless decreasing.

Instead, a more obvious avenue forward is to increase the physical display space available for data visualization by annexing any available pixels in the typical workspace. Even if monitors are no longer expected to grow in physical size — there is a limit to how big the screen on your desk can become before it gets unwieldy — the same effect can be achieved by using several of them. We call this kind of display environment consisting of multiple devices a multi-device environment — simple enough — and we call visualizations that are designed to span multiple devices cross-device visualizations. This topic has been my main research interest for close to a decade.

How can we practically support cross-device visualization? The easiest approach is simply to use dual-monitor setups, which many people already have on their desks. Since such dual monitors are connected to the same desktop computer, this is also a trivial technical solution. In fact, even many information cockpits or display wall environments are built this way: by connecting many monitors (up to perhaps 10) to a single, beefy computer with many high-powered graphics cards to run all of the screens. Standard operating systems such as Windows and MacOS have native support for multiple screens, which makes building applications for this setting simple: it is just one big window split across multiple screens.

However, there is an even more interesting opportunity: increasing display space by taking advantage of spare mobile devices lying around. It is also perhaps a more realistic approach, as the multi-device environments of the future — such as one distributed across an entire room, floor, or even house — are not likely to be powered by a single computer. Now that Moore’s law is no longer on our side, we will simply not be able to build computers strong enough for such a purpose. Instead, the solution lies in many distributed computers, each of them moderately powerful in its own right and capable of running one (or a few) screens, but networked together into the same virtual environment.

Of course, building such distributed multi-device environments requires more advanced technical solutions than merely connecting a bunch of monitors to the same computer. Instead, we must use the network to connect the individual computers together so that they can contribute to the same shared visual space. This requires a network infrastructure for multi-display environments. Since at least 2010, my students and I have been working tirelessly to make this vision a reality. Our first solution, from that year, was Munin, an infrastructure built in Java that enabled many different devices to share the same visualization canvas. We then moved on to web technology, proposing the PolyChrome system in 2014, which enabled multiple web browsers to share the same web page. Finally, last year (in 2018), we proposed Vistrates, a web infrastructure for dynamic, shareable, and malleable cross-device visualization.
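The common thread across these infrastructures is replication: each device keeps its own copy of the shared visualization state, and local changes are broadcast as operations that every other replica applies. The sketch below illustrates that core idea in plain JavaScript; the names (`SharedState`, `applyOp`, and so on) are purely illustrative stand-ins, not the actual Munin, PolyChrome, or Vistrates APIs, and the direct `connect` call stands in for a real network link such as a WebSocket.

```javascript
// Hypothetical sketch of replicated shared state for cross-device
// visualization: every device holds a replica, and each local edit is
// broadcast as an operation that all connected replicas apply.
class SharedState {
  constructor() {
    this.state = {};      // this device's copy of the visualization state
    this.listeners = [];  // connected replicas (stand-in for the network)
  }
  connect(other) {
    // Symmetric link between two replicas, in lieu of a WebSocket.
    this.listeners.push(other);
    other.listeners.push(this);
  }
  set(key, value) {
    // Local edit: apply to our own replica, then broadcast the operation.
    this.applyOp({ key, value });
    this.listeners.forEach((replica) => replica.applyOp({ key, value }));
  }
  applyOp(op) {
    // Apply an operation (local or remote) without rebroadcasting it.
    this.state[op.key] = op.value;
  }
}

// Two "devices" viewing the same visualization:
const laptop = new SharedState();
const tablet = new SharedState();
laptop.connect(tablet);

// Brushing a range on the laptop is reflected on the tablet's replica.
laptop.set("brushExtent", [10, 42]);
console.log(tablet.state.brushExtent);
```

A real infrastructure must additionally handle late joiners, conflicting concurrent edits, and dropped connections, which is precisely where the engineering effort in these systems lies.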

With Vistrates, my vision of visualizations effortlessly spanning multiple devices has been realized, and there are many exciting applications from the user perspective. Some of the applications we have so far built include a visualization dashboard that automatically configures itself to available displays (the Vistribute project), a smartwatch that can help the user to better view data on a display wall (the David and Goliath project), and a collaborative insight capture and presentation tool (the InsideInsights project). Since Vistrates is built on open web technologies, the possibilities for leveraging a multitude of existing tools are endless. I can anticipate future applications involving virtual and augmented reality, advanced server-side computational support, and tools for classroom learning, among many other things.

In other words, with multi-device environments built using open web technologies, the sky — not the screen — is the limit.

Published in Sparks of Innovation: Stories from the HCIL

Research at the Human-Computer Interaction Laboratory at University of Maryland


Written by Niklas Elmqvist

Professor in visualization and human-computer interaction at Aarhus University in Aarhus, Denmark.