Introducing Embedding.js, a Library for Data-Driven Environments

Beau Cronin
7 min read · Jan 12, 2017

Data and its visual presentation have become central to our understanding of the world, and yet so many visualizations prioritize bling over communication. The fear, and it is justified, is that VR will merely exacerbate the problem, unleashing new and nauseating ways to deliver empty visual calories rather than a meaningful increase in articulative power.


But the truth is, we haven’t come close to exploring the full potential of immersive data-driven environments. In large part, that’s because the hardware systems and software libraries to do so have been very hard to access; only a small handful of researchers have had a chance to work in this space. This lack of access has, in turn, perpetuated our collective ignorance of the true power of embodied, spatialized data environments to generate understanding and insight.

How do we unlock this potential? Beyond the mere visualization of data, we need to place information in spatiotemporal relation and context. This will allow our innate perceptual capabilities — for we are all geniuses at understanding and navigating our surroundings — to operate in ways simply unattainable in traditional, windowed settings.

The key differences with data spatialization are twofold:

  1. We experience a tight sensorimotor loop: our movements, large and small, lead to immediate, corresponding changes in our sensory input — matching the subconscious expectations learned in natural environments.
  2. We are not merely seeing and exploring these environments, but reaching out and manipulating them; we are not ghosts in the data machine, but actors participating in our surroundings.

Embedding.js is an attempt to open the creation of data-driven environments possessing these qualities to the broadest possible audience of developers. Some of its most relevant features:

  • It is pure, plain old JavaScript, built on top of three.js, which in turn provides a powerful abstraction layer over the WebGL API that is implemented in every major browser. Any web developer can get started in no time; 3D programming experience is not necessary.
  • It is responsive, so that any embedding environment will Just Work without modification on the desktop, with touch, and in WebVR.
  • Its compatibility and reach will track that of WebVR as a whole, which means that it will likely work on most major browsers by the end of 2017 (see here to track progress and get more info). WebVR (and therefore Embedding in VR mode) is already supported on Windows with Chrome and Firefox via special builds of those browsers.

Spatiotemporal Embeddings

The key concept and abstraction in the library is — wait for it — the embedding of a dataset in time and space. These embeddings can be static, or they can represent various temporal aspects of the data generation, gathering, and transformation processes. The most immediate example is a scatter plot embedded in three dimensions, in which each point corresponds to an observation in the dataset, and position and other properties are derived from attributes/columns/features of the data.
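The scatter-plot example can be sketched in plain, library-agnostic JavaScript. Everything below (the dataset, the column names, and the `linearScale` helper) is hypothetical, and the actual Embedding.js API differs; the point is just the core idea of deriving scene-space position from data attributes.

```javascript
// Linear scale: maps [dmin, dmax] in data space to [rmin, rmax] in scene space.
function linearScale(dmin, dmax, rmin, rmax) {
  return v => rmin + ((v - dmin) / (dmax - dmin)) * (rmax - rmin);
}

// A toy dataset: each object is one observation.
const cars = [
  { mpg: 21, hp: 110, weight: 2620 },
  { mpg: 33, hp: 65, weight: 1835 },
  { mpg: 15, hp: 245, weight: 3840 }
];

// Choose which columns drive which spatial dimensions.
const sx = linearScale(15, 33, -1, 1);     // mpg    -> x
const sy = linearScale(65, 245, 0, 2);     // hp     -> y
const sz = linearScale(1835, 3840, -1, 1); // weight -> z

// Each observation becomes a point; other properties (size, color)
// could be derived from further columns in the same way.
const points = cars.map(c => ({
  position: [sx(c.mpg), sy(c.hp), sz(c.weight)]
}));

console.log(points[1].position); // scene-space coordinates for one observation
```

From here, a renderer such as three.js would turn each `position` into a mesh or sprite in the scene.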

Other embeddings might similarly draw from existing visualization techniques, including force-directed layouts for graph and network datasets. But new embedding types will emerge and find adoption that are native to immersive environments, whether mimicking real physical objects and settings, or leveraging specific cues in a more deliberate fashion.

Discovering exactly which embedding types prove the most valuable and compelling is the first major goal of this effort.

Why Immersion Matters

What is different about immersive environments, and how can those differences be used to transcend the limits of traditional visualizations in understanding data?

For one thing, it’s not just about 3D — we’ve used various depth cues in windowed visualization settings for some time, and in some cases these techniques have been put to good use. But something altogether different happens when we inhabit an environment, and in particular when our sensory inputs change immediately and predictably in response to our movements. Real-world perception is not static, but active and embodied; the core hypothesis behind Embedding is that data-driven environments can deliver greater understanding to the degree that they leverage the mechanisms of exploration and perception that we use, effortlessly, in going about our daily lives.

That hypothesis suggests a very large space of possible designs, however, and one of the main goals of Embedding.js is to provide an accessible, productive platform in which to experiment. What follows are some of the elements that I suspect will be important, but I’m sure that some of these will fail to prove out, while others not listed here will turn out to matter greatly.

Conceptual metaphor

We humans have great capacity for abstract thought, yet we do so with brains that are, in terms of their physiology and functional organization, very similar to those of other mammals that do not seem to possess these abilities. This is because the human brain leverages neural structures and circuitry that originally evolved to handle spatial and social cognition for other, more flexible aims. Conceptual metaphor is the body of theory that describes how we use isomorphic mappings from concrete to abstract domains to reason about everything from long-term relationships to data structures. These mechanisms are not exceptions or special cases, but are ubiquitous in our thinking. The question for immersive data visualization is therefore not whether to use conceptual metaphors, but which ones to use and how to execute them.

Multisensory cues

Humans are visual creatures, but much of the richness of our everyday experience stems from the fact that our eyes, ears, skin, and other sensory organs provide different kinds of information about the environment — and that, in normal situations, those signals are consonant with and reinforce one another. It is very likely that appropriate, spatialized sound design and haptic feedback can be leveraged to significantly increase the level of understanding provided by a data-driven environment, just as VR developers have discovered to be true in other application domains.

Motion parallax

Motion parallax: a little goes a long way.

There are many cues that our sensory system uses to understand the structure of our environment, including binocular perspective, relative size, texture, occlusion, specular highlights, haze, and pre-existing knowledge of the size of various objects. But motion parallax — the relative translation of various elements of the environment across the retina in response to head movement — is a particularly powerful source of information. The simple act of moving your head, even unconsciously, can snap a scene from flatness or incomprehension to vivid structural understanding.
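The strength of this cue is easy to estimate with a small-angle calculation. This is a back-of-envelope sketch, not library code; the 5 cm head sway and the two object depths are assumptions chosen for illustration.

```javascript
// Apparent angular shift of a point when the head translates sideways:
// for a lateral head movement of b metres, a point at depth d metres
// shifts by roughly atan(b / d) radians across the retina.
function parallaxShiftDeg(headShiftM, depthM) {
  return (Math.atan(headShiftM / depthM) * 180) / Math.PI;
}

const headShift = 0.05; // a 5 cm unconscious head sway
const near = parallaxShiftDeg(headShift, 0.5); // object 0.5 m away
const far = parallaxShiftDeg(headShift, 10);   // object 10 m away

// The near object sweeps several degrees across the visual field while
// the far one barely moves; that differential is the depth signal.
console.log(near.toFixed(2), far.toFixed(2));
```

Near points slide roughly twenty times further than points twenty times deeper, which is why even a small unconscious head movement can make spatial structure snap into place.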

Landmarks and navigational cues

I really feel like I’ve been to Castle Hohenrechberg.

Those who have spent some time exploring environments in VR often remark that their memories of these experiences have much more in common with memories of places they have actually visited than of places they have read about or seen on TV. They remember as if they had been there, because properly constructed immersive environments invoke the spatial and navigational structures in our brain. With repeated visits to the same location, we can become intimately familiar with environments and notice subtle changes that occur over time. One possibility is to bring memory palace methodology to its full potential, by creating persistent environments whose contents bear meaning. As we’ve already seen with conceptual metaphor, spatial understanding and familiarity are the primary means by which we draw connections between and synthesize ideas from different domains. The rooting of disparate concepts in a shared frame of reference is a foundation of creative thought.


We know how this ends

From birth, our sensorimotor systems have learned and developed in the presence of consistent physical laws. All of our sensory inputs are generated by environments that are beholden to those laws, including those that govern rigid- and soft-body interactions, fluid flows, gravity, the optical properties of materials, and so on. We are very comfortable in the presence of these forces, and their sudden absence or violation can cause reactions varying from disbelief to discomfort and anxiety. The ability to include these familiar physical interactions in data-driven environments represents a new design and artistic frontier; like any new technical palette, it will take some time for us to discover and then master the techniques it offers. One early guess: it probably won’t be necessary to reproduce all aspects of real environments in “photorealistic” detail; rather, a judiciously chosen subset or adaptation of physical law can enable data-driven environments with greatly enhanced appeal and perceptual benefit.
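As a toy illustration of that judicious subset (my own sketch, not Embedding.js code): even a minimal integrator, gravity plus a damped floor bounce, gives data objects enough familiar behavior to feel grounded, without anything close to full physical simulation.

```javascript
const g = -9.8;      // gravity, m/s^2
const damping = 0.6; // fraction of speed retained on each floor bounce

// One simulation step: semi-implicit Euler integration plus a
// reflect-and-damp floor collision at y = 0.
function step(body, dt) {
  body.vy += g * dt;      // gravity accelerates the body
  body.y += body.vy * dt; // integrate position
  if (body.y < 0) {       // floor collision: clamp, reflect, damp
    body.y = 0;
    body.vy = -body.vy * damping;
  }
  return body;
}

// Drop a data object from 2 m and let it settle.
let body = { y: 2, vy: 0 };
for (let i = 0; i < 1000; i++) {
  step(body, 1 / 90); // 90 Hz, a typical VR frame rate
}
console.log(body.y.toFixed(3)); // near 0: the object has come to rest
```

Two rules of physics, a dozen lines of code, and a falling point already reads as an object rather than a floating glyph; that is the kind of selective realism the paragraph above is gesturing at.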

Now and Next

Embedding.js is ready to play with, though much remains to be done. I’d like to invite the adventurous to jump in and create new environments right now. And I’d be positively thrilled if some of you were moved to help out and contribute — there’s a rough roadmap, but it’s just a sketch.

In my boldest moments, I dream that Embedding will do for data-driven environments what D3 has accomplished for data-driven documents — that it will provide (the technical underpinnings for) a new lingua franca through which certain ideas are shared and arguments engaged in as never before. But it’s also entirely possible that its fate is simply to point toward a better way of doing this — and that would be great as well. Either way, the important thing is that we start learning how to make environments that communicate crucial lessons from data. It is my belief that our future as a species, and as a civilization, depends in large measure on our ability to do just that.

Many thanks to Alex Bowles for thoughtful comments and suggestions.