“The goal is not to build a model of the world in a computer — it is to build it in the mind of the user” — Mark Bolas
The neuroscientist David Eagleman has been working to popularize the term umwelt to signify the world as it is experienced by a particular organism. His quest is about to take off, since consumer virtual and augmented reality systems are perfectly positioned to radically expand the human umwelt. By connecting the headsets that are coming to market to constellations of physical and virtual sensors, VR and AR will allow us to directly perceive and understand realms that are too abstract, distributed, fast, or slow to comprehend with the unaided mind. Sensory augmentation will migrate from the neuroscience lab and far-out new media experiments to our daily lives.
As individuals and as a civilization, we can only make good decisions when we have the right information to consider and act upon. This truism lies at the center of the big data revolution. But that revolution is far from complete, and we extract only a minuscule fraction of the available wisdom from the data collected today. The value we do glean is eked out through tortuous processing across many obscuring layers of abstraction. What if instead we could perceive those signals more directly, in a form that respected their original structure, and then act on them with the same ease with which we navigate our natural surroundings?
Our senses are not fixed, either in kind or acuity; we have been building tools to improve them for hundreds — thousands — of years. Now we are entering a period when the possibilities for extending our direct sensory experience into previously inaccessible realms are greater than ever before. Our mind’s core perceptual machinery can make sense of these domains without the high-effort cognition that characterizes our abstract thinking and decision-making.
While this expansion of our collective sensory frontier has been a major driver of human progress for some time, a number of technological developments are converging right now: cheap sensors, big data storage and processing, on-demand cloud computing, high-quality 3D rendering and sound reproduction, micromechanical systems, new materials, and high-quality consumer augmented and virtual reality systems. Add to these the collaboration and viral distribution of the social web, which accelerates all of these developments and brings their results to the widest possible audience.
Together, these technologies provide a new and amazingly powerful toolkit for extending our direct perceptual field into domains that were previously accessible only through obscuring layers of abstraction and under high cognitive load. We simply do not grasp the essential workings of cities, economies, ecosystems, social networks, cell biology, the climate, and many other systems whose healthy function is essential to our happiness and survival. The timing is fortunate: we cannot make wise decisions if we cannot understand these systems; we cannot truly understand them if we cannot directly perceive them; and we cannot achieve this kind of perception without the right tools.
How will we go about this? There are many details to fill in, and few easy answers; I expect to be working on this in various ways for many years to come. But here are some high-level thoughts.
1. This will be an inherently interdisciplinary endeavor, and it will need contributions from designers, data engineers and visualizers, brain scientists, 3D programmers, mechanical engineers, materials scientists, artists, storytellers, and others. All of these bodies of knowledge will need to be brought to bear to provide new kinds of perceptual experiences to as many people as possible.
2. We will move faster and go down fewer blind alleys if we gain a nuanced understanding of the relevant mental processes. At a high level, our perceptual system is a machine for detecting and interpreting spatiotemporal structure, so we need to build systems that are good at producing such structures. Time is every bit as important as space in this effort, since we understand the world by taking actions and making observations in which changes unfold in sequence. Stepping back even further, I believe that the creation of immersive environments provides a new and fundamental motivation for studying the brain, one whose effects on the brain sciences will be profound.
“Perception is not something that happens to us, or in us. It is something we do…Vision is a mode of exploration of the environment drawing on implicit understanding of sensorimotor regularities…Vision is touch-like. Like touch, vision is active. You perceive the scene not all at once, in a flash. You move your eyes around the scene the way you move your hands about the bottle. As in touch, the content of visual experience is not given all at once. We gain content by looking around just as we gain tactile content by moving our hands.”
— Alva Noë, Action in Perception
3. Our perception is embodied and active. The essential property of any sense — natural, augmented, or synthetic — is a coupling between the contents of the signal and the movement or action of the organism. We see by exploring a scene, moving our eyes, heads, and bodies. This is exactly why VR and AR are such powerful platforms in this context: they are built from the ground up to respect and reproduce this coupling. The most successful perceptual extensions are likely to be those that make the best use of this linkage, since our entire evolutionary and developmental histories as organisms create this expectation.
There are many ways to go about this work — through art projects and conceptual pieces, with tech demos, and via academic research. We Have the Technology is a tour of much of the work that has been underway for years. My hope is that the technologies just now becoming widely accessible will enable a true explosion of these efforts, and that we’ll start to see some highly practical applications that move sensory augmentation from the fringe to the merely bleeding edge. Let’s do this!