S02E07: …All the Realities!

Dooley Murphy
Discover Virtual Reality Design
4 min read · Feb 16, 2020
An Internet meme. Mann et al.’s (2018) taxonomies even address sociopolitical phenomena, from memes to space law. (See Figure 15 in their paper, linked in the article below.)

Dooley here, penning the companion post for our latest podcast episode.

This time, we touch on questions of perception and sensation vis-à-vis augmented, mixed, mediated, “multimediated”, and presumably also blended realities.

How do we cover so much ground? By structuring discussion around an ambitious taxonomic paper (open-access PDF link) by the “father of wearable computing”, Steve Mann, the “grandfather of virtual reality”, Tom Furness, and their co-authors Yu Yuan, Jay Iorio, and Zixin Wang.

We relate some of their ideas (particularly about real-world sensation or perception as constitutive of blended reality experiences) to our recent XR encounters, with Aki describing his demo of HaptX haptic gloves, one of Darkfield’s immersive physical installations, and Marshmallow Laser Feast’s multisensory VR experience, Ocean of Air.

Tune in now to hear the full conversation, or continue reading for a brief and slightly facetious summary of the bold attempt by Mann, Furness, Yuan, Iorio, and Wang (2018) to unify ALL the realities under a single, supposedly consistent framework.

In a paper titled “All Reality: Virtual, Augmented, Mixed (X), Mediated (X,Y), and Multimediated Reality”, Mann et al. advance a unifying multidimensional framework for (presumably) classifying and theorizing all forms of mediated virtual experience. If their proposal could be shown to hold theoretical weight, it would have massive implications not only for how we understand virtuality in relation to physical and phenomenal reality, but also for the philosophy of technology and representation.

Their first move is to reiterate Milgram and Kishino’s 1994 reality–virtuality continuum, here renamed the mixed reality continuum. Most readers and listeners will probably have encountered this before. As a simple unidimensional continuum, it posits that at one end of the spectrum lies real, unmediated reality, while at the other end lies “fully” immersive virtual reality, in which (theoretically) none of the real world is perceptible. Blending between the two extremes gives us Augmented Reality, Augmented Virtuality, and so on. The latter, presumably, can be exemplified by something like a VR experience featuring real furniture, tracked and mapped into the virtual scene.

The Reality–Virtuality Continuum, here relabeled the MR continuum. Wikimedia Commons.

The authors’ next major move is to prompt us to “consider either of the following:

  • devices and systems designed to intentionally modify reality;
  • the unintended modification of reality that occurs whenever we place any technology between us and our surroundings (e.g. video-see-through Augmented Reality).”

The key move here, I think (the one that lets the authors evade philosophical scrutiny), is to emphasize a device’s intended use, or teleology, rather than its existence, or ontology. They contend that in order to better understand, say, an AR welding helmet that both dims the perceived brightness of the wearer’s visual field and overlays information, the reality–virtuality continuum needs a Y-axis describing the degree of mediation, here termed mediality.

From Mann et al., 2018.

Presumably the authors know that virtual things are necessarily mediated, and therefore that the distinction between “mediality” and virtuality is an artificial one. As mentioned, though, the point of the framework is to address the intended uses of a given technology, not its fundamental essence.
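To make the two axes a little more concrete, here’s a minimal sketch of my own (not from the paper) that treats the framework as a simple coordinate space: X for Milgram-style virtuality, Y for Mann et al.’s mediality. The example devices and their coordinates are illustrative guesses, not values the authors give.

```python
from dataclasses import dataclass

@dataclass
class BlendedRealityDevice:
    """A device placed on two axes, loosely following Mann et al. (2018).

    virtuality: 0.0 = unmediated reality ... 1.0 = fully virtual (the Milgram X-axis)
    mediality:  0.0 = percept untouched   ... 1.0 = heavily mediated (the added Y-axis)
    The numeric scales are my own simplification for illustration.
    """
    name: str
    virtuality: float
    mediality: float

    def region(self) -> str:
        """Very rough labels for where a device sits in the 2D space."""
        if self.virtuality > 0.8:
            return "virtual reality"
        if self.mediality > 0.5:
            return "mediated / modified reality"
        if self.virtuality > 0.2:
            return "augmented reality"
        return "(mostly) unmediated reality"


# Hypothetical placements -- the coordinates are guesses, for illustration only.
devices = [
    BlendedRealityDevice("plain sunglasses", virtuality=0.0, mediality=0.2),
    BlendedRealityDevice("AR welding helmet (dims + overlays)", virtuality=0.3, mediality=0.7),
    BlendedRealityDevice("optical see-through AR glasses", virtuality=0.3, mediality=0.3),
    BlendedRealityDevice("fully immersive VR headset", virtuality=1.0, mediality=1.0),
]

for d in devices:
    print(f"{d.name}: ({d.virtuality}, {d.mediality}) -> {d.region()}")
```

The point of the sketch is just that the welding helmet and the see-through glasses can share an X-coordinate yet differ on Y, which is exactly the distinction the second axis is meant to capture.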

After this point, however, the paper’s logic and argumentation become slightly unrestrained. With extensive reference to Mann’s many inventions (primarily novel visualization technologies), the taxonomic framework begins to devour anything and everything digital (and biological, and social), seemingly also accommodating a third, less well elaborated dimension comprising sociopolitical factors, from family units to space law.

From Mann et al., 2018.

Ultimately, I’m not sure I can do the paper justice; it’s definitely worth a skim! If there’s value to be tapped amid its odd claims of having identified “new phenomenal realities” (e.g. the ability to photograph radio waves, an invention of Mann’s that is not, in fact, teleologically or ontologically dissimilar from pre-digital and even pre-optical technologies such as the magnetic compass), it lies in drawing our attention to the role of sensory organs and perceptual systems in the whole process of experiencing virtual phenomena, be they graphical overlays or immersive environments.

Section 3.1 addresses “Multisensory Synthetic Synesthesia”, and whichever direction the authors take that discussion, we tie these ideas into the podcast by talking about how virtual and/or mediated experiences are almost always already the product of multisensory or cross-modal perception.

Consider the illusion of vection. VR experiences that place you in a vehicle (or that move you ‘on rails’) evoke kinaesthetic sensations (i.e., your stomach gives you the visceral feeling that you’re moving) even though they may appeal only to the eyes. Similarly, a VR installation in which the gallery floor has been covered in spongy wood chips adds a tactile dimension by augmenting what your feet ‘report’, as well as an olfactory one: wood chips smell like forest.

This takeaway, though neither as elaborate nor as novel as Mann et al.’s framework, is the one I believe designers can best put to work.

It will be a long time before consumer VR systems can stimulate “all” our senses simultaneously, so we’d better learn to innovate with perceptual illusions, sensory substitution, and multimodal or cross-modal synesthetic experience!
