Holography, Augmented Reality and The Problem With Iron Man

VividQ Ltd
Published in VividQ Blog · 4 min read · Dec 12, 2018

Mr Stark, I Don’t Feel So Good…

The ability to manufacture extremely small, high-resolution displays is essential to the advent of immersive computing, wherein the digital and physical worlds merge across a variety of applications. If you’ve ever seen Iron Man (or, well, pretty much any sci-fi film made in the last thirty-odd years), you’ll be familiar with one of the staple graphics of the ‘hi-tech’ future: the immersive display.

Picture the scene: a rather concerned Tony Stark takes note of an extensive array of information presented on an augmented reality display inside his helmet. Somewhat more rudimentary versions of this technology are already in use across various military and aerospace applications, like the $400,000 F-35 helmet.

The F-35 helmet, like pretty much all modern augmented reality displays, uses microdisplays to project a two-dimensional image onto an ostensibly transparent surface.

Where’s the depth?

The problem with such otherwise industry-changing technologies is that they fail to create real depth. Everything in Tony’s helmet sits in focus, all the time, on a single flat plane, so he’s limited both in how much information he can display and in where in depth he can place it.

Think of depth in this context like this: if you have a pair of AR glasses on, guiding you along a Google Maps-style interface as you stroll around some unfamiliar place, you’re really just seeing a two-dimensional image close to your eye, like holding a very small phone screen up to your face. With a true depth display, you’d see the map stretch out into the road ahead, so that it appears to actually integrate with the environment and cling to the world as we see it, in three dimensions.

So, if a missile is flying towards Tony, his visor will simply show him a two-dimensional graphic warning him that he’s about to get blasted to smithereens, rather than the display actually mapping to the environment and signaling the missile’s position in depth. Oops.

But because the real world is, well, actually three-dimensional, current AR applications are forced to use a variety of tricks, chiefly stereoscopic rendering, to simulate the illusion of depth. These tricks fool only the psychological perception of depth: your eyes converge on a virtual object as if it were distant, but they stay focused on a fixed plane just in front of them, so they never interact with the display as if its graphics represented real objects.

As if flying around in a metal suit fighting aliens all day isn’t stomach-wrenching enough, these depth illusions cause a mind/body conflict, which can lead to the colloquially named ‘simulator sickness’. So, sorry Tony, but it looks like nausea, motion sickness, and fatigue might spoil your flight. Bye bye, New York.
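To put a rough number on that mind/body conflict: the eyes converge at the stereoscopic ‘virtual’ distance of an object but keep focusing at the display’s fixed focal plane, and the mismatch between the two is conventionally measured in diopters (inverse metres). Below is a back-of-the-envelope Python sketch; the 2 m focal plane and the 0.5-diopter comfort threshold are illustrative assumptions, not figures for any particular headset.

```python
# Back-of-the-envelope sketch of the vergence-accommodation conflict
# behind simulator sickness. The focal plane distance and comfort
# threshold below are illustrative assumptions, not measured values.

def conflict_diopters(virtual_distance_m: float, focal_plane_m: float) -> float:
    """Mismatch between where the eyes converge (the stereoscopic
    virtual distance) and where they focus (the display's fixed
    focal plane), in diopters (1/metres)."""
    return abs(1.0 / virtual_distance_m - 1.0 / focal_plane_m)

FOCAL_PLANE_M = 2.0    # assumed fixed focal plane of the headset optics
COMFORT_LIMIT_D = 0.5  # assumed comfort threshold in diopters

for d in (0.3, 0.5, 1.0, 2.0, 10.0):
    c = conflict_diopters(d, FOCAL_PLANE_M)
    verdict = "uncomfortable" if c > COMFORT_LIMIT_D else "ok"
    print(f"object at {d:4.1f} m -> conflict {c:.2f} D ({verdict})")
```

Objects rendered near the fixed focal plane stay comfortable; anything meant to appear close to your face pushes the conflict well past the threshold, and close to your face is exactly where AR interfaces want to put content.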

Now, thankfully, it’s unlikely that you’ll ever be in such a precarious situation as our pal Stark. However, with increasingly immersive VR experiences using near-eye microdisplays to generate virtual environments, you’ll almost certainly be able to experience that same exhilaration without any of the, you know, potential death. But even without that admitted downside, you’ll still be navigating a two-dimensional environment that tricks your mind into a weak, illusory sense of depth, with the same simulation-sickness repercussions prone to spoil an otherwise enjoyable evening.

Softwhere?

So then, here’s the inevitable point: ever smaller, higher-resolution, higher-frame-rate microdisplays provide the hardware bedrock for the future of computing. Most likely, that future consists of immersive applications across both industrial and commercial use cases, like Iron Man’s helmet-mounted display or VR gaming. Essentially, the future of computing consists of the virtual blending with the real, transforming and reimagining the ways in which we interact with information and technology.

What’s missing is software that removes the limitations of current immersive computing, like simulation sickness, and generates real depth with variable focus, unlocking the potential of microdisplays not by tricking your mind into perceiving depth but by actually stimulating the physiological depth cues we rely on when interacting with the real world. We have a word for these life-like simulations, and you’ve probably heard it before: holograms.

Yep, holograms. Holodeck, Minority Report, Star Wars Jedi Council ‘we do not grant you the rank of master’ holograms. Ironically, the main barrier between you and Tony Stark is appropriate holography-enabling software, not the advent of a fully functioning, pew-pew laser-blasting death trap of an Iron Man suit. Generating real-depth holograms, in order to (among other things) do away with simulation sickness and usher in the age of truly immersive computing, requires a combination of increasingly proficient microdisplays and a software suite capable of generating dynamic holograms in real time on off-the-shelf hardware.
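For a flavour of what ‘generating holograms’ means computationally, here is a minimal, textbook-style Python sketch of the Gerchberg-Saxton algorithm, which iterates between the display plane and the image plane to find a phase-only pattern whose Fourier transform reproduces a target image. This is a classic classroom illustration of the principle, not VividQ’s method; real-time, full-depth holography involves far more (depth layers, colour, aberration correction, and heavy optimisation).

```python
# Textbook Gerchberg-Saxton phase retrieval: find a phase-only
# hologram whose far-field (Fourier) intensity matches a target.
# Illustrative only; not VividQ's algorithm.
import numpy as np

def gerchberg_saxton(target: np.ndarray, iterations: int = 50) -> np.ndarray:
    """Return a phase-only hologram (radians) for a 2D target intensity."""
    amplitude = np.sqrt(target)                        # desired image-plane amplitude
    phase = np.random.uniform(0.0, 2.0 * np.pi, target.shape)
    slm_phase = np.zeros_like(phase)
    for _ in range(iterations):
        image_field = amplitude * np.exp(1j * phase)   # impose target amplitude
        slm_field = np.fft.ifft2(image_field)          # propagate to display plane
        slm_phase = np.angle(slm_field)                # keep phase only (phase SLM)
        image_field = np.fft.fft2(np.exp(1j * slm_phase))  # propagate back
        phase = np.angle(image_field)                  # update image-plane phase
    return slm_phase

# Toy target: a bright square on a dark background.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0

hologram = gerchberg_saxton(target)
replay = np.abs(np.fft.fft2(np.exp(1j * hologram))) ** 2
print("replay peak-to-mean ratio:", replay.max() / replay.mean())
```

Even this toy version makes the computational challenge obvious: every frame needs full-resolution FFTs over the whole display, which is why doing it dynamically, at high quality, on off-the-shelf processors is the hard software problem.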

In other words, science fiction is a lot less fictional than you might think, and the holographic future is just around the corner.

VividQ offers the world’s first software framework for holographic display with natural depth perception. Our advanced technology enables commercial applications of diffractive holography, with 3D holograms generated on standard processors.

The company’s framework is quickly becoming the software of choice for next-generation Head-Mounted Displays, Head-Up Displays, and point-cloud compression. We work with world-leading hardware and embedded systems manufacturers, and with 3D content generators.

VividQ is an angel-backed start-up company based in London and Cambridge, UK. Please visit www.vivid-q.com or email info@vivid-q.com for more information.
