Mixed Reality is our Digital Future

Louis Rosenberg, PhD
Published in Predict · 7 min read · Jan 23, 2023
Mixed Reality is the Future of Computing: a dinosaur shown in mixed reality, viewed by kids. (Rosenberg / Midjourney)

For many people, the word “metaverse” has been tainted with negative connotations, conjuring images of cartoonish worlds filled with creepy avatars or opportunistic platforms selling “virtual real estate.” For others, the word inspires a mild shrug followed by questions like — “Why would I want to spend time doing that?”

Personally, I believe it’s way too early to write off the metaverse, but we have work to do. The industry needs to deploy experiences that are more realistic and more artistic, and that unleash genuine creativity and productivity. We also need to correct the biggest misconception about the metaverse: the flawed notion that virtual worlds will replace our physical lives.

This is not how the metaverse will unfold.

Don’t get me wrong, there will be popular virtual worlds that are fully simulated, but these will be temporary “escapes” we sink into for a few hours at a time, similar to how we watch movies or play video games today. But the real metaverse, the one that deeply impacts our daily life, will not remove us from our physical surroundings.

Instead, the metaverse will be a mixed reality in which immersive virtual content is seamlessly combined with the physical world, expanding and embellishing our lives. But don’t take my word for it — the industry is rushing in this direction. Meta and HTC both unveiled impressive new headsets in recent months that enable high-quality mixed reality. And it’s widely reported that Apple will unveil a mixed reality product in June.

Why Mixed Reality?

Simple — we humans don’t like being cut off from our surroundings. Sure, you can give someone a demo in a fully simulated world, and they’ll love it. Ask that same person to spend an hour in full VR and they may start to feel uneasy. Approach two hours, and for many people (myself included) it’s too much. Some people think mixed reality helps because it satisfies a desire to glance at things in your physical space — your desk, your phone, your dog — but the reason is actually far deeper than that.

It has to do with how our brains work — we build mental models of our surroundings that involve not just what we see around us, but what we hear and feel and sense in our world, combined with memories of what we know is there. It gives us situational awareness so we don’t have to keep looking over our shoulder or scanning the room. We can focus on what’s in front of us but still feel present within a single coherent space.

But when you are in a fully virtual world, sitting at a desk or standing in a room, there will almost always be a mismatch between the room you know is around you, with its messy desk and the dog barking down the hall, and the virtual world you see in front of you. As a result, you feel uneasy.

This phenomenon first struck me back in 1991 when I was a VR researcher at Stanford and NASA, studying how to optimize depth perception in early vision systems. Back then, the technology was crude and uncomfortable, with clunky graphics and lag so bad it could make you feel sick. Because of this, many researchers believed we just needed better hardware, and people wouldn’t feel uncomfortable in virtual worlds. I didn’t quite agree.

Certainly, I knew that better hardware would help, but I was pretty sure that something else was going on, at least for me personally — a tension in my brain between the virtual world I could see and the real world I could sense (and feel) around me. I believed it was this perceptual conflict between two opposing mental models that made me feel uneasy and made the virtual world seem less authentic than it should.

To combat this situational mismatch, I wanted to take the power of VR and combine it with my real surroundings, creating a unified experience in which my visual, spatial and physical senses were perfectly aligned. My hope was that the mental tension would go away if I could interact with the real and virtual as if they inhabited the same perceptual reality.

By a stroke of good luck, I was funded by the U.S. Air Force to develop a prototype mixed reality system at Wright-Patterson Air Force Base in 1992. It was called the Virtual Fixtures platform, and it didn’t just support sight and sound, but touch and feel (with 3D haptics), adding virtual objects to the physical world that felt so real they helped users perform manual tasks with greater speed and dexterity. The hope was that one day this could support a wide range of activities, from assisting surgeons during delicate procedures to helping technicians repair satellites via telerobotic control.

Of course, that first MR prototype didn’t support surgery or satellite repair. It was built simply to test whether virtual objects added to real-world tasks could enhance human performance. To measure this, I used a simple task of moving metal pegs between holes on a large pegboard. I wrote software to create a variety of virtual fixtures that could help users perform the task. The fixtures ranged from virtual surfaces and cones to simulated tracks you could slide the peg along (rendered with 3D haptics), while early passthrough cameras aligned the real and virtual activity in 3D. I even used early 3D audio developed at the U.S. Air Force (AAMRL) to ensure sounds were also spatially aligned. And it worked, enabling greater speed and precision.
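
To give a rough sense of how a guidance fixture works, here is a minimal Python sketch of the general idea: the user’s hand position is projected onto a virtual “track,” and a spring-like force pulls the hand toward the path. The function name, vector math, and stiffness value are illustrative assumptions for this article, not code from the original 1992 system.

```python
import numpy as np

def track_fixture_force(hand_pos, track_start, track_end, stiffness=400.0):
    """Guidance force pulling the hand toward a straight 'track' fixture.

    A minimal sketch: the hand position is projected onto the segment
    between track_start and track_end, and a spring-like force pulls the
    hand toward that nearest point. The stiffness (N/m) is illustrative,
    not a value from the original Virtual Fixtures system.
    """
    hand = np.asarray(hand_pos, dtype=float)
    a = np.asarray(track_start, dtype=float)
    b = np.asarray(track_end, dtype=float)
    ab = b - a
    # Closest point on the segment, as a parameter t clamped to [0, 1].
    t = np.clip(np.dot(hand - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    # Spring force toward the track: the farther off-path, the stronger the pull.
    return stiffness * (closest - hand)

# Example: a hand 2 cm off a track along the x-axis is pulled back toward it.
force = track_fixture_force([0.5, 0.02, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
print(force)  # [ 0. -8.  0.]
```

The stiffer the virtual spring, the more the track feels like a physical groove guiding the peg, which is the intuition behind why such fixtures can improve speed and precision.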

Virtual Fixtures project (1991–1994) at the Air Force Research Laboratory — the first mixed reality (MR) experiments in an augmented real/virtual 3D space. (USAF: user and task board, side by side, 1993.)

I give this background because of the impact it had on me. I can still remember the first time I moved a real peg towards a real hole and a virtual surface automatically turned on. Although simulated, it felt genuine, allowing me to trace along its contour. At that moment, the real world and the virtual world became one reality, a unified mixed reality in which the physical and digital became a single perceptual experience that satisfied all of my spatial senses — visual, auditory, proprioceptive, kinesthetic, and haptic. Once that was achieved, I stopped thinking about which part was physical and which was simulated — it was just reality.

That was the first time I experienced a true mixed reality. It may have been the first time anyone had. I say that because once you interact with the real and virtual combined into a single immersive experience, all your senses spatially aligned, the two worlds snap together in your mind. It’s almost like one of those visual illusions where there’s a hidden face you can’t see, and then something clicks, and it appears. That’s how a true mixed reality experience should be: a seamless merger of the real and the virtual that’s so natural and authentic that you immediately realize our digital future will not be real or virtual, it will be both — one world, one reality.

The Technology of Mixed Reality, 30 years apart (1992 to 2022)

As I look ahead, I’m very impressed by how far the industry has come, particularly in the last few years. The image above (on the left) shows me in 1992 at Wright-Patterson Air Force Base developing mixed reality (MR/AR). The image on the right shows me in 2022, wearing a Meta Quest Pro headset with color mixed reality capabilities. Over the 30-year span during which my hair went gray, the technology has improved by staggering amounts — in performance, efficiency, and size.

What’s not apparent in the picture above are the numerous full-sized computers that were required to run my USAF experiments thirty years ago, or the cameras mounted on the ceiling, or the huge wire harness draped behind me with cables routed to various machines. That’s what makes this new wave of modern headsets so impressive. Everything is self-contained — the computer, the cameras, the display, the trackers. And it’s all comfortable, lightweight, and battery-powered. It’s remarkable.

And it’s just getting started. The invention of mixed reality is an ongoing process, with amazing new products poised to take off. And it’s not just the impressive new headsets from Meta, HTC, and (potentially) Apple that will propel this vision forward, but lightweight eyewear and creative software tools from companies like Magic Leap, Snap, Microsoft, Google, Lenovo, Unreal, Unity, and many other major players.

At the same time, countless developers are pushing the limits of creativity and artistry, unlocking what’s possible when you mix the real and virtual, from new types of board games (Tilt Five) and powerful medical uses (Mediview XR), to remarkable outdoor experiences from Niantic Labs.

This is why I am confident that the metaverse, the true metaverse, will be an amalgamation of the real and the virtual, so seamlessly combined that users will cease to think about which elements are physical and which are digital. We will simply go about our daily lives and engage with a single reality that is magically embellished. It’s been a long time in the making, but 2023 will be the year our mixed reality future really starts to take shape.

Louis Rosenberg, PhD is a pioneer of virtual and augmented reality. His work began over 30 years ago in labs at Stanford and NASA. He then developed the Virtual Fixtures system at Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corp. In 2004 he founded the early AR company Outland Research. He earned his PhD from Stanford, was a tenured professor at California State University, and is currently the CEO of Unanimous AI, the Chief Scientist of the Responsible Metaverse Alliance, and Global Technology advisor to XRSI. He has been awarded over 300 patents for his work in VR, AR, and AI.

(This article first appeared in VentureBeat.)
