What Is the Real-World Metaverse (AR) Missing?

The most overlooked, missing technology that Augmented Reality (AR) needs in order to achieve its long-anticipated promise

Introduction

“Metaverse” is one of the hottest topics these days, spearheaded by Meta Platforms Inc. (formerly Facebook) riding on the success of its Oculus virtual reality (VR) headset. While the Metaverse is intimately related to VR, there is also a Real-World version of the Metaverse that has been generating quite a bit of excitement recently. In this blog post, we will discuss the importance of the Real-World Metaverse, a critical technology it needs that has been largely overlooked, and why automotive is the optimal first platform to set the stage for the Real-World Metaverse.

What Is the Real-World Metaverse (AR)?

Before we get into the “Real-World” Metaverse, let’s do a quick review of the Metaverse itself. The Metaverse is envisioned as a comprehensive VR world where users can do just about anything virtually: live, interact, create, entertain, work, and more, much like the virtual entertainment universe OASIS from the movie “Ready Player One.”

A virtual universe like the Metaverse promises unlimited possibilities and creations, bound only by our imaginations, and the concept has generally been very well received. However, the Metaverse, being a “virtual” universe by definition, is completely separate from the real world we physically live in. That beautiful convertible sports car you bought in the Metaverse for 100K USD only works while you are in the Metaverse; you can’t take it out for a spin along a real coastline by the beach.

The OASIS from “Ready Player One.” Image source: Geek Girl Authority.

Perhaps you would decide to spend most of your waking hours in the Metaverse, building and enjoying a life of virtual abundance there, but it remains to be seen just how much of our real-world experience the Metaverse can replicate.

On the other hand, what if we combined the Metaverse with the physical, real world we live in, so that we could mix and augment our physical world with virtual entities and information? Augmented Reality (AR) is exactly that: adding virtual information onto our physical surroundings. This larger-scale, more general version of AR has recently come to be called the Real-World Metaverse.

The Issues with AR Today

AR started to gain major attention a few years ago, helped by the launch of Apple’s ARKit in June 2017, which let developers build AR applications for iPhone users. ARKit was a pioneering AR SDK and really helped the world get a glimpse of what AR could make possible. However, it also revealed many limitations and challenges of current AR technology and use cases: most AR applications turned out to be gimmicks, like placing a dancing Jedi on your desk, which is fun for a few hours before you never open the app again. With most current AR apps being nice-to-haves rather than must-haves, how are we going to expand such small-scale AR into world-scale AR, the Real-World Metaverse?

An AR boxer on your desk. Pretty neat, but then what? Image source: Starmark.com.

Looking at why AR gimmicks haven’t grown into applications that address real-world pain points, the main culprit is the AR SDKs and development tools themselves: they lack important technologies that would allow developers to build more powerful and meaningful applications.

Most AR software today focuses on surface detection (for placing that dancing Jedi), visual-inertial odometry (to estimate how the user is moving through the environment), and depth estimation (to determine when to occlude parts of the AR elements). While these are essential foundations for making AR work at all, what is critically lacking is a sense of the surrounding environment: an AI that can provide perception capabilities!
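To make the occlusion piece concrete, here is a minimal sketch, in plain NumPy with entirely hypothetical values, of how a per-pixel scene depth map decides which parts of a rendered AR element are hidden behind real geometry:

```python
import numpy as np

# Hypothetical inputs: a live camera frame, the scene depth per pixel (as an
# AR SDK's depth estimation would provide), and a rendered AR element with
# its own per-pixel depth. All values here are illustrative placeholders.
H, W = 480, 640
camera_frame = np.zeros((H, W, 3), dtype=np.uint8)    # live camera image
scene_depth = np.full((H, W), 3.0, dtype=np.float32)  # meters to real surfaces
ar_depth = np.full((H, W), 2.0, dtype=np.float32)     # meters to AR element
ar_rgba = np.zeros((H, W, 4), dtype=np.float32)       # rendered AR element
ar_rgba[100:200, 200:300] = (0.0, 0.5, 1.0, 1.0)      # a flat blue AR panel

# The AR element is visible only where it is closer than the real scene
# and its alpha is non-zero; everywhere else it is occluded.
visible = (ar_depth < scene_depth) & (ar_rgba[..., 3] > 0)

# Alpha-composite the visible AR pixels over the camera frame.
alpha = np.where(visible, ar_rgba[..., 3], 0.0)[..., None]
composited = (1 - alpha) * camera_frame + alpha * ar_rgba[..., :3] * 255
composited = composited.astype(np.uint8)
```

Real AR SDKs do this on the GPU with dense, noisy depth maps, but the principle is the same: without per-pixel depth, virtual objects always float in front of the real world.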

To Augment the World, First We Need to Understand It

The main difference between the virtual world of the Metaverse and a Real-World Metaverse is that in VR everything is virtual and simulated, so the system has no need to know what is what and where around the user, beyond the safe movement boundary. A Real-World Metaverse, however, wouldn’t be very useful if all it could do was detect the surfaces of your table, floor, and walls without any other context or information.

A useful AR system needs powerful perception capabilities. It should be able to detect that there is a door three feet away from you, that the door is red, and that it has a digital keypad, so that information can be augmented onto the door in a color that contrasts well against red, and the keypad can be highlighted with AR elements that help the user operate it. Outdoors, perception AI would let the AR system know the extent of the sidewalk you are on, that there is a Starbucks with special deals coming up in 50 feet, and that the buses driving by could take you where you need to go.

AR overlays on various items in a grocery store. Image source: Hyper-Reality.
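To make the sidewalk example concrete, here is a minimal Python sketch of semantic segmentation with an off-the-shelf torchvision model. Two caveats: the default pretrained weights cover Pascal VOC-style classes, which do not include “sidewalk,” so a real system would use a model trained on a driving dataset such as Cityscapes, and the SIDEWALK_CLASS index below is purely hypothetical:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Hypothetical class index; the default weights do not actually include a
# "sidewalk" class, so treat this as a stand-in for a driving-domain model.
SIDEWALK_CLASS = 1

model = deeplabv3_resnet50(weights="DEFAULT").eval()

# Placeholder for a normalized camera frame (batch, channels, height, width).
frame = torch.rand(1, 3, 520, 520)

with torch.no_grad():
    logits = model(frame)["out"]  # (1, num_classes, H, W)
pred = logits.argmax(dim=1)[0]    # per-pixel class labels

# The boolean mask gives the pixel extent of the (hypothetical) sidewalk,
# which an AR layer could then outline or shade for the user.
sidewalk_mask = pred == SIDEWALK_CLASS
```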

All the examples above require the AR system to be aware of what is what and where, so that it knows what information to augment and where to place it. Hence, to “augment” the world we first need to “understand” it: the Real-World Metaverse requires a powerful perception AI as an integral part in order to be meaningful and useful. The perception technologies needed include object detection and tracking, semantic segmentation, 3D reconstruction, gesture recognition, depth estimation, and more.
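Some of these building blocks already exist off the shelf. As an illustration, here is a minimal sketch of the first item on that list, object detection, using a stock torchvision detector on a placeholder frame; this is not any particular AR pipeline, just the shape of the perception step:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# A stock detector stands in for the perception component; its labels map
# to COCO categories (people, cars, buses, etc.).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Placeholder camera frame with values in [0, 1], shape (channels, H, W).
frame = torch.rand(3, 720, 1280)

with torch.no_grad():
    detections = model([frame])[0]

for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.8:  # keep only confident detections
        # An AR layer would anchor its overlays to these box coordinates.
        print(label.item(), round(score.item(), 2), box.tolist())
```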

AI perception technologies are in fact equally important for other, non-AR verticals. For example, self-driving cars need the same AI perception capabilities to operate the vehicle for us. However, a self-driving car typically carries many sensors and compute servers to power its AI, while AR devices such as smartphones and glasses usually sport just a few camera sensors and a much weaker mobile-grade processor. Making AI perception run extremely efficiently, yet accurately, on mobile-grade devices is therefore another problem that must be solved before the Real-World Metaverse can become a reality. For this part of the discussion, feel free to refer to this other blog post series on Efficient AI.
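As a taste of what “efficient AI” involves, here is a minimal sketch of one common technique, post-training dynamic quantization in PyTorch, applied to a toy network. It is just one option among pruning, distillation, and mobile-specific architectures, and is not a description of Phiar’s own method:

```python
import torch
import torch.nn as nn

# A toy perception head stands in for a full network.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Convert Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.rand(1, 512)
print(quantized(x).shape)  # same interface, smaller and faster on CPU
```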

Scene understanding AI, showing panoptic segmentation (top-right), depth estimation (bottom-left), and reconstructed 3D points (bottom-right). Image source: Google AI.

Where Is the Real-World Metaverse?

In addition to the software technologies that still need to be developed, there are many hardware limitations and challenges hampering the adoption of AR. Smartphones are simply not ideal for AR: the screens are too small, and users typically need both hands to hold the phone, which severely limits how much they can interact with the AR content. AR glasses are the better delivery device, but current models are still too bulky, need to be tethered to an external battery pack and/or compute unit, and offer very narrow fields of view (FOV), which greatly degrades the user experience. The list goes on, but fortunately megacorps (e.g., Meta, Apple, Google, Amazon, Microsoft) are already investing heavily in these hardware obstacles, and many have publicly announced plans and timelines for their next-gen AR glasses. While highly promising, these developments are still many years away from delivering AR glasses with the needed user experience.

At its 2018 F8 conference, Facebook (now Meta) announced that it is actively working on AR glasses. Image source: CNET.

Automobiles, on the other hand, are an excellent platform for delivering AR content to the driver and passengers with a very good user experience. On the hardware side, most cars today come with large infotainment displays, which make great hands-free screens for AR, and many carmakers already offer heads-up displays (HUDs) as an option, which can further improve the AR experience for drivers. Whether AR is displayed on a HUD or an infotainment screen, the driver’s eyes stay on the road, because the AR view shows the live camera feed in real time, unlike today’s 2D maps, whose cartoon-style graphics take the driver’s eyes completely off the road.

Furthermore, many additional AR use cases can be built on top of navigation, from features that help drivers operate their vehicles more safely, such as lane highlighting, to passenger experiences such as highlighting nearby points of interest, businesses, and special deals. Such enhanced in-cockpit AR experiences will become even more important when a car drives itself, connecting drivers and passengers to their surrounding environment. The automotive in-cockpit experience is therefore where the Real-World Metaverse is especially well suited for its initial deployment.

AR pick-up example use case, on an infotainment screen. Image source: Phiar.

To help make the Real-World Metaverse a reality inside every vehicle and revolutionize how people navigate, interact, and experience the world, the company I co-founded, Phiar, has been hard at work building a transformational automotive AR navigation and software platform powered by our efficient AI perception technology, which we call the Spatial-AI Engine.

Spatial-AI Engine running on a tablet in real-time, using a single camera. Image source: Phiar YouTube.

The whole system runs on existing mobile-grade automotive compute chipsets and requires just a single 720p front-facing camera plus other common vehicle sensors. The Spatial-AI Engine is our take on the missing AI piece of AR: it gives AR visual perception so that more applications can finally be developed. Follow us for more updates on how we continue to build the most efficient and accurate AI perception to enable the Real-World Metaverse, starting with vehicles!

Chen-Ping Yu

Founder, CTO at Phiar. An entrepreneurial technologist.
