Optical See-Through vs. Video See-Through on ThirdEye’s X1 Smart Glasses

ThirdEye Gen
Sep 12, 2018


If you’re interested in augmented reality, you’ve probably read about the two main methods to display AR content on Smart Glasses: optical see-through and video see-through.

Video see-through systems present video feeds from cameras on the headset, shown on displays inside it. This is the standard way phones do AR, for example. It can be useful when you need to experience something remotely: a robot you send to fix a leak inside a chemical plant, or a vacation destination you’re considering. It is also useful with image-enhancement systems: thermal imagery, night-vision devices, and so on.

Optical see-through systems combine computer-generated imagery with a “through the glasses” view of the real world, usually via a slanted semi-transparent mirror. If you are running a mission-critical application and are concerned about what happens should your power fail, an optical see-through solution still lets you see the real world in that extreme situation. And if you care about the utmost image quality, portable cameras and fully immersive head-mounted displays can’t match the “direct view” experience.

Another issue to consider is how much light the optics themselves block. For example, if LCOS optics block out 40% of outside light, then optical see-through makes the real world look as if you are viewing it through sunglasses. Once again this comes down to user preference: would you rather see the camera view, or see the real world as it is but with a slight tint? Every set of smart-glasses optics blocks a different amount of light, some more than others, so it ultimately comes down to preference.
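To put that blockage in numbers, here is a minimal sketch of the arithmetic; the 40% figure and the example scene brightness values are assumptions for illustration, not X1 specifications:

```kotlin
// Rough illustration only: the blocked fraction and illuminance values are
// assumed for this example, not ThirdEye X1 specifications.
fun perceivedIlluminance(sceneLux: Double, blockedFraction: Double): Double {
    // If the optics block e.g. 40% of outside light, 60% reaches the eye.
    return sceneLux * (1.0 - blockedFraction)
}

fun main() {
    val office = perceivedIlluminance(sceneLux = 500.0, blockedFraction = 0.40)
    val outdoors = perceivedIlluminance(sceneLux = 10_000.0, blockedFraction = 0.40)
    println("Office scene (~500 lux) appears as ~${office.toInt()} lux")        // ~300 lux
    println("Outdoor scene (~10,000 lux) appears as ~${outdoors.toInt()} lux")  // ~6,000 lux
}
```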

One big advantage of video see-through is that AR objects mapped to real-world objects register accurately, because the camera image and the rendered content go through the same pipeline. With optical see-through, a manual translation is needed so that the AR objects line up with the real world.
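As a rough sketch of what that manual translation can look like on Android, the snippet below applies a fixed calibration offset to a tracked model matrix before rendering; the offset values and the OpticalCalibration helper are illustrative assumptions, not part of the ThirdEye SDK:

```kotlin
import android.opengl.Matrix

// Sketch only: the offset values are placeholders a developer would tune
// empirically for their device, not ThirdEye-provided constants.
object OpticalCalibration {
    // Example offset: shift rendered content 2 cm right and 1 cm up so it
    // registers with what the wearer sees through the optics.
    private val offset = FloatArray(16).also {
        Matrix.setIdentityM(it, 0)
        Matrix.translateM(it, 0, 0.02f, 0.01f, 0f)
    }

    /** Returns the model matrix to use when rendering in optical see-through mode. */
    fun apply(trackedModelMatrix: FloatArray): FloatArray {
        val adjusted = FloatArray(16)
        // adjusted = offset * tracked pose (column-major, OpenGL convention)
        Matrix.multiplyMM(adjusted, 0, offset, 0, trackedModelMatrix, 0)
        return adjusted
    }
}
```

In practice the offset would be tuned per device (and often per user) until virtual content sits on top of the real object it is meant to annotate.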

How to make your app optical see-through:

Step 1: Do not render the camera view. If you are testing on a phone, the screen background will look black since the actual camera view is not being rendered. If testing on the X1, you will just see a slight dark tint from the light blockage.

Step 2: Render your AR content.

Step 3: (Only if your AR content is mapped to a real-world object) Adjust the relevant AR data so that it matches up with the real-world content.

For video see-through apps, no manual adjustment is needed, since the standard camera feed is used.
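Putting the three steps together, here is a minimal Kotlin sketch of an Android OpenGL renderer that handles both modes; the CameraFeedRenderer and ArContentRenderer interfaces and the calibration matrix are illustrative placeholders, not ThirdEye APIs:

```kotlin
import android.opengl.GLES20
import android.opengl.GLSurfaceView
import javax.microedition.khronos.egl.EGLConfig
import javax.microedition.khronos.opengles.GL10

// Illustrative helper interfaces, not ThirdEye SDK types.
interface CameraFeedRenderer { fun draw() }
interface ArContentRenderer { fun draw(modelOffset: FloatArray) }

class SeeThroughRenderer(
    private val opticalSeeThrough: Boolean,     // true for the X1's optics, false for phone-style video see-through
    private val cameraFeed: CameraFeedRenderer, // assumed helper that draws the camera texture
    private val arContent: ArContentRenderer,   // assumed helper that draws the virtual objects
    private val calibration: FloatArray         // assumed 4x4 offset aligning content to the wearer's view
) : GLSurfaceView.Renderer {

    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        // Black clear color: on an optical see-through display, "black" pixels
        // simply let the real world show through the optics.
        GLES20.glClearColor(0f, 0f, 0f, 1f)
    }

    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        GLES20.glViewport(0, 0, width, height)
    }

    override fun onDrawFrame(gl: GL10?) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT or GLES20.GL_DEPTH_BUFFER_BIT)

        // Step 1: only video see-through draws the camera image as the background.
        if (!opticalSeeThrough) {
            cameraFeed.draw()
        }

        // Steps 2 and 3: render the AR content, applying the manual calibration
        // offset only in optical see-through mode so world-anchored objects line
        // up with what the wearer sees through the glasses.
        val offset = if (opticalSeeThrough) calibration else IDENTITY
        arContent.draw(offset)
    }

    private companion object {
        val IDENTITY = floatArrayOf(
            1f, 0f, 0f, 0f,
            0f, 1f, 0f, 0f,
            0f, 0f, 1f, 0f,
            0f, 0f, 0f, 1f
        )
    }
}
```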

One aspect of video see-through systems is that it is much easier to match the video latency with the computer-graphics latency. Latency (delay) is inherent to immersive imaging systems: motion trackers are not instantaneous, computer-graphics generation is not immediate, and even when refreshing images at 60, 70, or even 120 Hz, there is a lag from sensing to imaging. When computer graphics need to be overlaid on the image of the actual world, video see-through and optical see-through behave differently. Optical see-through shows the real world with zero latency, which sounds great, except that there is then an inherent mismatch, or lack of synchronization, between what you see through the glasses and the graphics. If you’re showing a virtual sofa inside a real living room, this mismatch can be distracting. In contrast, video see-through allows you to delay the video so that it and the graphics are always in sync.
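As a conceptual sketch of that synchronization, the snippet below buffers camera frames and pairs each rendered overlay with the closest-timestamped frame, so the video and the graphics share the same small delay; the frame and overlay types are simplified stand-ins, not a real camera or tracking API:

```kotlin
import kotlin.math.abs

// Simplified stand-ins for camera frames and rendered AR overlays.
data class CameraFrame(val timestampNs: Long, val image: ByteArray)
data class RenderedOverlay(val timestampNs: Long, val pixels: ByteArray)

class VideoSeeThroughCompositor {
    private val pendingFrames = ArrayDeque<CameraFrame>()

    fun onCameraFrame(frame: CameraFrame) {
        // Hold the video back instead of displaying it immediately, so it can
        // later be paired with graphics rendered for the same instant.
        pendingFrames.addLast(frame)
    }

    fun onOverlayReady(overlay: RenderedOverlay): Pair<CameraFrame, RenderedOverlay>? {
        // Pick the buffered camera frame closest in time to the overlay and
        // present the two together; both then carry the same delay.
        val match = pendingFrames.minByOrNull { abs(it.timestampNs - overlay.timestampNs) }
            ?: return null
        pendingFrames.removeAll { it.timestampNs <= match.timestampNs }
        return match to overlay
    }
}
```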

Originally published at ThirdEye.
