In Search for Depth Part I — Smartphone Depth Sensing
Someone asked me why devices need depth sensing technology. Remember that scene in Terminator 2 where Arnie scans a bunch of people for their clothing sizes and says, “I need your clothes, your boots and your motorcycle”? The T-800 needed depth sensing to accurately determine the sizes of the clothes those people were wearing! That’s probably not a good example, but other use cases you might find useful include changing the background of your selfie photos, creating the soft, out-of-focus blur in your smartphone photos that is typical of professional cameras, or guiding you through an augmented version of maps overlaid on the real world. Beyond the smartphone, depth sensing also has applications in automotive, robotics and many other industries. However, existing technologies have limitations in cost, size, computational overhead and manufacturing complexity. At CRCM Ventures, we set out in search of the perfect depth sensing solution for the smartphone.
Today’s depth sensing technologies include stereo camera techniques that mimic human binocular vision. These techniques recover depth by comparing multiple, simultaneously acquired images and require two or more camera sensors. This solution is expensive, requires multiple components, and its range is limited by the baseline, i.e. the distance between the two camera sensors.
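The stereo principle above reduces to a simple relation: depth is the camera’s focal length times the baseline, divided by the disparity (how far a point shifts between the two images). A minimal sketch, with illustrative names and numbers that are not from any specific camera:

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic pinhole stereo relation: z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 50 px between two cameras 6 cm apart, with a
# 1200 px focal length, sits roughly 1.44 m away.
print(depth_from_disparity(1200.0, 0.06, 50.0))
```

Note the baseline in the numerator: halving the distance between the sensors halves the depth resolution at a given range, which is exactly why the sensor separation constrains this approach in a small phone body.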
Another approach is to use infra-red sensors that rely on time of flight or structured light. These techniques actively illuminate a scene with either pulsed or patterned infra-red light, then measure depth from the round-trip time of the infra-red light or from subtle changes in the projected light pattern. Again, this technique requires multiple components, which increases power consumption, cost and overall device footprint.
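The time-of-flight variant is equally simple at its core: the emitted pulse travels to the object and back, so depth is half the round-trip distance. A hedged sketch of the principle (the constant and function name are illustrative, not any vendor’s API):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def depth_from_round_trip(round_trip_s: float) -> float:
    """Time-of-flight principle: depth = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A pulse returning after 10 nanoseconds corresponds to ~1.5 m.
print(depth_from_round_trip(10e-9))
```

The tiny timescales involved (nanoseconds per metre) are why these systems need dedicated, power-hungry emitter and timing hardware alongside the image sensor.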
Arguably, the ideal depth sensing solution would be a passive, single-sensor system which, in contrast, directly captures the full light information, position (x, y and z) plus direction (the two angles θ and ϕ), without an increase in computational complexity or manufacturing burden. I’m excited to announce that we have found such a technology.
Introducing the Airy3D Transmissive Diffraction Mask (TDM). The simplest way to explain it: a thin layer of transparent material, shaped a bit like the top of a castle wall, placed over a standard camera sensor. When light passes through the material it diffracts, changing direction, before hitting the sensor. This diffraction reveals the phase and direction of the light, and thereby directly measures depth in real time.
Airy3D provides a solution that is passive and single-sensor, with a negligible profile and low computational load, and that drops into traditional manufacturing processes. The team behind Airy3D includes three PhDs in materials science, optics and computer vision with a combined 30+ years of experience in hardware and software. CRCM Ventures is proud to announce our investment in Airy3D alongside some tremendous co-investors such as R7 Ventures, WI Harper, Bosch Ventures and Nautilus Venture Partners.
Airy3D’s solution can be placed in any device with an image sensor, including cameras, webcams, Spectacles and, most importantly, smartphones. Look out for Airy3D’s TDM solution in devices in 2018.
Full press release available here. Stay tuned for In Search for Depth Part II — Outdoor Depth Sensing.