Indirect Time-of-Flight Depth Camera Systems Design

Refael Whyte
Chronoptics Time-of-Flight
10 min read · May 4, 2021

Indirect Time-of-Flight cameras are made up of many different components, and their selection and integration are critical to delivering a time-of-flight module that meets the required specification. In this blog post we explore the critical design choices and how they affect module performance.

About Chronoptics

Chronoptics designs indirect Time-of-Flight depth sensing modules, focusing on producing the best point cloud for a given application. For advice on designing and optimizing your Time-of-Flight camera, reach out to hello@chronoptics.com

Systems Introduction

The figure below details the different components that go into designing an indirect time-of-flight depth sensor.

Exploded diagram of the different components and design constraints for a time-of-flight depth camera

Below we discuss the decisions that go into each component.

Image Sensor Selection

Time-of-Flight image sensors are available from numerous vendors, including Sony, Infineon, Melexis, Analog Devices, ESPROS Photonics, Samsung, and Artilux. The selection criteria depend on:

  • Price
  • Power consumption
  • Image sensor resolution (number of pixels and aspect ratio)
  • Pixel size
  • Quantum efficiency at illumination wavelength
  • ADC resolution
  • Demodulation contrast
  • Pixel full well depth
  • Image sensor package, readout speed, and readout interface

The constraints placed on the selection criteria depend on the 3D application. In applications that need sunlight robustness the full well depth is important; in battery-powered electronics the power consumption and the efficient capture and use of photons are paramount.

The Melexis MLX75027 Time-of-Flight image sensor assembled onto a PCB. The BGA package and cover tape (removed) make PCB assembly straightforward.

The table below compares two image sensors, the MLX75027 and the EPC635, both of which have publicly available datasheets.

The MLX75027 has 32 times more pixels than the EPC635, but that comes at a higher price. The application of the depth data dictates the image sensor resolution required.

The pixel size, demodulation contrast, and quantum efficiency are all metrics relating to how efficiently reflected photons are captured. The bigger the pixel's active area, the bigger the surface over which incoming photons can be collected. The pixel's active area is the fill factor multiplied by its size. Both the MLX75027 and EPC635 are back side illuminated (BSI), meaning 100% fill factor. The quantum efficiency is the ratio of electrons generated to the number of arriving photons; the higher the quantum efficiency, the more photons are captured. The demodulation contrast is a measure of how many of the captured photons are used in the depth measurement. For more information on image sensor noise see this blog by Swati. If interested in a further blog on iToF image sensor parameters, tell me at r.whyte@chronoptics.com.
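
These parameters combine multiplicatively. As a back-of-envelope sketch in Python (the numbers are illustrative assumptions, not values from either datasheet):

```python
# Back-of-envelope estimate of useful signal electrons per pixel.
# All numbers are illustrative assumptions, not datasheet values.
photons_arriving = 20_000   # reflected photons hitting the pixel per integration
fill_factor = 1.0           # BSI pixel, so ~100% fill factor
quantum_efficiency = 0.30   # electrons generated per arriving photon
demod_contrast = 0.85       # fraction of captured electrons used for depth

signal_electrons = (photons_arriving * fill_factor
                    * quantum_efficiency * demod_contrast)
print(f"Useful signal: {signal_electrons:.0f} e-")  # 5100 e-
```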

The full well depth is the number of electrons the pixel can store before it saturates and can store no more. A larger full well depth means a larger dynamic range of reflectances, so a bright object can be measured at the same time as a dark object without the bright object saturating. A large well depth is one part of outdoor sunlight operation, as ambient sunlight photons also fill up the pixel well.
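
A small illustration of the ambient-light argument, with assumed numbers:

```python
# How ambient sunlight eats into the full well. Assumed example numbers.
full_well_e = 120_000   # full well depth in electrons
ambient_e = 90_000      # electrons generated by sunlight during integration

signal_headroom = full_well_e - ambient_e
print(f"Electrons left for signal: {signal_headroom}")  # 30000
# A deeper well leaves more headroom before a bright object saturates,
# which is one reason full well depth matters for outdoor operation.
```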

The sensor modulation frequency affects the distance measurement precision and the maximum measurable distance. The higher the modulation frequency, the more precise the measurement and the shorter the maximum distance. Phase unwrapping can be used to extend the maximum distance.
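
The trade-off can be made concrete: the maximum unambiguous distance is c / (2 × f_mod), while precision improves roughly in proportion to the modulation frequency. A minimal sketch:

```python
# Maximum unambiguous range: d_max = c / (2 * f_mod).
# Doubling f_mod halves the range but improves precision.
C = 299_792_458  # speed of light, m/s

def max_unambiguous_range(f_mod_hz: float) -> float:
    return C / (2 * f_mod_hz)

for f_mod in (20e6, 50e6, 100e6):
    print(f"{f_mod / 1e6:5.0f} MHz -> {max_unambiguous_range(f_mod):5.2f} m")
# 20 MHz -> 7.49 m, 50 MHz -> 3.00 m, 100 MHz -> 1.50 m
```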

The sensor interface and readout speed determine the SoC, or the SoC determines the sensor interface. Most image sensors use MIPI CSI-2, theoretically making interconnection between SoCs easy. However, different SoCs support different lane speeds, metadata, and other features.

One more thing to consider is the image sensor package, which can be bare die, ball grid array (BGA), or chip scale package (CSP). The different packaging options affect the PCB design and manufacturing. The other packaging consideration is the inclusion of any optical filters, such as in the TI OPT8241, where the package includes an 850nm bandpass filter.

Lens Selection

The key lens parameters are:

  • Field of View required for the application.
  • F# (or F-number).
  • Wavelength of illumination, to use the best lens coatings to reduce lens ghosting.
  • Image sensor size. The FoV changes with image sensor size, and the whole image sensor should receive light.
  • Depth of field.
  • Modulation transfer function (MTF).

The field of view is the angle over which light rays are accepted. The lens F number represents the size of the aperture (hole) the light passes through: the smaller the number, the bigger the hole, so more photons pass through. The disadvantage is a decrease in depth of field, so objects can be out of focus and blurry. The hyperfocal distance of the lens is the focus distance beyond which all objects are in focus. The hyperfocal distance normally sets the minimum object distance on ToF cameras, as closer objects are blurry; these blurry depth measurements can still be used for tasks like collision avoidance.
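
A common thin-lens approximation is H ≈ f² / (N × c) + f, where c is the circle of confusion (often taken as roughly one pixel pitch on a ToF sensor). A sketch with assumed values:

```python
# Hyperfocal distance approximation: H = f^2 / (N * c) + f.
# The values below are assumed examples, not tied to a specific lens.
focal_length = 4.0e-3        # 4 mm lens
f_number = 1.2               # wide aperture to collect more light
circle_of_confusion = 10e-6  # ~one 10 um pixel pitch

H = focal_length**2 / (f_number * circle_of_confusion) + focal_length
print(f"Hyperfocal distance: {H:.2f} m")  # ~1.34 m
# Focusing at H keeps everything from roughly H/2 to infinity acceptably sharp.
```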

The aperture of a camera: the smaller the F number, the bigger the hole that lets through more light.

As time-of-flight cameras use active illumination, we want to collect as many reflected photons as possible, which means we want a big hole and a small F number.

The coatings on the lens elements are critical to reducing lens flare (ghosting), as it causes multi-path interference, a source of measurement error in time-of-flight depth cameras. Using the correct coatings for the laser wavelength is essential. See my previous blog post for more information on multi-path interference.

Lens flare (ghosting) is caused by inter-reflections between optical elements in the lens. Bright reflections can spill over into dark regions, causing measurement errors.

The dominant noise source in ToF imaging is photon shot noise. The standard deviation of the noise is the square root of the number of arriving photons. Captured photons that are not signal photons cause noise, so an optical notch filter is used to reduce the number of non-signal photons. When blocking out sunlight photons, reducing the notch width is important.
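
A quick sketch of the effect (illustrative photon counts): ambient photons add noise but no signal, so cutting them with a tighter notch filter directly lifts the signal-to-noise ratio.

```python
import math

# Shot noise scales as sqrt(total photons), but only signal photons
# carry depth information. Photon counts are illustrative assumptions.
signal_photons = 5_000

for ambient_photons in (0, 20_000, 100_000):
    noise = math.sqrt(signal_photons + ambient_photons)
    snr = signal_photons / noise
    print(f"ambient {ambient_photons:7d}: SNR = {snr:5.1f}")
# ambient 0: 70.7, ambient 20000: 31.6, ambient 100000: 15.4
```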

A selection of time-of-flight lenses. The blue or green hue is from the anti-reflective coatings for 850nm or 940nm.

If an off-the-shelf lens doesn't meet your specification, you can create a custom lens design, though this does extend the lead time.

Illumination selection

The illumination consists of the VCSEL, its driver, and the optical diffusers that spread the light into the desired parts of the scene.

The light source is a critical part of time-of-flight depth sensing, as the measurement depends on collecting reflected photons. The key light source variables are:

  • Power budget
  • Laser efficiency
  • Packaging and diffuser FoV

In recent years ToF illumination sources have moved predominantly to VCSEL arrays, as they provide high optical power in an easy-to-use package. A great example is the BIDOS family of VCSELs from OSRAM, which come in a range of powers and diffuser angles, with 850nm and 940nm options. The ceramic package makes PCB assembly easy; no special processes are required.

Close up photo of the BIDOS VCSEL, showing the VCSEL array with the cover diffuser packaged for easy PCB-Assembly.

Another key consideration is how to drive the laser. Ideally we want a perfect sine wave, but a square wave is adequate. Only the optical power in the first harmonic contributes to the depth measurement. Switching a couple of amps of current on and off at 50MHz is not a trivial design. This has led to companies developing mixed-signal VCSEL drivers, such as Opnous, EPC with their EPC9144, and iC-Haus.
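
A quick numerical check of why the waveform matters: for an ideal 50% duty-cycle square wave, only about 81% of the AC power (8/π²) sits in the first harmonic that the sensor demodulates; the rest is wasted in higher harmonics.

```python
import numpy as np

# Fraction of a 50% duty-cycle square wave's AC power in the first
# harmonic, estimated with an FFT. Theory: 8 / pi^2 = 0.811.
n = 4096
t = np.arange(n) / n
square = np.sign(np.sin(2 * np.pi * t))    # one period of a +/-1 square wave

spectrum = np.abs(np.fft.rfft(square))**2  # power spectrum
fundamental = spectrum[1]                  # bin 1 = first harmonic
total_ac = spectrum[1:].sum()              # skip the DC bin
print(f"Power in first harmonic: {fundamental / total_ac:.3f}")  # ~0.811
```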

If driving the illumination in a way not tested by the vendor, extra reliability testing may be required to verify there are no early failures.

If using a tight notch filter, the wavelength stability over temperature becomes important, as the VCSEL wavelength can drift with temperature.

Eye safety

Illumination sources should be designed to IEC 60825-1:2014, the specification for eye safety. The eye is a lens: it focuses incoming light onto the back of the eye, much like a magnifying glass focusing the sun's rays to burn paper. The same thing can happen with your eyeball; the light is focused and burns a hole in the back of the eye, causing permanent vision damage. Near-IR wavelengths are still focused by the eye's lens, just not detected by the rods and cones in the eye. The burning limit of the eyeball limits the maximum optical power per square mm.

The other aspect of eye safety design is having no single point of failure that makes the illumination source non-eye-safe. For example, if the diffuser cracks and exposes the laser elements, is it still eye safe? If not, the crack needs to be detected and the laser turned off, or two barriers used in case one fails. Indium tin oxide (ITO) can be used as a coating, as it is electrically conductive and optically transparent, and its impedance will change if the surface is damaged. Alternatively, a photodiode in the laser can be used to detect changes in the back reflection indicating damage. The same considerations apply to power supplies shorting and other failure modes.
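
As a sketch of the interlock logic only (hypothetical thresholds and stand-in read functions; a real design implements this in hardware so a single software fault cannot defeat it):

```python
# Hypothetical eye-safety interlock sketch. read_ito_ohms and read_pd_amps
# stand in for real ADC reads; the thresholds are placeholders.
ITO_IMPEDANCE_MAX_OHMS = 150.0   # above this, the diffuser coating is damaged
PHOTODIODE_MIN_AMPS = 0.5e-6     # below this, the back reflection has changed

def diffuser_intact(read_ito_ohms, read_pd_amps) -> bool:
    ito_ok = read_ito_ohms() < ITO_IMPEDANCE_MAX_OHMS
    pd_ok = read_pd_amps() > PHOTODIODE_MIN_AMPS
    return ito_ok and pd_ok

def interlock_step(read_ito_ohms, read_pd_amps, laser_enable) -> None:
    # Called periodically: disable the laser the moment either check fails.
    laser_enable(diffuser_intact(read_ito_ohms, read_pd_amps))
```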

While on the topic of regulatory approval, FCC and CE approval for EMI is also required. Having 10 to 200MHz square waves switching at high current can create large EMI peaks if not designed correctly.

Mechanical Integration

The mechanical integration is the final form factor of the module, and how it mechanically couples with the rest of the design.

SoC Selection

What the SoC has to perform affects its selection; the possible tasks include:

  • Depth calculation.
  • Conversion to another protocol, e.g. USB 3.0.
  • Edge computing application.
  • Image sensor control.

Some specialized chipsets, such as the Cypress EZ-USB, enable conversion from MIPI CSI-2 to USB, so if offloading the depth calculation to a host PC while minimizing bill of materials (BOM) cost, such a chipset is ideal.

Some Time-of-Flight image sensors have all the timing control built in, while others do not. For example, the MLX75027 must be dynamically reconfigured over I2C to support multi-frequency operation. This can be performed by a small micro-controller or be part of a larger SoC.
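
A minimal sketch of what that reconfiguration might look like from a small controller, using the Linux smbus2 library. The device address and register below are placeholders, not the real MLX75027 register map:

```python
from smbus2 import SMBus

# Hypothetical per-frame reconfiguration over I2C. The address and
# register are placeholders, NOT the actual MLX75027 register map.
SENSOR_I2C_ADDR = 0x57   # placeholder device address
REG_MOD_FREQ = 0x10      # placeholder modulation-frequency register

def set_modulation_frequency(bus: SMBus, code: int) -> None:
    bus.write_byte_data(SENSOR_I2C_ADDR, REG_MOD_FREQ, code)

with SMBus(1) as bus:
    # Alternate between two settings for dual-frequency phase unwrapping.
    for freq_code in (0x00, 0x01):
        set_modulation_frequency(bus, freq_code)
        # ... trigger a raw frame capture at this frequency ...
```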

The depth calculation must occur somewhere, as multiple raw frames must be read out from the image sensor before a depth frame can be computed. Calibration must be applied, along with any image filtering and extra processing like phase unwrapping. As this is all done per pixel, the computational requirements scale with the number of pixels.
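
The core of the per-pixel maths, for the common four-phase scheme, looks roughly like this simplified sketch (calibration, filtering, and phase unwrapping omitted):

```python
import numpy as np

C = 299_792_458  # speed of light, m/s

def four_phase_depth(a0, a90, a180, a270, f_mod):
    """Simplified depth from four raw frames at 0/90/180/270 deg offsets.

    Real pipelines also apply per-pixel calibration, image filtering,
    and phase unwrapping, all omitted here for clarity.
    """
    i = a0.astype(np.float32) - a180.astype(np.float32)
    q = a90.astype(np.float32) - a270.astype(np.float32)
    phase = np.mod(np.arctan2(q, i), 2 * np.pi)  # radians, 0..2*pi per pixel
    amplitude = 0.5 * np.hypot(i, q)             # useful as a confidence map
    depth = phase * C / (4 * np.pi * f_mod)      # metres
    return depth, amplitude
```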

With megapixel (1024x1024) sensors from ADI coming soon, and VGA (640x480) available now, the computational requirements can be significant, and careful selection of the computational platform is required. A GPU, DSP, or FPGA is well suited, as most of the computation is per-pixel and can be run in parallel. The depth pipeline can be computed in floating point or fixed point depending on the requirements.
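
To put rough numbers on the load (the per-pixel operation count is an assumption):

```python
# Rough throughput estimate for a VGA depth pipeline. Illustrative only.
width, height = 640, 480     # VGA ToF sensor
raw_frames_per_depth = 8     # e.g. 4 phases x 2 modulation frequencies
depth_fps = 30
ops_per_pixel = 200          # assumed cost of atan2, calibration, filtering

pixels = width * height
raw_rate = pixels * raw_frames_per_depth * depth_fps  # raw pixels per second
ops_rate = pixels * ops_per_pixel * depth_fps         # operations per second

print(f"Raw pixel rate: {raw_rate / 1e6:.0f} Mpix/s")  # ~74 Mpix/s
print(f"Compute load: {ops_rate / 1e9:.2f} GOPS")      # ~1.84 GOPS
```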

Chronoptics Kea Camera

Flyer for the Kea camera, with key specifications

The diagram below labels the components selected for the Chronoptics Kea camera.

Exploded view of the Chronoptics Kea time-of-flight camera with the key components labelled.

The flyer for the camera is above, showing the key specifications. VGA resolution was a key specification, and the MLX75027 image sensor met it. Next was the SoC selection: it had to perform the depth computation and connect to the outside world over Ethernet or USB 3.0. It also had to reconfigure the MLX75027 to enable multi-frequency methods, for phase unwrapping and multipath removal. The IMX8 Quad filled these requirements: its GPU supports OpenCL so it can run the depth pipeline, and it has a 4-lane MIPI CSI-2 receiver for receiving data from the MLX75027. The IMX8 has a real-time M4 core that can interface with the MLX75027 to reconfigure it for multi-frequency operation. The IMX8 also supports being a USB 3.0 device and host, and Gigabit Ethernet.

The FoV was selected to match the diffuser angle of the BIDOS VCSELs from OSRAM. The lens coatings are designed for 940nm, and a notch filter is part of the lens assembly. The notch filter, large pixel full well depth, and VCSELs enable time-of-flight depth measurements in 120klux ambient light (a bright sunny day). A second sheet sits on top of the VCSELs for eye safety. Thermally conductive, electrically isolating thermal compound transfers the heat from the VCSEL and VCSEL driver to the front heat sink, with the back heat sink dissipating the heat produced by the IMX8. An optional RGB module is available. The USB 3.0 host enables connection to a variety of peripherals for rapid ToF product development, such as TPU compute sticks like the Google Coral, or prolonged data capture to hard disk. The IMX8's quad cores enable lightweight edge computing, as some CPU cores remain available.

Other Cameras

The spider diagram below shows the different constraints placed on different designs. For consumer AR/VR applications, form factor, size, weight, cost, and power consumption are the main drivers of the specification, while for industrial automation, accuracy, precision, and robustness in different environments are critical.

Spider diagram of the different design specifications placed on a time-of-flight module design.

Calibration

Each camera module requires calibration, something I’ll discuss more in a future blog post.

Conclusion

This article is an introduction to time-of-flight depth module system design, giving a brief overview of the major component selections and how these decisions are inter-related. There are many variables, many of which are interconnected, that need to be considered in a time-of-flight module design. Chronoptics specializes in making these decisions to build modules that deliver the best point cloud for a given application. Reach out to hello@chronoptics.com to learn more about our services and how we can help with your Time-of-Flight camera design.
