Synchronized 3-camera array recording itself in the dark

Synchronize multiple cameras to capture at the same time

  • image stitching for panoramic photo/video, or image blending, for example HDR or low-light imagery
  • stereo vision for 3D reconstruction and depth-sensing applications
  • multi-camera tracking, for example eye tracking with one camera dedicated to each eye

Why is synchronization so important?

How to synchronize cameras?
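
A common pattern, sketched below in Python, is to discipline every camera board's clock from a single source (for example, PTP over the shared Ethernet link) and have each board trigger its capture at the same frame boundaries of that common clock. The `capture_frame` callback is a hypothetical stand-in for the real camera driver call.

    import time

    FRAME_PERIOD_NS = 33_333_333  # ~30 fps capture interval

    def next_capture_deadline(now_ns: int) -> int:
        # Round up to the next frame boundary on the shared clock.
        return ((now_ns // FRAME_PERIOD_NS) + 1) * FRAME_PERIOD_NS

    def capture_loop(capture_frame):
        # Assumes the system clock is disciplined by a common source
        # (e.g., PTP), so every board computes identical deadlines.
        while True:
            deadline = next_capture_deadline(time.time_ns())
            time.sleep(max(0, deadline - time.time_ns()) / 1e9)
            capture_frame(timestamp_ns=deadline)

Because all boards compute identical deadlines, the capture skew is bounded by the clock-synchronization error plus scheduler jitter, not by when each board happened to start its capture loop.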

Building a test setup

inastitch prototype with “flat” camera setup

Does it really work well?

Three synchronized cameras recording the same stopwatch
Screenshot of the timing experiment
  • The computer renders the stopwatch at 60 fps, so the video output only refreshes every ~16 ms. In other words, it cannot update every millisecond.
  • The screen pixels themselves need a few milliseconds to change from black to white (see the rough estimate below).
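
A quick calculation shows why such a screen cannot serve as a millisecond-accurate reference. The pixel response time below is an assumed figure for a typical LCD, not a measured one.

    REFRESH_HZ = 60
    refresh_ms = 1000 / REFRESH_HZ     # ~16.7 ms between display updates
    pixel_response_ms = 5              # assumed LCD black-to-white time
    print(f"display updates every {refresh_ms:.1f} ms")
    print(f"worst-case reading uncertainty ~ {refresh_ms + pixel_response_ms:.1f} ms")

Any millisecond digit read off the screen is therefore only trustworthy to within roughly two hundredths of a second.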

What is the output latency?

Conclusion

Going further

How to reduce latency?

  • [camera board] capture frame
  • [camera board] encode frame
  • [Ethernet] network delivery
  • [stitcher board] frame alignment
  • [stitcher board] frame decoding
  • [stitcher board] stitched frame rendering
  • [display] display
  • Using gigabit Ethernet, a frame would be delivered faster.
  • Using AVB Ethernet, a frame could be delivered just in time for rendering: no waiting in a buffer for the next render loop, and no frame alignment needed.
  • Using hardware decoding and a hardware copy, a frame would be available in GPU memory without any CPU copy.
  • Using a high-framerate display (e.g., 144 Hz or higher), the rendered output would reach the next screen refresh with less delay (see the back-of-the-envelope budget below).
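
To see how these stages add up, here is a back-of-the-envelope latency budget. All per-stage numbers are hypothetical placeholders, not measurements from the prototype; they only illustrate how the improvements above shrink the total.

    # Hypothetical per-stage delays in milliseconds; real values
    # depend entirely on the hardware and software involved.
    pipeline_ms = {
        "capture":         17,  # one frame interval at ~60 fps
        "encode":          10,
        "network":         12,  # e.g., 100 Mbit/s Ethernet
        "align_buffer":    17,  # waiting for matching frames
        "decode":          10,
        "render":          17,  # next render loop at 60 fps
        "display_refresh": 17,  # next screen refresh at 60 Hz
    }
    print(f"end-to-end latency ~ {sum(pipeline_ms.values())} ms")

    # Example: gigabit Ethernet shortens delivery, and AVB-style
    # scheduling removes the alignment buffer entirely.
    pipeline_ms["network"] = 2
    pipeline_ms["align_buffer"] = 0
    print(f"after network improvements ~ {sum(pipeline_ms.values())} ms")

The model makes the trade-off visible: the network-related terms can be engineered away, while the capture, render, and display terms are bounded by the frame rates of the devices themselves.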
