Get the most out of analog cameras in the automotive context

A lightweight, optimized de-interlacing algorithm

Emmanuel Berthier
Published in grouperenault
3 min read · Aug 28, 2020


Analog cameras are still used on low/mid-end vehicles thanks to 2 great qualities:

  • They are cheap
  • The signal can be transmitted through a low-cost and robust cable (twisted pair)

But they come with one major drawback: the poor image quality inherent to NTSC (National Television System Committee), the analog television color system introduced in North America in 1954:

  • First, the resolution is limited (640x480 max)
  • Second, the frames are interlaced: each frame is split into two half-resolution fields captured at different times

The second one is the worst: converting interlaced half-frames into progressive full frames is not transparent and produces interlacing artifacts:
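To make the artifact concrete, here is a minimal sketch (with hypothetical pixel values, not taken from the library) of why naively "weaving" the two fields combs on motion: the fields of one NTSC frame are captured about 1/60 s apart, so a moving edge sits at a different position in each field:

```python
# Two NTSC fields: the even field holds even-numbered scan lines, the odd
# field holds odd-numbered ones, captured ~1/60 s apart.
# A vertical edge moving right between the two captures illustrates combing.

WIDTH = 8

def scan_line(edge_pos):
    """A scan line: dark (0) left of the edge, bright (9) from edge_pos on."""
    return [0] * edge_pos + [9] * (WIDTH - edge_pos)

# Even field captured first: edge at x=3.
# Odd field captured 1/60 s later: the edge has moved to x=5.
even_field = [scan_line(3) for _ in range(2)]  # lines 0, 2
odd_field = [scan_line(5) for _ in range(2)]   # lines 1, 3

# "Weave" de-interlacing: interleave the two fields into one frame.
frame = []
for even_line, odd_line in zip(even_field, odd_field):
    frame.append(even_line)
    frame.append(odd_line)

for line in frame:
    print("".join(str(p) for p in line))
# 00099999
# 00000999
# 00099999
# 00000999
```

Every other line shows the edge at a different position, producing the sawtooth "combing" typical of interlaced video on moving objects.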

This problem is well known and has been addressed by many algorithms for decades.

In the PC world, there are many video toolkits, like FFmpeg, VLC or GStreamer, that offer such algorithms. Some are light and basic, some are complex and CPU-hungry, and only a few of them are optimized for embedded systems.

Automotive context

In the automotive market, de-interlacing has historically been handled by the NTSC decoder chip, but its processing power is too low to implement complex algorithms.

At Renault Software Factory, we had the idea of moving the processing from the decoder to the “In-Vehicle Infotainment” (IVI) system, aka the central panel, which is equipped with a much more powerful processor.

After many experiments, we concluded that none of the open source algorithms offers the quality trade-off required by automotive parking cameras within the CPU budget we had. Moreover, their code is usually optimized for the x86 processors found in PCs, not for the ARM processors used in embedded systems. So we decided to develop our own.

The target use case is the rear-view camera. This camera has a wide viewing angle (148°) and is used during parking maneuvers to display the rear scene. In this situation, camera and object movements are quite slow.

We observed that image precision is less perceptible in moving areas, so we focused on an algorithm that keeps the best resolution in static areas and interpolates pixels in moving areas.

We also kept it simple enough to be accelerated with ARM Neon vector instructions.
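As a rough illustration of that idea, here is a Python sketch of a motion-adaptive de-interlacer, not the library's actual C/Neon code: the motion threshold, the field ordering and the function name are hypothetical. Each missing line is filled per pixel, weaving the previous field when the area looks static and interpolating vertically when it looks like it moved:

```python
def deinterlace(cur_even, prev_odd, prev_prev_odd, threshold=10):
    """Build a full frame from the current even field (hypothetical sketch).

    Even lines come straight from cur_even. Each missing odd line is filled
    per pixel: if the two previous odd fields agree at that position (static
    area), weave the pixel from prev_odd, keeping full vertical resolution;
    otherwise interpolate from the neighbouring even lines, avoiding combing.
    The threshold value is an arbitrary placeholder, not a tuned constant.
    """
    width = len(cur_even[0])
    frame = [[0] * width for _ in range(2 * len(cur_even))]
    for y, line in enumerate(cur_even):
        frame[2 * y] = list(line)
    for y in range(len(prev_odd)):  # fill output line 2*y + 1
        above = cur_even[y]
        below = cur_even[y + 1] if y + 1 < len(cur_even) else cur_even[y]
        for x in range(width):
            motion = abs(prev_odd[y][x] - prev_prev_odd[y][x])
            if motion <= threshold:
                frame[2 * y + 1][x] = prev_odd[y][x]              # weave
            else:
                frame[2 * y + 1][x] = (above[x] + below[x]) // 2  # interpolate
    return frame
```

This per-pixel select is SIMD-friendly: with Neon, the threshold comparison yields a mask and a bit-select instruction such as vbslq_u8 can pick the woven or interpolated pixel lane by lane, without branches, which is the kind of structure that keeps the CPU cost low on ARM.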

Results

Here is a sample video demonstrating the result:

Compared to a basic algorithm such as “Scale Bob”, the perceived quality is much better (look at the car wheels, landmarks and other diagonal lines):

Regarding CPU usage, it consumes around 2% of one core at 2.4 GHz to de-interlace an NTSC stream. This is low enough to be embedded in a low-end IVI, and we can even use it to process 4 cameras in parallel to build a surround view.

Today we are releasing the code as an open source library, under the Mozilla Public License, in order to get feedback and improvement suggestions, and maybe an integration as a plug-in in some SDKs.

If you are interested, please have a look at:

Useful links:

https://en.wikipedia.org/wiki/Deinterlacing
https://wiki.videolan.org/Deinterlacing/#Appendix:_Technical_summary
