Enhanced Dynamic Range
In this article I’d like to briefly describe the work that I did at OmniVision.
This patent describes the system:
An HDR quarter-resolution image, captured with a sensor that supports spatially varying pixel exposure, was processed through an HDR ISP pipeline, upscaled to full resolution, and further enhanced by overlaying high-frequency components from a full-resolution image captured sequentially with the same sensor configured in linear mode. The HDR image was merged on the sensor, tone-mapped in the ISP, and then aligned and fused with the non-HDR (LDR) image on the GPU.
The image sensor has a Bayer pattern and a variable exposure pattern. Each of the four 2x2 blocks inside a larger 4x4 block was assigned a different integration time, and the four were combined on the sensor using an exposure-merge algorithm.
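To make the merge step concrete, here is a minimal sketch of merging one 4x4 tile into a 2x2 block. The exposure ratios, saturation threshold, and the "longest unclipped exposure wins" policy are my illustrative assumptions, not OmniVision's actual algorithm:

```python
import numpy as np

# Assumed integration times for the four 2x2 sub-blocks of a 4x4 tile,
# row-major order (ratios are illustrative only).
EXPOSURES = np.array([8.0, 4.0, 2.0, 1.0])
SAT = 0.95  # saturation threshold in normalized sensor units

def merge_tile(tile):
    """Merge one 4x4 tile (four 2x2 sub-blocks) into a single 2x2 block.

    Each position (y, x) inside a 2x2 sub-block is the same Bayer color,
    so the four sub-blocks give four exposures of each color sample.
    """
    subs = [tile[0:2, 0:2], tile[0:2, 2:4], tile[2:4, 0:2], tile[2:4, 2:4]]
    out = np.zeros((2, 2))
    for y in range(2):
        for x in range(2):
            # Prefer the longest exposure that is not clipped; fall back
            # to the shortest exposure if everything saturates.
            for s, t in zip(subs, EXPOSURES):
                if s[y, x] < SAT or t == EXPOSURES[-1]:
                    out[y, x] = s[y, x] / t  # exposure-normalized radiance
                    break
    return out
```

Normalizing by integration time puts all four sub-blocks on a common radiance scale, which is what makes the merged 2x2 block high dynamic range.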
Step 1 of the diagram above shows the acquisition of the HDR image using the sensor with four spatially varying exposures. After merging, the HDR image was processed and tone-mapped in the ISP and saved to memory.
Since the merging stage combined four 2x2 blocks into a single 2x2 block, much of the spatial resolution was lost and aliasing appeared.
To restore the resolution of the HDR image, a computational photography approach was introduced in step 2: the sensor pixels were integrated with the same exposure time, and the resulting full-resolution image was processed through the ISP and also saved to memory for later fusion with the HDR image.
The fusion method calculates the high-frequency component of the full-resolution LDR image and renders it over the up-scaled HDR image.
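A minimal sketch of this detail-transfer idea, assuming a box blur as the low-pass filter and nearest-neighbour upscaling (a real pipeline would use better filters, and the actual fusion is described in the patent):

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur used here as a stand-in low-pass filter."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    kernel = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, p)
    return p

def upscale2x(img):
    """Nearest-neighbour 2x upscale: quarter resolution -> full resolution."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def fuse(hdr_quarter, ldr_full):
    """Overlay the LDR image's high-frequency detail on the upscaled HDR image."""
    high_freq = ldr_full - box_blur(ldr_full)  # detail layer of full-res LDR
    return upscale2x(hdr_quarter) + high_freq
```

The upscaled HDR image supplies the wide-dynamic-range base layer; the LDR detail layer restores the fine structure lost in the on-sensor merge.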
The exposure merge was done on the sensor chip, and the fusion was done on the GPU.
It is also possible to do the exposure merge on the ISP chip, but that would increase the data throughput between the sensor and the ISP by 4x.
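The 4x factor follows from the resolutions involved: with on-sensor merge only the quarter-resolution merged frame crosses the sensor-ISP link, while merging on the ISP requires sending all four exposure sub-blocks, i.e. the full-resolution raw. A quick back-of-the-envelope check (the 16 MP / 10-bit / 30 fps figures are my assumptions for illustration):

```python
# Illustrative sensor-to-ISP bandwidth comparison.
full_res_pixels = 16_000_000   # assumed full sensor resolution
bits_per_pixel = 10            # assumed raw bit depth
fps = 30                       # assumed frame rate

# On-sensor merge: only the quarter-resolution merged HDR frame leaves the chip.
on_sensor_bps = (full_res_pixels // 4) * bits_per_pixel * fps

# Merge on the ISP: all four exposure sub-blocks (full-resolution raw) must be sent.
on_isp_bps = full_res_pixels * bits_per_pixel * fps

print(on_isp_bps / on_sensor_bps)  # → 4.0
```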
The system showed impressive results, outperforming iPhone and Pixel phones of the same generation on both dynamic range and detail reproduction, and it exhibited no ghosting artifacts even when a fast-moving metronome was placed against the right window.
I presented the system demo at the poster and demo sessions at EI2017 and CES2017.
The Electronic Imaging committee placed the invention on the same page as Boyd Fowler’s and Brian Cabral’s talks!
The concept was developed with the OmniVision Singapore team in collaboration with a research institute, and involved cross-team efforts from teams in the US and China.
Alignment of the tone-mapped and non-tone-mapped images was done on the GPU and was developed by the OmniVision team in Shanghai:
The speed of RANSAC was substantially increased by adding constraints on the set of control points and by removing outliers, guided by user-experience requirements and memory limitations. The method is described in the following invention:
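To illustrate the general idea of constraining RANSAC's control points (this is only a sketch of the generic technique, not the patented method; the affine model, spread threshold, and iteration counts are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (3+ point pairs)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M  # 3x2 matrix: [x y 1] @ M ~= dst

def ransac_affine(src, dst, iters=200, thresh=2.0, min_spread=10.0):
    """RANSAC with a constraint on the sampled control points.

    Samples whose three points are clustered (closer than min_spread pixels)
    are rejected before model fitting, pruning degenerate hypotheses and
    cutting wasted iterations.
    """
    best_M, best_inliers = None, 0
    src_h = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        pts = src[idx]
        # Constraint: reject clustered control-point samples early.
        if min(np.linalg.norm(pts[i] - pts[j])
               for i in range(3) for j in range(i + 1, 3)) < min_spread:
            continue
        M = fit_affine(pts, dst[idx])
        err = np.linalg.norm(src_h @ M - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_M, best_inliers = M, inliers
    return best_M, best_inliers
```

Rejecting a bad sample costs a few distance computations instead of a full model fit plus a consensus pass over all correspondences, which is where the speed-up comes from.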
Management of ghosting artifacts was greatly improved by OVT US team.