Lighting exposure improvement using Fourier Transform with OpenCV

Elvis Ferreira
6 min read · May 4, 2019


This is the third tutorial in a series I’ve been doing on computer vision fundamentals using the OpenCV library. All the tutorials in the series were proposed by an undergraduate computer vision class and implemented in Python by choice.


This part of the series focuses on introducing the Fourier transform for images, with an application to filtering. Up to this point I have only mentioned spatial image manipulation; however, it is also possible to manipulate images in the frequency domain through this transformation.

Fourier Transform

The Fourier Transform decomposes a waveform (basically any real-world waveform) into sinusoids. That is, the Fourier Transform gives us another way to represent a waveform. For digital images, frequency relates to image detail: low frequencies represent the overall shapes, while high frequencies carry the finer details.
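As a quick illustration (a minimal numpy sketch, not from the original post), the DFT of a signal built from two sinusoids recovers exactly the frequencies that were mixed in:

```python
import numpy as np

fs = 500                          # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)       # one second of samples
# A waveform mixed from two sinusoids: 5 Hz and 40 Hz.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# The DFT re-expresses the waveform as sinusoidal components.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest bins sit at the frequencies we mixed in: 5 and 40 Hz.
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(peaks)
```

For a 2-D image the idea is the same, only with sinusoids varying along both axes.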

Furthermore, the Discrete Fourier Transform (DFT), used for digital data, reveals noise or imperfections that aren’t easily handled or even seen in the spatial domain. To exemplify, let’s see how the DFT represents the Moiré pattern, a periodic noise on an image. The pattern shown below was a common artifact of old printing methods.

Moiré pattern example (source: professor)
DFT result (better implementation here)

The DFT result above is a little noisy, but good enough for understanding. It’s possible to see symmetric groups of dots around the middle of the grey plane, and those dots represent the periodic Moiré noise. Special filters can be used to remove the dots; once they are removed and the inverse DFT is applied, a new, noiseless image is obtained.
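That removal step can be sketched in numpy (a synthetic striped image stands in for the scanned picture here): zero the symmetric pair of bright bins and invert the transform:

```python
import numpy as np

# Synthetic 128x128 "image": a smooth ramp plus periodic (Moire-like) noise.
n = 128
y, x = np.mgrid[0:n, 0:n]
clean = x / n                                          # low-frequency content
noisy = clean + 0.3 * np.sin(2 * np.pi * 16 * x / n)   # stripes at 16 cycles

# In the shifted DFT, the stripes appear as a symmetric pair of bright dots.
spectrum = np.fft.fftshift(np.fft.fft2(noisy))

# Notch filter: zero the two dots (+/-16 cycles along the horizontal axis).
c = n // 2
spectrum[c, c - 16] = 0
spectrum[c, c + 16] = 0

# The inverse DFT recovers an almost noise-free image.
restored = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
print(float(np.abs(restored - clean).max()))  # small residual left by the notch
```

In a real photo the dot positions are found by inspecting the spectrum rather than known in advance.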

Homomorphic Filtering

After this brief introduction to the concept, an implementation of the homomorphic filter using the DFT is proposed. This filter is a way to improve the lighting exposure of a picture with heavy shadows.

Every pixel of an image can be represented by an illuminance component and a reflectance component. The illuminance i(x, y) represents the amount of light falling on the pixel and varies slowly through space, while the reflectance r(x, y) indicates how much light is reflected. The latter is strictly related to the material the light is being reflected on and can vary quickly.

So, every pixel can be understood as a product of these two components:

P(x, y) = i(x, y)r(x, y)

This way, fixing the lighting of a scene takes more than just increasing the illuminance component, since it is mixed with the reflectance; applying the DFT directly to the image would not separate the two components.

The problem is harder than it looks. Without much math, here is a summary of how the filtered image is produced:

z(x, y) = ln(i(x, y)r(x, y)) = ln(i(x, y)) + ln(r(x, y))

S(u, v) = F(z(x, y)) · H(u, v)

s(x, y) = F⁻¹(S(u, v))

where H(u, v) is the homomorphic filter in the frequency domain:

Homomorphic filter in the frequency domain
Ideally, γH (gamma high) and γL (gamma low) should not be equal or have their order inverted, so that the filter can work.

where, following the usual formulation, H(u, v) = (γH − γL)(1 − e^(−c·D²(u, v)/D0²)) + γL, with D(u, v) the distance from the center of the shifted spectrum, D0 the cutoff distance and c a constant that controls the sharpness of the transition.

The filtered image then comes from:

g(x, y) = exp(s(x, y))
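As a sanity check on this chain of equations (a small numpy sketch, not part of the original post): with H(u, v) ≡ 1, i.e. no filtering at all, the pipeline reduces to the identity and returns the original image:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(1.0, 255.0, size=(8, 8))    # strictly positive pixels

z = np.log(img)                               # z = ln(i * r)
S = np.fft.fft2(z)                            # S = F(z) * H, with H == 1 here
s = np.real(np.fft.ifft2(S))                  # s = F^-1(S)
g = np.exp(s)                                 # g = exp(s)

print(np.allclose(g, img))                    # True: the chain is the identity
```

All the filtering work therefore happens in the choice of H(u, v).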

Furthermore, the step-by-step process is represented below as a flowchart:

Homomorphic filter flowchart (source)

Now I’m going to explain, step by step, how to implement the DFT for digital images; but first, let’s remember how coordinates are represented in digital images. In part I, I mentioned that the (0, 0) coordinate of a digital image is at the upper-left corner. If we applied the DFT to the image like that, the zero frequency of the resulting spectrum would not sit at the image center. Thus, to help visualization and the application of filters, the quadrants are swapped as the figure below shows, so that the result is centered. (The Fourier plot for the Moiré pattern above is already like that.)

DFT quadrant shift (source)

Although this technique implies a square image, the process can be done with a built-in OpenCV function shown later. Another observation before the implementation: there are optimizations to the DFT calculation that depend on the image size. OpenCV’s documentation notes that sizes that are products of twos, threes and fives perform best, so we also need to recalculate the image size and, most likely, pad the image.

To start things off, let’s read a black-and-white image with bad light exposure, get its optimal DFT sizes and initialize some global variables needed to use the filter.

Now, with the ideal sizes, we can add new borders to the image, on the bottom and right (this could be done on any side), with a constant value. After the padding we can transform the image to the frequency domain and then apply the shift mentioned earlier so it is better represented (doc). A subtler point is the manipulation needed before showing the result, since the DFT output has real and imaginary parts.

Image chosen to apply the homomorphic filter
DFT for the image chosen

So, now with the image and its DFT in hand, it’s time to implement the filter itself. The code is commented according to the explanation above.
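The author’s code itself isn’t reproduced here; the following is a minimal numpy sketch of such a filter. The transfer function and parameter names (d0, c, gl, gh, matching the D0, C, γL and γH discussed below) follow the usual formulation and are assumptions, not the post’s exact code:

```python
import numpy as np

def homomorphic(img, d0=30.0, c=1.0, gl=0.5, gh=2.0):
    """ln -> centered DFT -> multiply by H(u, v) -> inverse DFT -> exp."""
    rows, cols = img.shape
    z = np.log1p(img.astype(np.float64))      # z = ln(image); +1 avoids ln(0)
    Z = np.fft.fftshift(np.fft.fft2(z))       # centered spectrum of z

    # H(u, v) = (gh - gl) * (1 - exp(-c * D^2 / d0^2)) + gl
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2    # squared distance from center
    H = (gh - gl) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + gl

    s = np.real(np.fft.ifft2(np.fft.ifftshift(Z * H)))  # s = F^-1(S)
    return np.expm1(s)                                   # g = exp(s) - 1
```

With gh above 1 and gl below 1, low frequencies (illuminance) are attenuated while high frequencies (reflectance) are boosted, which is what lifts the shadows.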

The homomorphic function is then called by the functions responsible for changing its parameters as the user moves the trackbars. The trackbars are defined in the main function and take a limit, a callback function and a variable holding the current value. The bars were kept on a 0–100 scale by convention and are responsible for changing the parameters of the filter.

Trackbars example

D0, C, γL and γH are sensitive parameters, and the user inputs are not treated with much precision; they are implemented for the sake of experimenting.

No parameter is allowed to become 0, even if the user sets it so, to keep it from cancelling its effect out. The C parameter was reduced in scale because it proved a lot more prone to saturation than the others, and γL and γH (gamma low and high, respectively) are kept so that one is always greater than the other.

The result after playing with the bars is a picture with much better light exposure, as expected. The black borders are the padding added to solve the optimization problem; they do not interfere with the filter calculation.

Homomorphic filter result

That’s it: in this part we saw more filtering and got some idea of the impact of the Fourier Transform on digital images. :)

Thank you for reading. For more, check out:

1. Introduction to OpenCV with Python

2. Introduction to OpenCV with Python part II

4. Let’s play with image borders on OpenCV

5. Color quantization with Kmeans in OpenCV

All the code can be found in my public repository on GitHub.
