Enhance your image in three ways in Python

Misha Ysabel
Published in Data Caffeine
8 min read · Feb 2, 2021

Have you ever taken a photo and realized that the lighting was too dark? Or that the image had too much noise? There was a time when most of my photos had a blue tint because my phone’s UV filter was damaged. It was annoying to look at when scrolling through my gallery. Most of these issues can be solved with image enhancement. In this article, we will enhance images with three different techniques.

First, let us tackle the Fourier transform technique with a sample image of a moon crater. As you can observe, the image has faint white horizontal lines running across it. So what can the Fourier transform do to remove these lines? We know that these white lines appear periodically in the image.

moon crater

The Fourier transform of the image can reveal periodic patterns and artifacts. So let us compute and display the Fourier transform with a few lines of code using the NumPy and scikit-image libraries.

# Import libraries
import numpy as np
from skimage.io import imread, imshow
from skimage.color import rgb2gray  # rgb2gray lives in skimage.color, not skimage.io

# Compute and display the Fourier transform of the image
orbiter = rgb2gray(imread('lunar_orbiter.jpg'))
orbiter_fft = np.fft.fftshift(np.fft.fft2(orbiter))
imshow(np.log(abs(orbiter_fft)), cmap='gray');
Left: Original Image; Right: Fourier Transform of the Moon Crater

The white lines cause the vertical line that appears in the middle of the Fourier transform. We can remove the white lines by replacing that vertical line with a small value. Then we transform the modified Fourier image back to the spatial domain with another short code snippet.

orbiter_fft2 = orbiter_fft.copy()
orbiter_fft2[:280,orbiter_fft.shape[1]//2] = 1
orbiter_fft2[-280:,orbiter_fft.shape[1]//2] = 1
imshow(np.log(abs(orbiter_fft2)), cmap='gray');
Fourier transform with the suppressed vertical line.

The suppressed vertical line is not connected in the middle because the center of the Fourier transform holds the image's low-frequency content, which we want to keep. Now, we can check the resulting image.

imshow(abs(np.fft.ifft2(orbiter_fft2)), cmap='gray');
Transformed image with Fourier Transform

Voila! The image no longer has the unwanted line artifacts.
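For readers who want to experiment without the moon image, the steps above can be wrapped in a small self-contained sketch. The function name, the `keep` parameter, and the synthetic test image below are my own for illustration; the sketch zeroes the central vertical FFT column outside a small band around the center, which is the same suppression performed above (the article's snippet writes a small constant of 1 instead of 0, which is effectively the same for large-magnitude spectra).

```python
import numpy as np

def suppress_vertical_fft_line(img, keep=10):
    """Zero the central FFT column, except `keep` rows around the center
    (the center holds the low frequencies we want to preserve)."""
    fft = np.fft.fftshift(np.fft.fft2(img))
    r, c = fft.shape
    fft[:r // 2 - keep, c // 2] = 0   # above the kept center band
    fft[r // 2 + keep:, c // 2] = 0   # below the kept center band
    return np.abs(np.fft.ifft2(np.fft.ifftshift(fft)))

# Synthetic demo: a left-to-right gradient plus horizontal sine stripes.
rows = np.arange(256)
base = np.tile(np.linspace(0, 1, 256)[None, :], (256, 1))      # varies by column
stripes = 0.3 * np.sin(2 * np.pi * rows * 32 / 256)[:, None]   # varies by row
noisy = base + stripes
cleaned = suppress_vertical_fft_line(noisy)
```

Here the stripes sit at row-frequency 32, well outside the kept band, so they are removed, while the gradient (whose energy lies along the central horizontal row of the spectrum) survives untouched.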

Second, let us tackle the white balancing technique with another sample image. There are three white balancing algorithms that we will demonstrate — white patch algorithm, ground-truth algorithm, and gray world algorithm.

Sample image for White Balancing Technique

We apply the white-balancing technique to correct areas of an image that should appear white or neutral. For example, the image has hexagonal shelves that should appear white, but the shelves have a blue tint on them. Hence, we want to make the shelves truly white.

Let us try correcting with the white patch algorithm. This algorithm assumes that pure white corresponds to a value of 255 in each RGB channel. Thus, the white patch algorithm rescales the image based on the maximum value of each RGB channel. The code snippet below divides each channel by its maximum.

import numpy as np
import skimage.io as skio
from skimage import img_as_ubyte

access = skio.imread('wb_blue.png')
access_wp = img_as_ubyte(access*1.0 / access.max(axis=(0, 1)))
skio.imshow(access_wp);
print('Max value for red channel is: ' + str(np.max(access[:, :, 0])))
print('Max value for green channel is: ' + str(np.max(access[:, :, 1])))
print('Max value for blue channel is: ' + str(np.max(access[:, :, 2])))
Max value for red channel is: 255
Max value for green channel is: 255
Max value for blue channel is: 255

The maximum value of every channel is already 255, which means the algorithm changes nothing and the image stays the same as the original. Hence, we have to look at the histogram of the pixel values instead. Do not forget to import the matplotlib.pyplot library.

# Import library
import matplotlib.pyplot as plt

# Histogram of each channel, with the 95th percentile marked
for channel, color in enumerate('rgb'):
    channel_values = access[:, :, channel]
    plt.step(np.arange(256),
             np.bincount(channel_values.flatten(), minlength=256)*1.0 /
             channel_values.size,
             c=color)
    plt.axvline(np.percentile(channel_values, 95), ls='--', c=color)
plt.xlim(0, 255)
plt.xlabel('channel value')
plt.ylabel('fraction of pixels');
Image Histogram of the RGB channel

The histogram reveals that we can use the 95th percentile as our maximum value instead of the absolute maximum. Hence, we use a short loop to get the RGB values at the 95th percentile. Afterward, we can renormalize our pixel values with these new maxima.

# Getting the 95th percentile value of each channel
for channel, color in enumerate('RGB'):
    print('95th percentile of %s channel:' % channel,
          np.percentile(access[:, :, channel], 95))
# Output:
# 95th percentile of 0 channel: 94.0
# 95th percentile of 1 channel: 207.0
# 95th percentile of 2 channel: 254.0

# White patch algorithm with the 95th percentile values
access_wp2 = img_as_ubyte((access*1.0 / np.percentile(access, 95,
                          axis=(0, 1))).clip(0, 1))
skio.imshow(access_wp2)

The resulting image still has some blue tint, but it definitely looks better than the original. It would still need some further adjustments.
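As a quick sanity check of the idea, the percentile-based white patch can be condensed into one small function. The function name and the synthetic blue-tinted image below are my own, not from the article; this is a sketch, not the article's exact pipeline.

```python
import numpy as np

def white_patch(img, percentile=95):
    """Rescale each channel so its given percentile maps to full intensity."""
    scale = np.percentile(img, percentile, axis=(0, 1))   # per-channel scale
    return (img.astype(float) / scale).clip(0, 1)

# Synthetic image with a blue cast: the blue channel is systematically brighter
rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.6, size=(64, 64, 3))
img[:, :, 2] += 0.3   # add the blue tint
balanced = white_patch(img)
```

After balancing, the per-channel brightness levels sit much closer together than in the tinted original.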

Let us move on to the gray-world algorithm. This algorithm assumes that, on average, the pixels of an image are gray, which entails that the mean value of each channel should be the same. Thus, it is appropriate to use the overall mean pixel value as the basis for normalizing each channel. Using three code lines, we apply the gray-world algorithm to the same image.

access_gw = ((access * (access.mean() / access.mean(axis=(0, 1))))
             .clip(0, 255).astype(np.uint8))
skio.imshow(access_gw);

The image did improve compared to the original, but it looks a little “washed out.” This method may not be the best technique for this image.
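A minimal stand-alone version of the gray-world idea, on synthetic data rather than the article's image (the function name and test image are mine):

```python
import numpy as np

def gray_world(img):
    """Scale each channel so every channel mean equals the global mean."""
    mu = img.mean(axis=(0, 1))                 # per-channel means
    return (img * (img.mean() / mu)).clip(0, 1)

rng = np.random.default_rng(1)
img = rng.uniform(0.1, 0.5, size=(32, 32, 3))
img[:, :, 2] *= 1.8                            # simulate a blue cast
out = gray_world(img)
```

Since each channel is multiplied by `global_mean / channel_mean`, all three channel means end up equal to the original global mean, which is exactly the gray-world assumption.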

We may as well try the last white-balancing technique, the ground-truth algorithm. This algorithm uses a manually selected reference patch of the image, an area that we know should appear white. Each channel is then rescaled using statistics of the reference patch. For this example, we chose one of the white walls in the image as the reference patch.

from matplotlib.patches import Rectangle
fig, ax = plt.subplots()
ax.imshow(access)
ax.add_patch(Rectangle((370, 120), 50, 50, edgecolor='r', facecolor='none'))
access_patch = access[120:170, 370:420]
skio.imshow(access_patch);
Left: Image with Reference Patch; Right: Isolated reference patch

We use the reference patch to normalize each channel, dividing by either the maximum or the mean value of the corresponding patch channel.

# Maximum value
access_gt_max = (access*1.0 / access_patch.max(axis=(0, 1))).clip(0, 1)
skio.imshow(access_gt_max)

# Mean value
access_gt_mean = (access*1.0 / access_patch.mean(axis=(0, 1))).clip(0, 1)
skio.imshow(access_gt_mean)
Left: Maximum value; Right: Mean value

Both approaches have their advantages, but I would prefer to use the maximum value approach for this image. The light rays look more natural and not overexposed.
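The ground-truth approach can also be sketched as a single reusable function. The function name, `mode` parameter, and synthetic image below are my own illustration, using a hypothetical "known white" region in place of the article's wall patch.

```python
import numpy as np

def ground_truth_wb(img, patch, mode="max"):
    """Rescale each channel by the reference patch's per-channel max or mean."""
    stat = patch.max(axis=(0, 1)) if mode == "max" else patch.mean(axis=(0, 1))
    return (img.astype(float) / stat).clip(0, 1)

rng = np.random.default_rng(2)
img = rng.uniform(0.2, 0.7, size=(40, 40, 3))
img[:, :, 2] *= 1.3                      # simulate a blue cast
patch = img[5:15, 5:15]                  # hypothetical "known white" region
out_max = ground_truth_wb(img, patch, mode="max")
out_mean = ground_truth_wb(img, patch, mode="mean")
```

Dividing by the patch mean always produces a brighter result than dividing by the patch max, which is why the mean variant tends to look overexposed.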

On to the last image enhancement technique, histogram manipulation. This technique controls the exposure of an image. Different sensors and lighting conditions may cause our images to be either overexposed or underexposed. We will look at three approaches to manipulating the histogram, using an underexposed image as an example.

Underexposed Image

First, let us deal with histogram equalization using the code snippet below.

import numpy as np
import matplotlib.pyplot as plt
from skimage import img_as_ubyte
from skimage.io import imread, imshow
from skimage.color import rgb2gray
from skimage.exposure import histogram, cumulative_distribution

dark_image = imread('dark_image.jpg')  # placeholder filename for the underexposed image
dark_image_intensity = img_as_ubyte(rgb2gray(dark_image))
freq, bins = histogram(dark_image_intensity)
plt.step(bins, freq*1.0/freq.sum())
plt.xlabel('intensity value')
plt.ylabel('fraction of pixels');

As predicted, the histogram is right-skewed because most of the pixels have low intensity values. We want to make the distribution more uniform. We can check the cumulative distribution function (CDF) of the image against a target CDF with the code snippet below.

freq, bins = cumulative_distribution(dark_image_intensity)
target_bins = np.arange(256)
target_freq = np.linspace(0, 1, len(target_bins))
plt.step(bins, freq, c='b', label='actual cdf')
plt.plot(target_bins, target_freq, c='r', label='target cdf')
plt.plot([50, 50, target_bins[-11], target_bins[-11]],
         [0, freq[50], freq[50], 0],
         'k--',
         label='example lookup')
plt.legend()
plt.xlim(0, 255)
plt.ylim(0, 1)
plt.xlabel('intensity values')
plt.ylabel('cumulative fraction of pixels');

We will try to match the actual CDF to the target CDF in the code snippet below. We take the percentile of each intensity value in the actual CDF, then replace that intensity value with the intensity value at the same percentile in the target CDF. For example, 50 is the intensity value at the 95.97th percentile of the actual CDF. We replace this value of 50 with 244, the intensity value at the 95.97th percentile of the target CDF.

# Illustrative single-value replacement
dark_image_244 = dark_image_intensity.copy()
dark_image_244[dark_image_244 == 50] = 244
imshow(dark_image_244, cmap='gray');

# Full equalization by interpolating every intensity value
new_vals = np.interp(freq, target_freq, target_bins)
dark_image_eq = new_vals[dark_image_intensity].astype(np.uint8)
imshow(dark_image_eq);

The interpolation applies this lookup to every intensity value at once. As a result, the equalized image has much better exposure.
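The whole equalization step can be condensed into a small pure-NumPy function. This is a sketch with my own names and a synthetic dark image; it mirrors the bincount/CDF/interpolation logic above without the skimage helpers.

```python
import numpy as np

def equalize(img):
    """Histogram equalization: map each intensity so the empirical CDF
    becomes approximately uniform over [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    # an intensity whose CDF value is q maps to the target intensity ~255*q
    new_vals = np.interp(cdf, np.linspace(0, 1, 256), np.arange(256))
    return new_vals[img].astype(np.uint8)

# Synthetic underexposed image: every intensity is below 80
rng = np.random.default_rng(0)
dark = rng.integers(0, 80, size=(64, 64))
bright = equalize(dark)
```

The dark input, whose mean sits around 40, is spread across the full intensity range, pushing the mean toward the middle of [0, 255].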

Not all target distributions should be uniform; you may pick any distribution that better suits the image. For example, we can use a Gaussian distribution as the target, since it is a common choice.

from scipy.stats import norm

gaussian = norm(128, 64)
plt.plot(gaussian.pdf(np.arange(0, 256)))
plt.ylabel('PDF', color='b')
plt.twinx()
plt.plot(gaussian.cdf(np.arange(0, 256)), c='r')
plt.ylabel('CDF', color='r')
plt.xlim(0, 255)
plt.ylim(ymin=0)

new_vals = np.interp(freq, gaussian.cdf(np.arange(0, 256)), np.arange(0, 256))
dark_image_gauss = new_vals[dark_image_intensity].astype(np.uint8)
imshow(dark_image_gauss);

Matching to the Gaussian target lost some of the deep blacks, but some details are clearer and better balanced. This might be a good target distribution for other images.

We can also opt not to match the intensity values to any distribution at all. Instead, we can use another approach called contrast stretching, which rescales the intensities within a certain percentile range to the full intensity range. We will use skimage.exposure.rescale_intensity for this process.

from skimage.exposure import rescale_intensity

# For this example, stretch the values between the 5th and 95th percentiles
dark_image_contrast = rescale_intensity(dark_image_intensity,
                                        in_range=tuple(np.percentile(dark_image_intensity, (5, 95))))
imshow(dark_image_contrast);

freq, bins = histogram(dark_image_contrast)
plt.step(bins, freq*1.0/freq.sum())
plt.xlabel('intensity value')
plt.ylabel('fraction of pixels');
Left: Histogram; Right: Image that was transformed with contrast stretching

As the histogram shows, the pixel values were stretched toward 255 while the distribution kept its right skew. Hence, the image maintained its dark tones while making more details prominent. This method could definitely work too.
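Under the hood, rescale_intensity with an in_range is just a linear map followed by clipping. The pure-NumPy sketch below (function name and synthetic low-contrast image are mine) shows the same operation:

```python
import numpy as np

def stretch(img, low_pct=5, high_pct=95):
    """Linearly rescale the [low, high] percentile range to full [0, 255]."""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    out = (img.astype(float) - lo) / (hi - lo)   # map [lo, hi] -> [0, 1]
    return (out.clip(0, 1) * 255).astype(np.uint8)

# Synthetic low-contrast image: intensities squeezed into [60, 120)
rng = np.random.default_rng(3)
dull = rng.integers(60, 120, size=(48, 48)).astype(np.uint8)
crisp = stretch(dull)
```

Values below the 5th percentile clip to 0 and values above the 95th clip to 255, so the stretched image uses the full intensity range and has noticeably higher contrast.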

Overall, these three image enhancement techniques can save our photos.
