Image Equalization (Contrast Enhancing) in Python

Sameer · Published in Analytics Vidhya · Nov 27, 2020

I have been practicing image processing for quite a while now, manipulating images (image matrices, to be precise). In doing so, I got to explore equalization methods for images, which enhance the contrast so that the manipulated image looks better than the original. This technique is termed Histogram Equalization.

Often, a captured image does not match the natural view of the scene, so post-processing is needed to bring it closer. Histogram Equalization (Normalization) is one such technique: it enhances the contrast by tweaking the pixel values of the image.

An example can be seen below — original image and equalized image.

If we were to plot the image histograms, they would look something like this:

Credits: The above images were taken from the Internet to illustrate the examples.

Importance of Histogram Equalization

  • This method works well for both bright and dark images; it is especially important in medical science, for example when analyzing X-ray images.
  • It is also very useful in viewing scientific images like thermal images and satellite images.

Implementation

In this article, I will implement this method both by using the OpenCV library and from scratch, with just NumPy and Matplotlib. I would have liked to avoid NumPy as well, but the computation would then take far too long.

Image by Author

Note: For the from-scratch version, I will use OpenCV only to read the image and nothing else.

I have taken the Lena image for testing the functions and saved it in my working directory.

Import the Requirements
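The only requirements are cv2 (OpenCV), NumPy, and Matplotlib. A minimal set of imports might look like this:

import cv2                       # reading images and the built-in equalizer
import numpy as np               # matrix operations for the from-scratch version
import matplotlib.pyplot as plt  # plotting the original and equalized images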

Read the Image
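A minimal sketch of the reader; the function name read_this and its gray_scale flag are my own naming, not fixed by the original code:

def read_this(image_file, gray_scale=False):
    # cv2 loads images in BGR order, so convert to RGB (or to grayscale)
    image_src = cv2.imread(image_file)
    if gray_scale:
        return cv2.cvtColor(image_src, cv2.COLOR_BGR2GRAY)
    return cv2.cvtColor(image_src, cv2.COLOR_BGR2RGB)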

The above function reads the image either in gray_scale or RGB and returns the image matrix.

Code Implementation with Library

For equalizing, we can simply use the equalizeHist() method available in the cv2 library. There are two cases to handle, plus the plotting (a sketch of the code follows the list below):

  1. When the image is read in RGB.
  • Separate the channels based on the color combination, using the split() method available in the cv2 library.
  • Apply the equalization method to each channel matrix.
  • Merge the equalized matrices back together with the merge() method available in the cv2 library.
  2. When the image is read in gray_scale, apply the equalization directly to the single matrix.
  3. Plot the original image and the equalized image.
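A sketch of these steps; the function name equalize_this and its parameters are my own, reusing read_this from above:

def equalize_this(image_file, with_plot=False, gray_scale=False):
    image_src = read_this(image_file=image_file, gray_scale=gray_scale)
    if not gray_scale:
        # split into R, G, B channels, equalize each, then merge them back
        r, g, b = cv2.split(image_src)
        image_eq = cv2.merge((cv2.equalizeHist(r),
                              cv2.equalizeHist(g),
                              cv2.equalizeHist(b)))
        cmap_val = None
    else:
        image_eq = cv2.equalizeHist(image_src)
        cmap_val = 'gray'

    if with_plot:
        fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
        ax1.set_title('Original')
        ax1.imshow(image_src, cmap=cmap_val)
        ax2.set_title('Equalized')
        ax2.imshow(image_eq, cmap=cmap_val)
        plt.show()
    return image_eq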

Let’s test the above function —
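For example, assuming the Lena image is saved as lena_original.png (the filename is my assumption):

equalize_this(image_file='lena_original.png', with_plot=True)
equalize_this(image_file='lena_original.png', with_plot=True, gray_scale=True)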

[Plots of the original and equalized images, RGB and grayscale (Images by Author)]

The above plots are clear and we can say that the equalized images look better than the original images. This was implemented using the cv2 library.

Code Implementation from Scratch

For this, I am using NumPy for all the matrix operations. Again, we could do it with for loops, but that would take much longer to compute. Here too we have the same two cases as before, plus the plotting:

  1. When the image is read in RGB.
  • Separate the channels based on the color combination. We can slice them out with NumPy indexing.
  • Apply the equalization method to each channel matrix.
  • Merge the equalized matrices back together with the dstack(tup=()) method available in NumPy.
  2. When the image is read in gray_scale, apply the equalization directly to the single matrix.
  3. Plot the original image and the equalized image.

Let’s write our own function to compute the image equalization. Image pixel values normally lie in the range 0 to 255, so in total there are 256 possible intensity levels.
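A sketch of the per-channel equalizer, following the usual histogram/CDF recipe; the helper name enhance_contrast is my own:

def enhance_contrast(image_matrix, bins=256):
    # histogram over the 256 possible intensity levels of an 8-bit image
    image_flattened = image_matrix.flatten()
    image_hist = np.bincount(image_flattened, minlength=bins)

    # cumulative distribution, rescaled to the 0-255 range
    cum_sum = np.cumsum(image_hist)
    norm_cdf = (cum_sum - cum_sum.min()) * 255 / (cum_sum.max() - cum_sum.min())
    norm_cdf = norm_cdf.astype('uint8')

    # map every original pixel through the normalized CDF
    return norm_cdf[image_flattened].reshape(image_matrix.shape)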

Credits: The above code is inspired by an article written by Tory Walker.

The above function returns an equalized image matrix when passed the original image matrix as an argument.

Let’s write another function that computes the equalization for both the RGB image and the gray_scale image, using the above function.
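A sketch of that wrapper; equalize_this_scratch is again my own name, reusing read_this and enhance_contrast from above:

def equalize_this_scratch(image_file, with_plot=False, gray_scale=False):
    image_src = read_this(image_file=image_file, gray_scale=gray_scale)
    if not gray_scale:
        # slice out the R, G, B channels, equalize each, then stack them back
        r_image = enhance_contrast(image_matrix=image_src[:, :, 0])
        g_image = enhance_contrast(image_matrix=image_src[:, :, 1])
        b_image = enhance_contrast(image_matrix=image_src[:, :, 2])
        image_eq = np.dstack(tup=(r_image, g_image, b_image))
        cmap_val = None
    else:
        image_eq = enhance_contrast(image_matrix=image_src)
        cmap_val = 'gray'

    if with_plot:
        fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
        ax1.set_title('Original')
        ax1.imshow(image_src, cmap=cmap_val)
        ax2.set_title('Equalized (scratch)')
        ax2.imshow(image_eq, cmap=cmap_val)
        plt.show()
    return image_eq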

Let’s test the above function —
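As before, assuming the image file is lena_original.png:

equalize_this_scratch(image_file='lena_original.png', with_plot=True)
equalize_this_scratch(image_file='lena_original.png', with_plot=True, gray_scale=True)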

[Plots of the original and equalized images from the scratch implementation, RGB and grayscale (Images by Author)]

The above plots are clear and we can say that the equalized images look better than the original images. This was implemented from scratch using the NumPy library.

Comparison

Let’s compare the equalized image obtained from the cv2 library with the equalized image obtained from the code written from scratch.
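One way to put the two results side by side (a sketch reusing the functions defined above):

image_cv2 = equalize_this(image_file='lena_original.png')
image_scratch = equalize_this_scratch(image_file='lena_original.png')

fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.set_title('Equalized with cv2')
ax1.imshow(image_cv2)
ax2.set_title('Equalized from scratch')
ax2.imshow(image_scratch)
plt.show()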

Image by Author

We can notice a slight difference between the library image and the scratch image, but both look clear when compared with the original image. With that, I will close the article with my takeaways.

Takeaway

  • Personally, I learned a lot by exploring and implementing the different methods used to enhance image contrast, especially by writing the code from scratch while reading and referring along the way.
  • It is usually better to use the library methods, as they are more optimized and reliable.
  • Image processing is a crucial subject to learn, and it deserves to be practiced with curiosity and one’s own exploration.

Do give my other articles a read and let me know your thoughts:

  1. Image Mirroring and Flipping
  2. Image Convolution

If you liked it, you can buy me a coffee from here.
