Detect Face in Bad Lighting Condition Using Color Histograms

zong fan
Mar 25, 2018


Face recognition is an awesome computer vision technology that is widely used across many technology fields (em… it seems it has fed a lot of companies and developers). Besides the fantastic deep learning frameworks and the novel but efficient loss functions used for optimization during training (such as triplet loss, center loss, A-softmax loss, cosine loss and so on), one thing is very important: preparing a good face as your model input. Of course, there is no absolute standard for what makes a good face. But generally, a good face must be clear (not blurred or distorted) and well exposed (not too dark or over-bright). This post mainly tries to deal with the latter situation.

Figure 1. Left is a good face; middle is a face under strong bright sunlight; right is an over-exposed face

Normally, a dark image consists mainly of pixels whose values are close to 0; an over-exposed image, in contrast, is composed of large numbers of bright pixels with values close to 255. Let’s first look at a dark image and its color histogram (here we use the calcHist function from the opencv-python package to do this):

import cv2
import numpy as np
import matplotlib.pyplot as plt
frame = cv2.imread("test.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# gray = hsv[:, :, 2]
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
plt.figure()
plt.title("Grayscale Histogram")
plt.xlabel("Bins")
plt.ylabel("# of Pixels")
plt.plot(hist)
plt.xlim([0, 256])
plt.show()

A quick explanation:

  • First we convert the BGR image to grayscale, so each resulting value represents the grayscale intensity of the corresponding pixel (alternatively, you could convert to HSV format and use the last channel, V).
  • In the calcHist call, [0] means the first channel of the image is analyzed; [256] is the number of bins the selected plane is grouped into; [0, 256] is the range of values that will be grouped. See the OpenCV documentation for a detailed explanation. None is the mask argument, which we don’t use here (a mask restricts the histogram to a particular region of the input image).
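Under the hood, this unmasked single-channel histogram is just a count of pixels per intensity value. A minimal NumPy sketch (using a synthetic image as a stand-in for the grayscale face, so no OpenCV is needed) illustrates what calcHist computes:

```python
import numpy as np

# Synthetic 8-bit "grayscale image": a hypothetical stand-in for the
# cv2.cvtColor output above.
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)

# 256 bins over [0, 256) -- the NumPy analogue of
# cv2.calcHist([gray], [0], None, [256], [0, 256]).
hist = np.bincount(gray.ravel(), minlength=256)

# Every pixel lands in exactly one bin, so the counts sum to the pixel total.
assert hist.sum() == gray.size
```

The sum-to-total property is also a handy sanity check when you later slice the histogram into "dark" and "bright" ranges.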

Then the resulting histograms of the previous 3 sample images look like this:

Figure 2. Grayscale histogram of good face, dark face and over-exposed face.

As we can see, the good face image’s grayscale values peak at around 200 while still spreading over a wide range. In contrast, the dark face image concentrates at much lower values, around 0 to 20, and the over-exposed face image clusters near 255. So we just need to check the fraction of extremely low/high-value pixels relative to the total number of pixels: if the proportion exceeds some threshold, the input face can be treated as underexposed/overexposed.

Em, if the purpose is just to count the fraction of pixels within a specified value range, many OpenCV or NumPy functions can do it, such as the cv2.inRange function:

dark_thres = 0.4    # fraction of dark pixels that flags underexposure
bright_thres = 0.5  # fraction of bright pixels that flags overexposure
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
dark_part = cv2.inRange(gray, 0, 30)
bright_part = cv2.inRange(gray, 220, 255)
# or use the histogram from the previous snippet (inRange bounds are inclusive):
# dark_pixel = np.sum(hist[:31])
# bright_pixel = np.sum(hist[220:256])
total_pixel = np.size(gray)
dark_pixel = np.sum(dark_part > 0)
bright_pixel = np.sum(bright_part > 0)
if dark_pixel / total_pixel > dark_thres:
    print("Face is underexposed!")
if bright_pixel / total_pixel > bright_thres:
    print("Face is overexposed!")
  • cv2.inRange() returns a binary mask the same size as the input gray image, in which white pixels (255) mark pixels that fall within the lower and upper boundaries and black pixels (0) mark those that do not.
  • Here pixel values between 0 and 30 are treated as dark pixels, and 220 to 255 is the range for bright pixels.
  • If we use the hist obtained in the previous code snippet, a NumPy slice is a convenient alternative, since the bins are already ordered from 0 to 255. Note that cv2.inRange’s bounds are both inclusive, so hist[:31] (values 0 to 30) is the slice that matches inRange(gray, 0, 30).
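As a sanity check that the two routes agree, here is a pure-NumPy sketch on a synthetic image, with a boolean mask standing in for cv2.inRange (whose lower and upper bounds are both inclusive):

```python
import numpy as np

rng = np.random.default_rng(1)
gray = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)

# 256-bin histogram, as in the earlier snippet.
hist = np.bincount(gray.ravel(), minlength=256)

# NumPy stand-ins for cv2.inRange(gray, 0, 30) and inRange(gray, 220, 255):
# both bounds inclusive, just like inRange.
dark_mask = (gray >= 0) & (gray <= 30)
bright_mask = (gray >= 220) & (gray <= 255)

# hist[:31] covers values 0..30, matching the inclusive upper bound.
assert dark_mask.sum() == hist[:31].sum()
assert bright_mask.sum() == hist[220:256].sum()

dark_fraction = dark_mask.sum() / gray.size
bright_fraction = bright_mask.sum() / gray.size
```

Counting the mask and summing the histogram slice give exactly the same pixel counts, so either can feed the threshold test.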

For the previous images: the good face’s dark-pixel fraction is 0.067 and its bright-pixel fraction is 0.2; the underexposed face’s are 0.4 and 0.009; the over-exposed image’s are 0.025 and 0.65. There it is! Our simple algorithm for judging underexposed and overexposed faces works as expected.

But anyway, this simple classifier is not robust enough to work properly across distinct environments. Extracting image features (HOG, CNN, …) and then training a classifier is a better choice if you have enough positive and negative samples.
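To make the idea concrete, here is a toy pure-NumPy sketch of that pipeline: a normalized intensity histogram as a hand-crafted feature vector, and a nearest-centroid rule as a stand-in for a real trained classifier (the synthetic images and the classifier choice are illustrative assumptions, not the post’s actual method):

```python
import numpy as np

def exposure_features(gray, bins=16):
    """Normalized intensity histogram: a simple hand-crafted feature vector."""
    hist = np.bincount((gray // (256 // bins)).ravel(), minlength=bins)
    return hist / hist.sum()

# Toy synthetic "faces": dark, normal and bright images (stand-ins for a
# real labeled dataset of face crops).
rng = np.random.default_rng(2)
dark = rng.integers(0, 40, size=(64, 64)).astype(np.uint8)
good = rng.integers(60, 220, size=(64, 64)).astype(np.uint8)
bright = rng.integers(230, 256, size=(64, 64)).astype(np.uint8)

# Nearest-centroid "classifier" over the feature vectors -- a toy substitute
# for training, say, an SVM on HOG features or a small CNN.
centroids = {name: exposure_features(img)
             for name, img in [("dark", dark), ("good", good), ("bright", bright)]}

def classify(gray):
    f = exposure_features(gray)
    return min(centroids, key=lambda k: np.linalg.norm(centroids[k] - f))
```

Unlike the fixed thresholds above, the decision boundary here comes from the data, which is what makes the learned approach more robust across lighting environments.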

Reference:

  1. https://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/
  2. https://docs.opencv.org/3.3.1/dd/d0d/tutorial_py_2d_histogram.html
