Adjust Local Brightness for Image Augmentation

Zong Fan
3 min read · Jun 25, 2018

Adjusting the brightness, contrast, and hue of an image is a common augmentation method widely used in image processing. But in some cases we’d like to add shadows or bright spots to an image to simulate the complicated lighting conditions found in real situations. Such lighting manipulation can help boost the diversity of input images without the extra effort of recording more images in real environments. Some recent work uses deep learning to achieve a more faithful simulation, such as the face pose and lighting augmentation for face recognition in this work:

Source: Dataset Augmentation for Pose and Lighting Invariant Face Recognition

But that work needs 3D model reconstruction, which is itself a very complicated task. So here we introduce a simple local augmentation method using OpenCV only. Of course the generated images may not always look reasonable or realistic, but in the end, model performance is the best criterion for judging whether an augmentation method is good or not. At least in our case, this simple method works pretty well!

Here we assume there are two kinds of light source: parallel light and spot light. In the former case, the light intensity decays as the distance from the light strip increases; in the latter case, the light intensity decays radially from the spot light center.

The basic synthesis idea is:

  1. Generate a brightness mask emitted from the light source
  2. Apply this mask to the initial image and merge them

1. Parallel light (Single source allowed)

Brightness Decay Function

First, we define how the light intensity decays. We propose two ways: Gaussian or linear. In the former case the light intensity follows a Gaussian distribution, while in the latter the decay rate is constant.

from scipy.stats import norm


def _decayed_value_in_norm(x, max_value, min_value, center, range):
    """
    Decay from max_value to min_value following a Gaussian/normal distribution.
    """
    # Treat `range` as roughly three standard deviations of the distribution
    radius = range / 3
    center_prob = norm.pdf(center, center, radius)
    x_prob = norm.pdf(x, center, radius)
    x_value = (x_prob / center_prob) * (max_value - min_value) + min_value
    return x_value


def _decayed_value_in_linear(x, max_value, padding_center, decay_rate):
    """
    Decay from max_value with a constant (linear) decay rate.
    """
    x_value = max_value - abs(padding_center - x) * decay_rate
    if x_value < 0:
        # Clamp to a minimal positive brightness
        x_value = 1
    return x_value

x: the input location (distance from the light source) at which to calculate the light intensity.
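
To get a feel for the two decay modes, here is a quick illustrative check; the distances and parameters below are arbitrary examples, not values from the original post:

# Gaussian decay: brightness falls from 255 toward 0 over a range of 300 px
for d in (0, 50, 100, 150):
    print("gaussian, distance %3d: %.1f"
          % (d, _decayed_value_in_norm(d, 255, 0, 0, 300)))

# Linear decay: brightness drops by 0.5 per pixel away from the light strip
for d in (0, 50, 100, 150):
    print("linear,   distance %3d: %.1f"
          % (d, _decayed_value_in_linear(d, 255, 0, 0.5)))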

Mask Generation
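
The generate_parallel_light_mask helper is embedded as a code gist in the original post and is not reproduced in the text here. Below is a minimal sketch of what it could look like, consistent with how add_parallel_light calls it and reusing the decay functions above; the random defaults and the exact distance computation are my assumptions, not necessarily the author’s implementation.

import random

import numpy as np


def generate_parallel_light_mask(mask_size, position=None, direction=None,
                                 max_brightness=255, min_brightness=0,
                                 mode="gaussian", linear_decay_rate=None):
    """
    Generate a (height, width) brightness mask for a parallel light strip.
    mask_size: (width, height) of the mask.
    position: (x, y) point on the light strip; random if None.
    direction: orientation of the strip in degrees; random if None.
    """
    width, height = mask_size
    if position is None:
        position = (random.randint(0, width), random.randint(0, height))
    if direction is None:
        direction = random.randint(0, 359)
    if linear_decay_rate is None:
        linear_decay_rate = random.uniform(0.2, 2.0)
    # Perpendicular distance of every pixel from the light strip
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    theta = np.deg2rad(direction)
    dist = np.abs((xs - position[0]) * np.sin(theta)
                  - (ys - position[1]) * np.cos(theta))
    decay_range = max(dist.max(), 1.0)
    if mode == "gaussian":
        mask = _decayed_value_in_norm(dist, max_brightness, min_brightness,
                                      0, decay_range)
    else:
        mask = np.vectorize(_decayed_value_in_linear)(
            dist, max_brightness, 0, linear_decay_rate)
    return np.clip(mask, 0, 255)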

Merge Mask into Image

import random

import cv2
import numpy as np


def add_parallel_light(image, light_position=None, direction=None, max_brightness=255, min_brightness=0,
                       mode="gaussian", linear_decay_rate=None, transparency=None):
    """
    Add a mask generated from a parallel light source to the given image.
    `image` is a file path readable by cv2.imread.
    """
    if transparency is None:
        transparency = random.uniform(0.5, 0.85)
    frame = cv2.imread(image)
    height, width, _ = frame.shape
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Build the brightness mask emitted by the parallel light source
    mask = generate_parallel_light_mask(mask_size=(width, height),
                                        position=light_position,
                                        direction=direction,
                                        max_brightness=max_brightness,
                                        min_brightness=min_brightness,
                                        mode=mode,
                                        linear_decay_rate=linear_decay_rate)
    # Blend the mask into the V (brightness) channel of the HSV image
    hsv[:, :, 2] = hsv[:, :, 2] * transparency + mask * (1 - transparency)
    frame = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    frame[frame > 255] = 255
    frame = np.asarray(frame, dtype=np.uint8)
    return frame
Left: initial image; Right: image after parallel light augmentation
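
For reference, the function takes an image path and returns the augmented frame; the file names below are only placeholders:

augmented = add_parallel_light("input.jpg", mode="gaussian")
cv2.imwrite("augmented_gaussian.jpg", augmented)

# Linear decay with an explicit decay rate
augmented = add_parallel_light("input.jpg", mode="linear", linear_decay_rate=0.5)
cv2.imwrite("augmented_linear.jpg", augmented)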

2. Spot light (Multiple Sources Allowed)
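
The spot light code is likewise embedded as a gist in the original post. A hedged sketch of the idea, where brightness decays radially from each light center (following the circular-gradient idea in the reference at the end) and multiple sources are merged by keeping the brightest contribution at each pixel, could look like the following; the function names add_spot_light / generate_spot_light_mask and the random defaults are assumptions, and the imports and decay helpers defined above are reused.

def generate_spot_light_mask(mask_size, position=None, max_brightness=255,
                             min_brightness=0, mode="gaussian",
                             linear_decay_rate=None):
    """
    Generate a brightness mask for one or more spot light sources.
    position: list of (x, y) light centers; one random center if None.
    """
    width, height = mask_size
    if position is None:
        position = [(random.randint(0, width), random.randint(0, height))]
    if linear_decay_rate is None:
        linear_decay_rate = random.uniform(0.25, 1.0)
    mask = np.zeros((height, width), dtype=np.float64)
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    for center in position:
        # Radial distance of every pixel from this light center
        dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
        decay_range = max(dist.max(), 1.0)
        if mode == "gaussian":
            values = _decayed_value_in_norm(dist, max_brightness,
                                            min_brightness, 0, decay_range)
        else:
            values = max_brightness - dist * linear_decay_rate
        # Where spots overlap, keep the brightest contribution
        mask = np.maximum(mask, values)
    return np.clip(mask, 0, 255)


def add_spot_light(image, light_position=None, max_brightness=255,
                   min_brightness=0, mode="gaussian",
                   linear_decay_rate=None, transparency=None):
    """Add a mask generated from spot light source(s) to the given image."""
    if transparency is None:
        transparency = random.uniform(0.5, 0.85)
    frame = cv2.imread(image)
    height, width, _ = frame.shape
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = generate_spot_light_mask(mask_size=(width, height),
                                    position=light_position,
                                    max_brightness=max_brightness,
                                    min_brightness=min_brightness,
                                    mode=mode,
                                    linear_decay_rate=linear_decay_rate)
    # Blend the mask into the V (brightness) channel, as in the parallel case
    hsv[:, :, 2] = hsv[:, :, 2] * transparency + mask * (1 - transparency)
    frame = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return np.asarray(frame, dtype=np.uint8)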

Left: initial image; Right: image after spot light augmentation

Summary:

In some cases the generated image may look weird when the light source location is not suitable. But in any case, image preprocessing is an important part of training CV models, and perhaps such “bad” images can help improve model robustness, just like other image jitter methods.

Reference:

  1. https://stackoverflow.com/questions/46466563/fill-circle-with-gradient/46466875
