Shadow elimination is widely used as a pre-processing operation in video surveillance applications such as environmental monitoring, motion detection, and security monitoring. Once detected, shadows in images can be used for moving-target detection in a video surveillance system, for estimating target shape and size, and for inferring the number of light sources and the illumination conditions in natural images. Ignoring the existence of shadows in images can degrade the output quality.
Many research studies on shadow detection and removal have been carried out over the last two decades.
We can remove shadows from images using simple OpenCV techniques. This is most useful when pre-processing images for OCR text detection. The techniques we use are:
- dilate
- medianBlur
- absdiff
- normalize
Dilation
- This operation consists of convolving an image A with some kernel B, which can have any shape or size, usually a square or circle.
- The kernel B has a defined anchor point, usually the center of the kernel.
- As the kernel B is scanned over the image, we compute the maximal pixel value overlapped by B and replace the image pixel at the anchor point position with that maximal value. As you can deduce, this maximizing operation causes bright regions within an image to "grow" (hence the name dilation).
img_dilation = cv2.dilate(img, kernel, iterations=1)
MedianBlur
Blurs an image using the median filter.
The function smoothes an image using the median filter with the ksize×ksize aperture. Each channel of a multi-channel image is processed independently. In-place operation is supported.
dst = cv2.medianBlur(src, ksize[, dst])
https://docs.opencv.org/3.4/d4/d86/group__imgproc__filter.html#ga564869aa33e58769b4469101aac458f9
Absdiff
Calculates the per-element absolute difference between two arrays or between an array and a scalar.
dst = cv2.absdiff(src1, src2[, dst])
https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html
Normalize
Normalizes the norm or value range of an array.
cv2.normalize(src[, dst[, alpha[, beta[, norm_type[, dtype[, mask]]]]]]) → dst
Putting the steps together:

import cv2
import numpy as np

img = cv2.imread("shadow_image.jpg")

def shadow_remove(img):
    rgb_planes = cv2.split(img)
    result_norm_planes = []
    for plane in rgb_planes:
        # Dilate to wipe out dark text/foreground, keeping the
        # background (including its shadows)
        dilated_img = cv2.dilate(plane, np.ones((7, 7), np.uint8))
        # Smooth the background estimate with a large median filter
        bg_img = cv2.medianBlur(dilated_img, 21)
        # Difference from the background, inverted so the page is bright again
        diff_img = 255 - cv2.absdiff(plane, bg_img)
        # Stretch the result back to the full 0-255 range
        norm_img = cv2.normalize(diff_img, None, alpha=0, beta=255,
                                 norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1)
        result_norm_planes.append(norm_img)
    return cv2.merge(result_norm_planes)

# Shadow removal
shad = shadow_remove(img)
cv2.imwrite('after_shadow_remove1.jpg', shad)
Conclusion
The main conclusion was that only the simplest methods were suitable for generalization, but in almost every particular scenario the results could be significantly improved by adding assumptions. As a consequence, there is no single robust shadow detection technique, and it is better for each particular application to develop its own technique according to the nature of the scene. Since then, many methods have been proposed based on color models, regions, learning, and invariant image models.