Shadow removal from historical aerial images
Exploring both remote sensing and computer vision-based methods
In optical remote sensing, shadows affect the accuracy of land cover mapping in both historical (black-and-white) and recent (full-color) images. Shadows are produced by the combination of sun illumination (solar azimuth and zenith angles), camera viewpoint (elevation), and terrain (slope and aspect profile). To minimize the effect of shadows on land cover mapping, methods for handling shadow regions are needed to improve the accuracy of vegetation cover mapping and subsequent change detection.
In remote sensing this process is usually called “topographic correction”, but the shadow removal discussed in this article is slightly different. Topographic correction aims to reduce or even remove the impact of terrain on the imaging as a whole, whereas shadow removal accounts only for the shadow regions, recomputing those pixels toward a “shadow-free” state; darker pixels on hilly slopes outside the detected shadows therefore remain as they are.
Sources of shadows
As mentioned in the introduction, shadows arise from the following sources:
- Sun illumination (solar azimuth and zenith angles)
- Camera viewpoint (elevation)
- Terrain (slope and aspect profile)
Typical remote sensing methods for shadow removal
A typical remote sensing workflow uses a digital terrain model to derive slope and aspect and thereby identify the shaded areas, then normalizes the illumination over slope and aspect with the Minnaert correction. After the Minnaert correction, the final step is to estimate the correction coefficient from the surrounding pixels and use it to remove the shadow from the aerial images.
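To make this workflow concrete, here is a minimal numpy sketch of a Minnaert-style correction, assuming the slope and aspect rasters come from a DTM and the solar angles are known. The function and variable names are illustrative, and k is estimated here with a single global log-log regression; the workflow described above instead estimates the coefficient from pixels surrounding the shadow regions, which would replace the global fit.

```python
import numpy as np

def minnaert_correction(band, slope, aspect, sun_zenith, sun_azimuth):
    """Minimal sketch of a Minnaert topographic correction (angles in radians).

    `band` is a 2-D array of pixel values; `slope` and `aspect` are 2-D
    arrays derived from the DTM. Names and thresholds are illustrative.
    """
    # Cosine of the local solar incidence angle from slope, aspect and sun position
    cos_i = (np.cos(sun_zenith) * np.cos(slope) +
             np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

    # Estimate the Minnaert constant k by linear regression in log space:
    # ln(L * cos(slope)) = ln(L_horizontal) + k * ln(cos_i * cos(slope))
    valid = (cos_i > 0.01) & (band > 0)
    x = np.log(cos_i[valid] * np.cos(slope[valid]))
    y = np.log(band[valid] * np.cos(slope[valid]))
    k, _ = np.polyfit(x, y, 1)  # the fitted slope of the regression is k

    # Apply the correction: pixels facing away from the sun are brightened
    corrected = band * (np.cos(sun_zenith) / np.clip(cos_i, 0.01, None)) ** k
    return corrected
```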
However, the simulated parameters such as the solar angles (azimuth and zenith) and the Minnaert correction may not effectively locate and remove shadows over hilly terrain (as shown in the figure). Two alternatives are multi-source data combination and the entropy estimation approach developed at Google.
Entropy estimation approach developed by Google
Illumination changes increase the entropy of the observed texture intensities, and texture in turn increases the entropy of the illumination function. Separating an image into its texture and illumination components can therefore be done by minimizing the entropies of both components together; minimizing one quantity alone would simply transfer the entire energy to the other. The constraint on the illumination entropy serves as a regularization, imposing smoothness on the illumination.
The method uses a non-parametric, kernel-based quadratic entropy formulation to estimate the texture and illumination densities, and an efficient multi-scale iterative optimization algorithm to minimize the resulting energy functional. It is particularly suitable for aerial images that contain distinctive texture patterns, such as building facades, or soft shadows with large diffuse regions, such as cloud shadows.
Separating the image components
The goal is to separate the observed image I(x, y) into its texture component R(x, y) and illumination component L(x, y).
Assume that the image I can be expressed as the sum of these two components (the usual intrinsic-image convention: the physical model is multiplicative, so this sum holds in the log domain):
I(x, y) = L(x, y) + R(x, y).
The texture component represents the underlying texture patterns in the image, while the illumination component represents the overall lighting conditions.
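As a quick sanity check of this model (assuming the intrinsic-image convention above, where the multiplicative model becomes additive after a log transform):

```python
import numpy as np

# Toy intrinsic-image model: observed = illumination * texture (multiplicative),
# which becomes the additive decomposition above after a log transform.
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, size=(64, 64))              # texture / reflectance
L = np.tile(np.linspace(0.3, 1.0, 64), (64, 1))       # smooth illumination ramp
I = L * R                                             # what the camera observes
assert np.allclose(np.log(I), np.log(L) + np.log(R))  # log I = log L + log R
```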
Entropy Formulation
Entropy is a measure of uncertainty or randomness in a random variable. The authors observe that any change in illumination tends to increase the diversity (entropy) of observed texture intensities, and the presence of texture increases the entropy of the illumination function. Therefore, they utilize the concept of entropy to formulate the separation of image components.
The entropy of a random variable X is denoted as H(X).
In this case, the authors consider the entropies of the observed image I, the texture component R, and the illumination component L. They state that the entropy of the observed image I is greater than or equal to the entropies of its components:
H(I) ≥ H(R) and H(I) ≥ H(L)
This inequality motivates the separation: among the possible decompositions of I into L and R, the method prefers the one in which both components have low entropy, so it minimizes the entropies of the texture and illumination components jointly.
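As a concrete illustration of the kernel-based quadratic entropy in this formulation, the following sketch estimates the quadratic (Rényi) entropy of a 1-D intensity sample with a Gaussian Parzen window; the bandwidth `sigma` and the toy signals are assumptions for demonstration, not values from the paper.

```python
import numpy as np

def quadratic_entropy(samples, sigma=0.05):
    """Kernel-based (Parzen window) estimate of the quadratic Renyi entropy:
    H2(X) = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2) ).
    The bandwidth `sigma` is an assumed value, not one from the paper."""
    x = np.asarray(samples, dtype=float).ravel()
    diff = x[:, None] - x[None, :]               # all pairwise differences
    kernel = np.exp(-diff**2 / (4 * sigma**2))   # Gaussian with variance 2*sigma^2
    kernel /= np.sqrt(4 * np.pi * sigma**2)      # kernel normalization constant
    return -np.log(kernel.mean())                # mean over pairs = information potential

# A flat patch has low entropy; an illumination ramp over the same patch raises it.
flat = np.full(256, 0.5)
shaded = flat * np.linspace(0.4, 1.0, 256)       # simulated smooth illumination change
print(quadratic_entropy(flat), quadratic_entropy(shaded))
```

The flat patch yields a lower entropy than the same patch under an illumination ramp, matching the observation that illumination changes increase the entropy of the observed intensities.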
Multi-source data combination
By combining data from multiple sources, such as satellite imagery and aerial photographs, more information becomes available for shadow removal. Different data sources may capture different aspects of the scene, and their combination can provide a more comprehensive understanding of the shading patterns and characteristics. Multi-source data fusion allows the integration of complementary information, which can lead to improved accuracy in shadow removal.
Image processing based method for shadow removal
Some shadow removal algorithms are already available to detect and remove shaded areas, such as the “image shadow remover”. This Python package uses Murali and Govindan’s approach to shadow detection and removal.
For shadow detection, the RGB image (bands) is first converted into the LAB color space, where shadow pixels have low values in both the L and B channels. Pixels are therefore classified as shadow if L < T1 and B < T2, where T1 and T2 are thresholds, and morphological operations are applied to refine the shadow mask. Only shadow regions with size > T3 are kept, to eliminate misclassifications. For shadow removal, each identified shadow region is then processed as follows (a code sketch of the full pipeline appears below):
- Calculate average R,G,B values inside shadow region: Rin, Gin, Bin
- Calculate average R,G,B values just outside shadow region: Rout, Gout, Bout
- Compute constant for each channel:
KR = Rout/Rin, KG = Gout/Gin, KB = Bout/Bin
- Multiply each pixel in shadow region by corresponding constant:
R’ = KR * R
G’ = KG * G
B’ = KB * B
This scales the color channels of the shadow regions to match the illumination levels outside the shadow. As post-processing, note that pixels near shadow edges are not as dark as interior shadow pixels, so multiplying them by the same constants can over-illuminate them; a median filter is applied to the edge pixels to eliminate this over-illumination.
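The following is a condensed sketch of this pipeline using OpenCV and numpy. It is not the actual “image shadow remover” package code: the thresholds (T1 and T2 taken here as channel mean minus standard deviation, T3 as a pixel count) and the kernel sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def remove_shadows(bgr, t3=100):
    """Condensed sketch of the LAB-based detection/removal pipeline above.
    Thresholds and kernel sizes are illustrative assumptions."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L, _, B = cv2.split(lab)

    # Detection: shadow pixels have low values in both the L and B channels
    t1, t2 = L.mean() - L.std(), B.mean() - B.std()
    mask = ((L < t1) & (B < t2)).astype(np.uint8)

    # Refine the raw mask with a morphological closing
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    out = bgr.astype(np.float64)
    n, labels = cv2.connectedComponents(mask)
    for i in range(1, n):                      # label 0 is the background
        region = labels == i
        if region.sum() <= t3:                 # keep regions with size > T3 only
            continue
        # Ring of pixels just outside the region gives the "outside" averages
        ring = cv2.dilate(region.astype(np.uint8), kernel).astype(bool) & ~region
        k = out[ring].mean(axis=0) / out[region].mean(axis=0)  # KR, KG, KB
        out[region] *= k                       # rescale shadow pixels per channel
    out = np.clip(out, 0, 255).astype(np.uint8)

    # Post-processing: median filter only near shadow boundaries, where the
    # rescaling tends to over-illuminate
    edges = cv2.dilate(mask, kernel).astype(bool) ^ cv2.erode(mask, kernel).astype(bool)
    out[edges] = cv2.medianBlur(out, 3)[edges]
    return out

result = remove_shadows(cv2.imread("aerial.jpg"))  # "aerial.jpg" is a placeholder
cv2.imwrite("aerial_no_shadow.jpg", result)
```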
Generally, the image processing based method is impressive, and its results are better than those of the remote sensing based methods such as multi-source data combination and the C-coefficient or Minnaert corrections. Still, there is a long way to go in enhancing these shadow removal results so that historical aerial images can be interpreted better for vegetation cover mapping and long-term change detection.
Reference list
Dare, P. (2005). Shadow analysis in high-resolution satellite imagery of urban areas. Photogrammetric Engineering & Remote Sensing, 71(2), 169–177. doi:10.14358/PERS.71.2.169
Kwatra, V., Han, M., & Dai, S. (2012). Shadow removal for aerial imagery by information theoretic intrinsic image analysis. IEEE International Conference on Computational Photography (ICCP). doi:10.1109/ICCPhot.2012.6215222
Murali, S., & Govindan, V. K. Removal of shadows from a single image. Proceedings of the First International Conference on Futuristic Trends in Computer Science and Engineering, Vol. 4.
Murali, S., & Govindan, V. K. (2013). Shadow detection and removal from a single image using LAB color space. Cybernetics and Information Technologies, 13(1), 95–103.
Acknowledgments
This is a volunteer pilot study from the Forestree team (Remote Sensing and Forestry), carried out to study close-range photogrammetry, image processing, and computer vision.