Looking at the right angle — an easy Sentinel-2 BRDF correction within Sentinel Hub

A simple implementation of angular normalisation for Sentinel-2 reflectance.

Maxim Lamare
Sentinel Hub Blog
7 min read · Apr 13, 2023


Satellite image of the Australian desert, where orbit stripes are visible
Sentinel-2 cloudless mosaic of the Northern Territory, Australia (November 2019). 🌐 EOBrowser.

Bidirectional reflectance distribution function or BRDF… you may not have heard of this term before, but you have already experienced it in your everyday life without knowing it. Have you walked along the seashore and observed the sparkling sunlight bouncing off the rippled surface of the water? Or maybe you have been blinded by the reflection of the sun bouncing off a glass building downtown? Congratulations, you have just observed the forward scattering of light from a surface (a.k.a. specular reflectance), and thus experienced BRDF!

Photo of the sea surface with sun reflecting off the water
An example of sun-glint on the Mediterranean Sea, Sardinia, 2015.

So what is BRDF exactly, and why does it matter when working with satellite images? Simply put, BRDF describes how light is reflected off opaque surfaces in different directions depending on the location of the light source (Professor Schaaf’s webpage provides an excellent introduction to the term with some great visuals). However, once you start digging further into the subject, it quickly gets very complicated, owing to the numerous parameters that influence BRDF (e.g. the optical/physical parameters of the surface). Moreover, the more pedantic of our readers will point out (and rightly so) that one cannot simply measure BRDF, as it is a theoretical quantity. For simplicity’s sake, we will stick to the term BRDF here, but if you want to know more about the exact terms and quantities measured in remote sensing, I recommend diving into the bible of BRDF written by Schaepman-Strub et al., 2006.
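For readers who like to see a formula: following the classical definition discussed in Schaepman-Strub et al., 2006, the BRDF is the ratio of the radiance reflected towards one direction to the irradiance arriving from another,

```latex
f_r(\theta_i, \varphi_i; \theta_r, \varphi_r)
  = \frac{\mathrm{d}L_r(\theta_r, \varphi_r)}{\mathrm{d}E_i(\theta_i, \varphi_i)}
  \qquad [\mathrm{sr}^{-1}]
```

where (θ_i, φ_i) and (θ_r, φ_r) are the zenith and azimuth angles of the incoming and reflected light. Being a ratio of infinitesimal quantities, it cannot be measured directly, which is exactly the pedantic point made above.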

Figure illustrating the angles used to describe BRDF.
Representation of the geometries commonly used to describe BRDF (Source: Schill et al., 2004).

In the field of remote sensing, BRDF most often comes into play when calculating spectral albedo, since computing the fraction of downwelling radiation reflected from a surface requires knowing the angular dependence of the reflectance (i.e. how surfaces scatter light in different directions). Most optical polar-orbiting satellites used by the community, such as Landsat or Sentinel-2, provide images with a single illumination and viewing geometry for each pixel. Therefore, to be able to derive albedo from these sensors, one needs to infer the angular distribution of the surface reflectance. The tricky part is that each surface type distributes light differently, as shown in the figure below. And that’s not taking into account other factors such as surface roughness! For an accurate calculation of albedo, you would need to know the BRDF for each surface type present in your image, and apply the right correction to the right pixels… good luck with that!

Polar plots of BRDF for different surfaces
Polar plots of the angular dependence of reflectance for 4 types of surfaces, nicely shown in Gatebe & King, 2016.
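To make the link with albedo concrete (and glossing over the direct vs. diffuse illumination distinction), the directional-hemispherical, or “black-sky”, spectral albedo is simply the BRDF integrated over all viewing directions of the upper hemisphere,

```latex
\alpha(\theta_i, \lambda)
  = \int_0^{2\pi} \int_0^{\pi/2}
    f_r(\theta_i, \varphi_i; \theta_r, \varphi_r; \lambda)\,
    \cos\theta_r \sin\theta_r \,\mathrm{d}\theta_r \,\mathrm{d}\varphi_r
```

which is why a sensor that samples only one viewing direction per pixel cannot give you albedo without a model of that angular behaviour.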

Knowing the BRDF of the surfaces observed in a satellite image is also useful for normalising images covering large areas, since pixels on one side of the image are observed at different angles than pixels on the other side. The same applies to the normalisation of image composites where multiple orbits overlap, such as in the image below.

Satellite image of the Greenland ice-sheet with a clear difference between orbits
Can you spot the orbit change? Sentinel-2 L2A images acquired from 2 orbits over the Greenland ice-sheet east of Ilulissat (July 2022). 🌐 EOBrowser.

So why bother writing a blog post about the normalisation of angular reflectance, if it is so complicated and tedious? Well, it turns out that a lot of clever people have been thinking about the problem over the last decades, and some have come up with approximations that — overall — do quite a good job. The idea of implementing a BRDF normalisation for Sentinel-2 scenes in Sentinel Hub was sparked by a forum post that was published around the same time as a Medium blog post on how to create global mosaics. After carefully reading through the scientific literature on the subject, our Sentinel Hub team in Austria thought it would be fun to take on the challenge and propose an algorithm directly usable in your web browser!

Two colleagues from Sentinel Hub Austria working on a computer.
We love this kind of geeky challenge in our Austrian Sentinel Hub team! Here, Max and Dorothy are hard at work solving the BRDF problem!

The method we chose, because of its universality and ease of implementation, was the semi-empirical “c-factor” BRDF normalisation of Roy et al. 2016, which builds on the kernel-driven model described by Lucht et al. 2000. This approach simplifies the complex theory by making assumptions and approximations, and describes BRDF using the Ross–Li model, a weighted combination of three components (called kernels) describing surface reflectance: isotropic scattering, volumetric scattering, and geometric-optical surface scattering (see equations 37, 38 and 39 in the paper). To avoid lookup tables and other complex modelling approaches that could not be implemented in an Evalscript, we use the fixed BRDF spectral model parameters, derived from “the global year of highest quality snow-free MODIS BRDF product”, that Roy et al. 2017 conveniently defined for Sentinel-2 based on the Landsat parameters of Roy et al. 2016. Of course, the chosen method is not perfect, as pointed out by our friends at CESBIO in France, but it is universally applicable and does show decent results in a lot of cases.
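For the curious, here is a rough sketch of the maths the Evalscript has to implement, written in TypeScript for readability. This is not the actual Custom Script: the function names are ours, the coefficient values at the bottom are placeholders, and we assume the standard Ross-Thick and Li-Sparse-Reciprocal kernel forms; the real per-band weights come from Table 1 of Roy et al. 2017.

```typescript
// A minimal sketch (not the Sentinel Hub Custom Script) of the c-factor
// BRDF normalisation built on the Ross-Li kernel model
// (Lucht et al. 2000; Roy et al. 2016/2017). All angles are in radians.

interface BrdfCoefficients {
  fIso: number; // weight of the isotropic component
  fVol: number; // weight of the volumetric (Ross-Thick) kernel
  fGeo: number; // weight of the geometric-optical (Li-Sparse-Reciprocal) kernel
}

// Cosine of the scattering phase angle between sun and view directions.
function cosPhase(sunZen: number, viewZen: number, relAzi: number): number {
  return (
    Math.cos(sunZen) * Math.cos(viewZen) +
    Math.sin(sunZen) * Math.sin(viewZen) * Math.cos(relAzi)
  );
}

// Ross-Thick volumetric scattering kernel.
function rossThick(sunZen: number, viewZen: number, relAzi: number): number {
  const cosXi = Math.min(1, Math.max(-1, cosPhase(sunZen, viewZen, relAzi)));
  const xi = Math.acos(cosXi);
  return (
    ((Math.PI / 2 - xi) * cosXi + Math.sin(xi)) /
      (Math.cos(sunZen) + Math.cos(viewZen)) -
    Math.PI / 4
  );
}

// Li-Sparse-Reciprocal geometric-optical kernel, with the usual shape
// parameters h/b = 2 and b/r = 1 (so the "primed" angles equal the originals).
function liSparseR(sunZen: number, viewZen: number, relAzi: number): number {
  const secS = 1 / Math.cos(sunZen);
  const secV = 1 / Math.cos(viewZen);
  const tanS = Math.tan(sunZen);
  const tanV = Math.tan(viewZen);
  const cosXi = cosPhase(sunZen, viewZen, relAzi);
  const dSq = tanS ** 2 + tanV ** 2 - 2 * tanS * tanV * Math.cos(relAzi);
  const cosT = Math.min(
    1,
    Math.max(
      -1,
      (2 * Math.sqrt(dSq + (tanS * tanV * Math.sin(relAzi)) ** 2)) / (secS + secV)
    )
  );
  const t = Math.acos(cosT);
  const overlap = ((t - Math.sin(t) * cosT) * (secS + secV)) / Math.PI;
  return overlap - secS - secV + 0.5 * (1 + cosXi) * secS * secV;
}

// Kernel-driven reflectance model: f_iso + f_vol * K_vol + f_geo * K_geo.
function modelledReflectance(
  c: BrdfCoefficients,
  sunZen: number,
  viewZen: number,
  relAzi: number
): number {
  return (
    c.fIso +
    c.fVol * rossThick(sunZen, viewZen, relAzi) +
    c.fGeo * liSparseR(sunZen, viewZen, relAzi)
  );
}

// c-factor normalisation to nadir view: rho_NBAR = c * rho_observed, with
// c = model(sun zenith, nadir view) / model(sun zenith, actual geometry).
function normaliseToNadir(
  observed: number,
  coeffs: BrdfCoefficients,
  sunZen: number,
  viewZen: number,
  relAzi: number
): number {
  const cFactor =
    modelledReflectance(coeffs, sunZen, 0, 0) /
    modelledReflectance(coeffs, sunZen, viewZen, relAzi);
  return observed * cFactor;
}

// Example with placeholder weights (NOT the published values): take the real
// per-band coefficients from Table 1 of Roy et al. 2017.
const redPlaceholder: BrdfCoefficients = { fIso: 0.17, fVol: 0.06, fGeo: 0.02 };
console.log(normaliseToNadir(0.25, redPlaceholder, 0.6, 0.1, 2.5).toFixed(4));
```

The important point is that the whole correction boils down to a handful of trigonometric operations per pixel and per band, which is what makes it light enough to run inside an Evalscript in your browser.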

Theory is nice… but now show us some results!

To visually assess the corrections applied in our Evalscript, we picked a selection of scenes acquired across the world, each with different surface types (sand, forest, grassland…). For each of the examples we provide a Copernicus Browser link (read more about the Copernicus Data Space Ecosystem in our Medium post) to the uncorrected and corrected images in the caption, so that you can explore the scenes yourself.

In the following scenes, acquired over the Australian desert, a strong distinction between 2 orbits can be seen in the uncorrected True Colour composite, despite the images being taken only 2 days apart. The BRDF correction is — in our opinion — quite impressive over the distinctive red sands found in this part of the world!

Animated before/after normalisation of a satellite image of the Australian desert
Desert on the border between Queensland and South Australia, Australia (31/03/2022). 🌐 Uncorrected / Corrected.

The correction also seems to work well over forested areas, as shown in the following scenes acquired over a mix of forest and savanna on the border between Brazil and Paraguay. The edge effects observed between two orbits are significantly reduced, making the image composite look more uniform.

Animated before/after normalisation of a satellite image of forest and savanna in South America
Mix of forest and savanna on the border between Brazil and Paraguay (31/03/2022). 🌐 Uncorrected / Corrected.

We could have stopped here and only shown cases where the correction applied to True Colour composites produces nice results. However, this is not always the case, as shown in the following two examples, and it is only fair that we also show the limitations of the method!

Over grassland scenes in the USA, the correction does help remove some of the angular differences in reflectance from one orbit to another; however, artifacts of the correction are still visible in the corrected product. The “staircase” effect that can be seen in the image below is due to the angular data having a pixel spacing of 5000 m and not matching up perfectly at the edge of the image. It is also noticeable that the correction breaks down over water bodies, as seen over Lake Lowell in the southern part of the image. That said, the same problem is already visible in the uncorrected images.

Animated before/after normalisation of a satellite image of the grassland in the USA
Grassland east of Boise, Idaho, USA (27/06/2022). 🌐 Uncorrected / Corrected.

In our last example, over the Nevada desert in the USA, the BRDF correction hardly plays a role in normalising the angular reflectance effects at the border between two orbits, showing that the method isn’t perfect and cannot be applied blindly to all surfaces. We didn’t show any examples of snowy scenes here either: since the algorithm was calibrated for snow-free surfaces with high sun angles, and snow has a very peculiar forward-scattering response, the correction doesn’t work well at high latitudes or over snow-covered surfaces.

Animated before/after normalisation of a satellite image of the desert in the USA
Nevada desert, USA (24/11/2022). 🌐 Uncorrected / Corrected.

Despite some limitations, we still believe that having a reflectance normalisation script that works in Copernicus Browser or Sentinel Hub’s EO Browser is useful for many applications, and we hope to see the community put it to use.

Now that you know all about BRDF, it’s your turn to normalise!

If you want to apply the Evalscript to your workflow, you can simply copy it from our Custom Scripts page and adjust the list of bands you want to normalise!
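As a rough illustration of what adjusting the bands could look like, here is a small hypothetical snippet reusing the normaliseToNadir sketch shown earlier. The band names follow Sentinel-2 conventions, but the structure and coefficient values are placeholders, not those of the actual Custom Script.

```typescript
// Hypothetical example of restricting the normalisation to a few bands,
// reusing BrdfCoefficients and normaliseToNadir from the sketch above.
// Coefficient values are placeholders: use the per-band table from Roy et al. 2017.
const coefficients: Record<string, BrdfCoefficients> = {
  B02: { fIso: 0.08, fVol: 0.04, fGeo: 0.01 },
  B03: { fIso: 0.13, fVol: 0.06, fGeo: 0.02 },
  B04: { fIso: 0.17, fVol: 0.06, fGeo: 0.02 },
};

function normaliseBands(
  reflectances: Record<string, number>,
  bands: string[],
  sunZen: number,
  viewZen: number,
  relAzi: number
): Record<string, number> {
  const out: Record<string, number> = {};
  for (const band of bands) {
    if (!(band in coefficients)) continue; // skip bands without coefficients
    out[band] = normaliseToNadir(
      reflectances[band], coefficients[band], sunZen, viewZen, relAzi
    );
  }
  return out;
}

// e.g. normalise only the True Colour bands
const corrected = normaliseBands(
  { B02: 0.05, B03: 0.08, B04: 0.12 }, ["B02", "B03", "B04"], 0.6, 0.1, 2.5
);
console.log(corrected);
```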

We also look forward to hearing your comments on the method, and if you have suggestions for improvements, don’t hesitate to contribute to our GitHub repository or start a discussion in our community forum.

Many thanks to Jonas Viehweger and Adrian Di Paulo from the Sentinel Hub team, who helped turn the equations into an Evalscript!
