Hurricane Douglas cloud top pressure seen by Sentinel-5P TROPOMI on 24th July 2020 (🌐EO Browser). Sentinel-3 OLCI images are displayed where there is no Sentinel-5P data. This image spans approximately 4500 km!

Data Fusion - combine satellite datasets to unlock new possibilities!

Maxim Lamare
Planet Stories
12 min read · Aug 19, 2020


Authors: Max Kampen, Maxim Lamare and Monja Šebela

We are excited to announce that Sentinel Hub now supports data fusion! Merge data from different satellites in your processing and use the advantages of each dataset in a single result!

Nowadays, we have access to an incredible amount of data produced by a large fleet of satellite sensors constantly scrutinising the Earth’s surface. The variety of sensors employed, all with different temporal, spatial, and spectral characteristics, offers a wealth of information about Earth processes and allows for smarter decision-making in our fast-changing world. Data fusion expands these possibilities even further by combining measurements from different sensors, providing enriched information compared to exploiting the same data sources individually.

In this blog, we will start by exploring a few examples of data fusion using different satellite sources, and then we will show you how to create your own data fusion scripts in EO Browser and with the Sentinel Hub Process API to get you started.

Detection of built-up areas over Venice, Italy on 10th December 2019, using Sentinel-1, displaying all non-built-up pixels in Sentinel-2 True Color RGB. (🌐EO Browser)

Example Scripts

Replacing Sentinel-2 clouds with Sentinel-1 SAR data

True Color Sentinel-2 L1C image over Sevastopol, on 11th May 2020. (🌐EO Browser)

This custom script written by Pierre Markuse returns Sentinel-1 SAR data where our machine-learning-based cloud detection algorithm s2cloudless detected cloudy pixels in Sentinel-2 imagery. As you can see in the Sentinel-2 image above, a large portion of the Sevastopol area is obscured by hazy clouds.

The script makes use of the Sentinel-2 cloud mask and cloud probability bands, replacing clouded areas with Sentinel-1 data that is not hindered by atmospheric effects.

The resulting image combines True Color RGB pixels from Sentinel-2 with, in place of clouds, a Sentinel-1 visualisation highlighting terrain features in shades of yellow and green and water bodies in blue. Sentinel-1’s ability to penetrate clouds and even rain showers makes it a very powerful tool, especially for data fusion applications.

In this Sentinel-2 L1C image of Sevastopol on 11th May 2020, pixels detected as cloudy by the s2cloudless algorithm are replaced by a combination of Sentinel-1 VV and VH bands. (🌐EO Browser)

Data:

NDVI with Sentinel-2 and Sentinel-1

Cloud cover is the Achilles’ heel of optical satellite imagery, and a lot of effort is put into obtaining cloudless images. For most applications, the solution when faced with a cloudy image is to wait for the next acquisition. However, this option is not practical in regularly cloudy areas or if the object of study changes rapidly, as is the case when monitoring vegetation health. Indeed, a couple of cloudy days with unlucky timing could lead to gaps of several weeks in NDVI (Normalised Difference Vegetation Index) time-series, potentially missing critical periods of plant physiology.

Centre-pivot irrigation fields in Kansas, RGB Sentinel-2 image, acquired on 26th April 2019. (🌐EO Browser)
Sentinel-2 based NDVI of centre-pivot irrigation fields calculated from the image above. Values below 0 appear in shades of red, and values between 0–1 range from light green to dark green. A lot of information is lost on the right side of the image, due to clouds. (🌐EO Browser)

By combining optical imagery with SAR (Synthetic Aperture Radar), the shortcomings of cloud cover can be (partly) mitigated. Although the revisit frequency of Sentinel-1 is similar to that of Sentinel-2 (approximately 6 days at the equator), the additional data may be used to increase the temporal resolution of time-series regardless of local atmospheric conditions.

Roberto Filgueiras and his colleagues from the Federal University of Viçosa in Brazil recently published a scientific article on the relationship between radar backscattering data and NDVI. Applying the authors’ method, NDVI (an optical index) can be computed from Sentinel-1 images.

In this example script, we calculated NDVI from a Sentinel-2 scene over Kansas, USA, and replaced pixels masked by clouds (detected with Sentinel Hub’s cloud detector) with NDVI derived from a Sentinel-1 image acquired on the same day. Although the Sentinel-1 NDVI pixels are somewhat noisy, the data fusion product provides essential information on which fields are being cultivated.

Data fusion (Sentinel-2 + Sentinel-1) NDVI of centre-pivot irrigation fields in Kansas acquired on 26th April 2019. Values below 0 appear in shades of red. Data fusion allows retrieving information despite clouds. (🌐EO Browser)
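As a rough illustration of this approach, the evalscript sketched below computes optical NDVI where Sentinel-2 is clear and falls back to a radar-derived estimate under clouds. The linear model and its coefficients are placeholders, not the fitted values from Filgueiras et al., and the colour ramp is a simplified version of the one described above:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "S2L1C", bands: ["B04", "B08", "CLM"] },
      { datasource: "S1GRD", bands: ["VV", "VH"] }
    ],
    output: { bands: 3 }
  };
}

// Placeholder linear model relating the Sentinel-1 cross-ratio (VH/VV) to NDVI;
// the coefficients are hypothetical, see Filgueiras et al. for fitted values
function radarNdvi(s1) {
  return 0.2 + 1.2 * (s1.VH / s1.VV);
}

// Map NDVI to the ramp described above: negative values in red,
// 0-1 from light green to dark green
function toColor(ndvi) {
  if (ndvi < 0) return [0.9, 0.2, 0.2];
  return [0.6 * (1 - ndvi), 1 - 0.6 * ndvi, 0.6 * (1 - ndvi)];
}

function evaluatePixel(samples) {
  const s2 = samples.S2L1C[0];
  const cloudy = s2.CLM === 1; // Sentinel Hub cloud mask band
  const ndvi = cloudy
    ? radarNdvi(samples.S1GRD[0])            // cloudy: fall back to radar
    : (s2.B08 - s2.B04) / (s2.B08 + s2.B04); // clear: optical NDVI
  return toColor(ndvi);
}
```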

Data:

Pan-sharpening: Sentinel-2 / Landsat-8 and Sentinel-2 / Sentinel-3

Pan-sharpening, short for panchromatic sharpening, is a technique used in optical remote sensing that uses a high-resolution panchromatic image to sharpen a lower-resolution multi-spectral image. A panchromatic image is a black and white representation of the surface’s brightness in a single band and usually covers a large range of wavelengths in the visible part of the light spectrum. On the other hand, a multi-spectral image separates light into multiple discrete spectral bands. The resulting product of pan-sharpening is an image with a higher spectral resolution than the panchromatic image and a higher spatial resolution than the multi-spectral image. The best of both worlds!

Top: Landsat-8 True Color RGB (30m resolution) of Barcelona acquired on 22nd June 2020. Bottom: Landsat-8 panchromatic band 8 (15m resolution). (🌐EO Browser)

Satellites such as SPOT, Pleiades, or Landsat have sensors that provide both a panchromatic band and multi-spectral bands, allowing for easy pan-sharpening. For example, you can access pan-sharpened Landsat-8 images directly in EO Browser!

However, not all sensors offer the luxury of a panchromatic band. And even when they do, the resolution may still not be sufficient for certain applications. This is where data fusion comes into play…

Here, we show a basic example of pan-sharpening Landsat-8 images using a synthetic Sentinel-2 panchromatic band. The two satellites measure at similar wavelengths, providing an excellent opportunity to combine the data. Their synergistic use has the main advantage of increasing the frequency of acquisitions from 2 (Landsat-8) or 6 (Sentinel-2) to 8 per month. Because of the difference in resolution between the sensors (30 m for Landsat-8 and 10 m for Sentinel-2), the Landsat images are resampled to the 10 m grid so as not to lose the spatial information of Sentinel-2.

A large corpus of data fusion techniques can be found in the scientific literature, ranging from simple pixel-based approaches to more complex AI-based methods. Here, we used a simple averaging function to generate a panchromatic band from Sentinel-2 bands 2, 3, and 4. The resulting band was then combined with Landsat-8 bands 2, 3, and 4 to create the crisp True Color RGB image shown below.

Left: True Color RGB image acquired with Landsat-8 over Barcelona on the 21st May 2020. Centre: Synthetic panchromatic band from Sentinel-2 acquired on 22nd May 2020. Right: pan-sharpened Landsat-8 True Color RGB.
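A minimal sketch of this averaging approach is given below; the Brovey-style ratio used to inject the pan detail is one simple choice among many, and the brightness gain is illustrative:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "L8L1C", bands: ["B02", "B03", "B04"] },
      { datasource: "S2L1C", bands: ["B02", "B03", "B04"] }
    ],
    output: { bands: 3 }
  };
}

function evaluatePixel(samples) {
  const l8 = samples.L8L1C[0];
  const s2 = samples.S2L1C[0];
  // Synthetic panchromatic band: simple average of the Sentinel-2 visible bands
  const pan = (s2.B02 + s2.B03 + s2.B04) / 3;
  // Brovey-style ratio sharpening of the Landsat-8 RGB bands
  const l8Mean = (l8.B02 + l8.B03 + l8.B04) / 3;
  const ratio = l8Mean > 0 ? pan / l8Mean : 1;
  return [l8.B04 * ratio * 2.5, l8.B03 * ratio * 2.5, l8.B02 * ratio * 2.5];
}
```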

The Sentinel-3 constellation provides almost daily images covering wavelengths from the visible part of the spectrum to the thermal infrared, which is extremely useful for mapping rapid changes of the Earth’s surface. However, the 300-meter resolution misses many smaller features, such as buildings, roads, and other linear structures. Pierre Markuse shows an example of data fusion where he sharpens a recent Sentinel-3 OLCI scene with an older Sentinel-2 scene. It is exciting to see the potential for extracting detailed information at such a high temporal resolution, and this is only a first step in this direction.

Sentinel-3 True Color RGB of the East Frisian Islands, Germany on the 1st June 2020. The scene was sharpened using a Sentinel-2 image. (🌐EO Browser)

Data:

  • Landsat-8 pan-sharpened with Sentinel-2: access the cURL request in this Gist (EO Browser support coming soon).
  • Sharpened Sentinel-3 OLCI data can be explored in EO Browser.

Ship Detection with Sentinel-1 and Sentinel-2

Sentinel-1 is a robust tool for the continuous monitoring of ships, as the images are not affected by weather conditions or day-night cycles. Most ships show up in SAR images as bright pixels, as they are highly reflective in VV and VH polarisations. However, when a body of water is close to land, such as a river or a coastline, automated detection of ships can be fooled by buildings or land-based objects that act as corner reflectors, resulting in false positives. To remove the unwanted signal from the Sentinel-1 images, land features can be masked out using optical imagery by exploiting the spectral response of water. For example, the Normalized Difference Water Index (NDWI), based on the green and near-infrared bands, is efficient for water body mapping, as water reflects green light but strongly absorbs in the near-infrared part of the electromagnetic spectrum.

Sentinel-1 VV decibel gamma0 band over the Yangtze river near Zhenjiang, China on 23rd May 2020. Ships appear as bright pixels but can be confused with surrounding buildings. (🌐EO Browser)

This ship-detection script uses NDWI calculated from Sentinel-2 images to mask water bodies and returns white pixels wherever it detects high backscatter values in Sentinel-1 VV and VH polarisations. Although the script largely reduces the number of false detections, bridges and edges of narrow waterways are detected as maritime traffic, and a number of coloured ships often go undetected. Despite these drawbacks, the script can be useful to estimate ship traffic density automatically in rivers or close to shorelines. For visualisation purposes, all the pixels masked out by the NDWI product are mapped with Sentinel-2 True Color RGB.

Ship traffic on the Yangtze river, at Zhenjiang and Yangzhong, China on 23rd May 2020. High backscatter areas detected with Sentinel-1 within the water bodies, which are masked using Sentinel-2’s NDWI product, are returned as white pixels. (🌐EO Browser)
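A minimal sketch of this masking logic, assuming standard band names and purely illustrative backscatter thresholds:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "S2L1C", bands: ["B02", "B03", "B04", "B08"] },
      { datasource: "S1GRD", bands: ["VV", "VH"] }
    ],
    output: { bands: 3 }
  };
}

function evaluatePixel(samples) {
  const s2 = samples.S2L1C[0];
  const s1 = samples.S1GRD[0];
  // NDWI from the green and near-infrared bands: positive over water
  const ndwi = (s2.B03 - s2.B08) / (s2.B03 + s2.B08);
  // Flag strong backscatter over water as a probable ship (thresholds illustrative)
  const isShip = ndwi > 0 && (s1.VV > 0.1 || s1.VH > 0.05);
  if (isShip) return [1, 1, 1];
  // Everywhere else: Sentinel-2 True Color
  return [s2.B04 * 2.5, s2.B03 * 2.5, s2.B02 * 2.5];
}
```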

Data:

Fire progression monitoring with Sentinel-2 and Sentinel-1

In this example, we monitor the progression of a forest fire that ravaged the border region of Paraguay and Bolivia in September 2019. The Sentinel-2 short-wave infrared (SWIR) composite (Red: Band 12, Green: Band 8, Blue: Band 4) lets us draw conclusions about the water content in soil and plants, as water strongly absorbs SWIR wavelengths. Consequently, the visualisation is very useful for mapping fire damage, e.g. burn scars, because the moisture difference between burned areas and their unaffected surroundings contrasts very well.

Sentinel-2 SWIR composite of the forest fire in the border region of Paraguay and Bolivia on 7 September 2019. (🌐EO Browser)

When working with time-series, unfavourable weather conditions can disrupt optical data analysis. In our case, we obtained one Sentinel-2 dataset with perfect conditions on 7th September 2019, but ran into problems with the next acquisition date on 12th September 2019, as our area of interest was completely overcast.

Left: S-2 SWIR composite from 7th September 2019 showing the burn scar with active forest fires. Right: a True Color image of the next acquisition date on 12th September 2019 showing completely overcast conditions.

One big advantage of using data fusion techniques is that each sensor can compensate for the shortcomings of the other. Especially for the monitoring of dynamic environmental disturbances like forest fires, a data gap means uncertainty and could hinder mitigation strategies and delay countermeasures. The Sentinel-1 SAR sensor seemed like the perfect supplement: it penetrates clouds, and the recorded backscatter conveys information about vegetation and soil moisture levels.

Our first attempts at simply using VV and VH polarisation thresholds for the visualisation of burnt areas overestimated the extent and misclassified large parts of the agricultural fields in the area as burnt soil. Following a methodology developed by Spanish and Australian researchers in 2019, a VH backscatter difference layer was created by subtracting the earlier acquisition from the later one. Together with the custom script for Burned Area Visualisation, the difference layer served as additional input for mapping forest fire progression.

Left: Sentinel-1 image in VV polarisation mode on 12th September 2019 showing the propagation of the forest fire. Right: the VH backscatter difference layer, created by subtracting the earlier acquisition from the later one.

Our multitemporal forest fire progression script produces a fire propagation map where areas that were already burned on 7th September 2019 are coloured in light yellow and the newly burned areas in red. By combining data from two different satellite sensors we can avoid data gaps and clearly monitor different development stages of the forest fire, even during highly inconsistent weather conditions.
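A sketch of this multi-temporal logic, assuming two Sentinel-1 datasets configured with different time ranges under the hypothetical aliases S1before and S1after; the burn test and all thresholds are illustrative:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "S2L1C", bands: ["B04", "B08", "B12"] },
      { datasource: "S1before", bands: ["VH"] }, // hypothetical alias: earlier time range
      { datasource: "S1after", bands: ["VH"] }   // hypothetical alias: later time range
    ],
    output: { bands: 3 }
  };
}

// Crude optical burn test based on the normalised burn ratio (illustrative threshold)
function burnedOptical(s2) {
  const nbr = (s2.B08 - s2.B12) / (s2.B08 + s2.B12);
  return nbr < -0.2;
}

function evaluatePixel(samples) {
  const s2 = samples.S2L1C[0];
  // VH backscatter difference: later acquisition minus earlier acquisition
  const diff = samples.S1after[0].VH - samples.S1before[0].VH;
  if (burnedOptical(s2)) return [1, 1, 0.6]; // already burned (optical): light yellow
  if (diff < -0.02) return [1, 0.2, 0.1];    // newly burned (radar change): red
  // Background: Sentinel-2 SWIR composite (B12, B8, B4)
  return [s2.B12 * 2.5, s2.B08 * 2.5, s2.B04 * 2.5];
}
```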

Fire propagation map depicting previously burned areas from 7 September 2019 in light yellow (analysed with Sentinel-2 data), and newly burned areas in red (analysis based on Sentinel-1 data) on a Sentinel-2 SWIR composite background. (🌐EO Browser)

Data:

Sentinel-3 OLCI Background For Sentinel-5P Products

Data fusion doesn't necessarily imply complex data analysis and complicated algorithms. Merging datasets can also be very useful for visualisation purposes that help better understand the context of the data shown or simply create stunning pictures of the Earth and share them with EO Browser pins.

The main objective of the Sentinel-5P mission is to closely monitor our atmosphere. Level-2 products offer a variety of geophysical variables derived from the spectral measurements, including Cloud Top Pressure. This variable, calculated using a complex neural-network algorithm, gives valuable information on ongoing weather situations and is essential for weather forecasts.

In this example script, Sentinel-3 OLCI True Color RGB pixels are used to fill in gaps in Sentinel-5P cloud top pressure maps, helping to identify Hurricane Douglas as it approaches Hawaii. The script shows Sentinel-5P Cloud Top Pressure in bright colours, with blue shades representing low pressure and hues of orange and red representing high pressure. Where the Sentinel-5P pixels are transparent due to missing data, Sentinel-3 True Color is returned, giving an idea of the location of the acquisition (here the image was taken above the ocean, hence the blue colour) and creating a nicer visualisation. Given how easy it is to explore temporal data with EO Browser, we created an animation tracking the hurricane over 4 days as it was heading towards Hawaii.
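A minimal sketch of this fallback logic, using the dataMask band to detect missing Sentinel-5P pixels; the colour ramp and pressure range are illustrative, not the exact visualisation used above:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "S5PL2", bands: ["CLOUD_TOP_PRESSURE", "dataMask"] },
      { datasource: "S3OLCI", bands: ["B04", "B06", "B08"] }
    ],
    output: { bands: 3 }
  };
}

// Crude blue-to-red ramp over a plausible cloud top pressure range (Pa);
// blue shades for low pressure, red for high pressure
function pressureToColor(p) {
  const t = Math.min(Math.max((p - 10000) / 90000, 0), 1);
  return [t, 0.2, 1 - t];
}

function evaluatePixel(samples) {
  const s5p = samples.S5PL2[0];
  const s3 = samples.S3OLCI[0];
  // Where Sentinel-5P has valid data, show Cloud Top Pressure...
  if (s5p.dataMask === 1) return pressureToColor(s5p.CLOUD_TOP_PRESSURE);
  // ...otherwise fall back to a Sentinel-3 OLCI True Color background
  return [s3.B08 * 2.5, s3.B06 * 2.5, s3.B04 * 2.5];
}
```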

Hurricane Douglas passing by Hawaii, 23rd to 27th July 2020. Cloud Top Pressure derived from Sentinel-5P images mapped over Sentinel-3 OLCI True Color scenes. (🌐EO Browser)

Data:

A mini-guide to data fusion in Sentinel Hub

Learn how to create your own data fusion scripts using EO Browser or Sentinel Hub’s Process API.

To enable the data fusion functionality with custom scripting in EO Browser, after selecting an image in the Discover tab, check the Use additional datasets option under Custom script in the Visualize tab. Here you can select the datasets that match the ones in your custom script. For example, if you based your initial search and visualisation on Sentinel-2 L1C and want to combine it with Sentinel-1 GRD, you need to enable S-1 GRD as an additional dataset from the dropdown menu in the EO Browser interface, as shown below.

When you enable a dataset, you also have the option to specify the mosaicking order and a separate timespan for each additional dataset, which comes in very handy when using multi-temporal scripts. In the fire progression monitoring example, for instance, you could select two additional Sentinel-1 datasets with different time ranges.

Additionally, you can assign a Datasource alias to every enabled dataset, which is then referenced with the datasource parameter in the setup function of your evalscript (a detailed explanation follows below). The default values for the dataset alias are the standard datasource identifiers: S2L1C, S2L2A, S1GRD, S3SLSTR, S3OLCI, S5PL2, L8L1C, DEM, and MODIS.

EO Browser Custom Script panel displaying data fusion options.

In the custom script itself, you need to state //VERSION=3 at the beginning of your code, as data fusion is only supported in V3 evalscripts (a side note: from November 1st onward, V3 will be the only working script version for our services).

Additionally, you need to create a slightly different setup function than you might be used to. If you check the example setup function below, you’ll notice that the input section becomes a list in which you specify each datasource with its assigned Datasource alias and the bands used. The number of input bands also affects the number of processing units consumed, so only specify the bands you really need (see our tips on writing more efficient scripts).
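A representative version of such a setup function for the Sentinel-2 L1C + Sentinel-1 GRD case, using the default aliases; the band lists are illustrative:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "S2L1C", bands: ["B02", "B03", "B04"] },
      { datasource: "S1GRD", bands: ["VV", "VH"] }
    ],
    output: { bands: 3 }
  };
}
```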

Example setup function for EO Browser

When using our Process API for data fusion, you need to set up all the necessary datasets and parameters, such as timeRange, mosaickingOrder…, in the input section of the request body. In the payload, each dataset needs to be called with the standard datasource identifier under the key “type” (see the example below). The Process API allows the same flexibility as EO Browser in that you can freely assign an id (called Datasource alias in EO Browser) of your liking to each of the datasets, which you can then call with the datasource parameter in the setup function of your evalscript. If you are unsure about how to write a Process API request for data fusion, you will find valuable information in the documentation, which also contains the requests for the examples shown above.
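A representative request body is sketched below; the bounding box, time ranges, and ids ("s2", "s1") are illustrative:

```json
{
  "input": {
    "bounds": {
      "bbox": [33.35, 44.45, 33.70, 44.70],
      "properties": { "crs": "http://www.opengis.net/def/crs/EPSG/0/4326" }
    },
    "data": [
      {
        "id": "s2",
        "type": "S2L1C",
        "dataFilter": {
          "timeRange": { "from": "2020-05-11T00:00:00Z", "to": "2020-05-11T23:59:59Z" }
        }
      },
      {
        "id": "s1",
        "type": "S1GRD",
        "dataFilter": {
          "timeRange": { "from": "2020-05-10T00:00:00Z", "to": "2020-05-12T23:59:59Z" },
          "mosaickingOrder": "mostRecent"
        }
      }
    ]
  },
  "output": {
    "width": 512,
    "height": 512,
    "responses": [{ "identifier": "default", "format": { "type": "image/png" } }]
  },
  "evalscript": "<your data fusion evalscript, e.g. the one below>"
}
```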

Example request body intended for use in Sentinel Hub Process API

Below is an example of a setup function that references the customised ids from the request payload shown above in its datasource parameters. You can then access the values via the samples object (i.e. samples.id) in the evaluatePixel function of your evalscript and start playing with the data.
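A representative sketch of such an evalscript, using the ids from the request above; the cloud probability threshold and Sentinel-1 scaling factors are illustrative rather than Pierre Markuse’s exact values:

```javascript
//VERSION=3
function setup() {
  return {
    input: [
      { datasource: "s2", bands: ["B02", "B03", "B04", "CLP"] }, // id from the request payload
      { datasource: "s1", bands: ["VV", "VH"] }
    ],
    output: { bands: 3 }
  };
}

function evaluatePixel(samples) {
  const s2 = samples.s2[0];
  const s1 = samples.s1[0];
  // CLP is the s2cloudless cloud probability, scaled 0-255
  if (s2.CLP / 255 > 0.3) {
    // Cloudy pixel: substitute a Sentinel-1 backscatter composite
    return [s1.VV * 2, s1.VH * 8, s1.VH * 2];
  }
  // Clear pixel: plain Sentinel-2 True Color
  return [s2.B04 * 2.5, s2.B03 * 2.5, s2.B02 * 2.5];
}
```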

Example Evalscript containing the setup and evaluatePixel functions intended for use in Sentinel Hub Process API. Modified version of Pierre Markuse’s script.
A visualisation of Pierre Markuse’s cloud replacement data fusion script that substitutes cloud-covered pixels in Sentinel-2 imagery with Sentinel-1 SAR data in order to get information about the obscured land surface. The colourful patches are visualised radar backscatter of ground structures beneath the clouds over Ukraine on 11th May 2020. (🌐EO Browser)

We designed the example scripts above to showcase the new possibilities of data fusion. However, these are just a small step towards many exciting new applications. We are looking forward to seeing the results remote sensing experts and other users will come up with, unleashing the power of combined data sources. So the question now is: what incredible data fusion scripts are you going to create?

Example of a data fusion script created by a user.

If you have any questions or get stuck, you can engage with us and other users on our forum. Once you have a nice script you can share it with the world via our open repository for custom scripts, where we have a special data fusion section. And who knows, it could also help you win the third round of the Sentinel Hub Custom Script Contest!

Follow our Twitter and LinkedIn for news, so you don’t miss the latest developments, including new datasets and features!
