Ad hoc testing of algorithms globally
The main goal of the Sentinel-Hub services is to facilitate the use of satellite imagery.
This seemingly straightforward objective in reality encompasses a number of non-trivial steps: finding the appropriate images for the requested time range, cloud coverage and other constraints; extracting the needed pixels from a massive amount of data; combining them into something sensible, like a true-colour image; projecting the image to the required coordinate reference system; and so on.
When implementing the step in which the user would “create something sensible”, we spent quite some time thinking about various wishes that the users might have and how to organize the parameters and processing steps best. We wanted to give our users as many options as possible to define what “something sensible” is. What we came up with is the so-called “custom scripting”.
By employing the dynamically interpreted JavaScript language, and providing some specialized functions, we have liberated users from pre-defined EO product formulas and simple 3-band combinations, and given them an unrestricted playground to combine the bands of multispectral satellite data in unprecedented ways.
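As a flavour of what such a custom script looks like, here is a minimal true-colour sketch. In the actual evaluation environment the band reflectances are provided per pixel; we emulate that with a plain function, and the 2.5 brightness gain is an illustrative choice, not a fixed constant:

```javascript
// Minimal custom-script-style sketch: map red, green and blue bands
// (B04, B03, B02) to an RGB triple, brightened and clamped to [0, 1].
function trueColour(B02, B03, B04) {
  const gain = 2.5; // illustrative brightness factor
  return [B04, B03, B02].map(v => Math.min(1, gain * v));
}
```

The same idea scales from simple band combinations like this to arbitrary per-pixel logic.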
Testing your (pixel-based, for now) algorithm has never been simpler or faster. Let’s try it on an example.
A paper on cloud detection (A. Hollstein et al.: Ready-to-Use Methods for the Detection of Clouds, Cirrus, Snow, Shadow, Water and Clear Sky Pixels in Sentinel-2 MSI Images) presents a fairly complex classification decision tree (figure 8 in the article), built on a derived feature space, which correctly classifies 91% of the spectra.
The solution uses only band-math functions that select a single band (B), take the difference of two bands (S), or take the ratio of two bands (R), and can be written in JavaScript as
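A sketch of those three primitives, plus one illustrative tree node, might look as follows. Here `p` is a per-pixel object of band reflectances; the threshold values and class labels in `classifyPixel` are made up for demonstration (the real ones are in figure 8 of the article):

```javascript
// The three band-math primitives from the paper:
// B — a single band, S — difference of two bands, R — ratio of two bands.
const B = (p, a) => p[a];
const S = (p, a, b) => p[a] - p[b];
const R = (p, a, b) => p[a] / p[b];

// Illustrative fragment of a decision tree built from these primitives.
// All thresholds below are hypothetical placeholders, not the paper's values.
function classifyPixel(p) {
  if (B(p, "B03") < 0.325) {
    return S(p, "B06", "B07") < 0.01 ? "water_or_shadow" : "clear";
  }
  return R(p, "B01", "B05") > 1.0 ? "snow_or_cloud" : "cirrus";
}
```

The full tree is just a deeper nesting of the same `if`/`return` pattern.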
Here the colours correspond to those in the article, except for clear and shadow, which we render in natural (true) colour from the red, green and blue bands.
Another interesting example deals with volcanic eruptions (or fires). A visualisation like the one in figure 3 shows the eruption very clearly, but the blue slopes of Mount Etna distract the viewer.
Tweaking the same bands into
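one possible form of this combination (the unit weights on the SWIR bands are an assumption for illustration):

```javascript
// Overlay short-wave infrared bands B11 and B12 on the green and red
// channels, keeping blue as-is; clamp each channel to [0, 1].
function lavaVisualisation(B02, B03, B04, B11, B12) {
  return [B04 + B12, B03 + B11, B02].map(v => Math.min(1, v));
}
```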
where the red and green bands are overlaid with short-wave infrared bands 11 and 12 makes for a much more pleasing visualisation, as shown in figure 4:
Additionally tweaking the gamma for the snow and adding more gradients in the lava:
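such a tweak could be sketched as a gamma correction on top of the previous combination; the gamma value and the clamping are illustrative assumptions:

```javascript
// Gamma-correct a channel value: clamp to [0, 1], then raise to 1/gamma.
// gamma > 1 brightens mid-tones (taming dark slopes), while the clamp
// keeps snow from blowing out; intermediate lava values gain gradient.
function stretch(v, gamma) {
  return Math.pow(Math.min(1, Math.max(0, v)), 1 / gamma);
}

function lavaVisualisationTweaked(B02, B03, B04, B11, B12) {
  const gamma = 1.8; // illustrative value
  return [B04 + B12, B03 + B11, B02].map(v => stretch(v, gamma));
}
```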
finally results in an eye-pleasing visualisation:
The third example is a bit more math oriented. We would like to visually interpret how the normalised difference vegetation index (NDVI) is affected by the uncertainties in the detector reflectances of the L1C products.
Since NDVI is defined as the ratio of the difference to the sum of bands 8 and 4 (near infrared and red):
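written out, the definition reads:

```latex
\mathrm{NDVI} = \frac{B_8 - B_4}{B_8 + B_4}
```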
the uncertainty propagation gives us the uncertainty of the index itself as
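reconstructed from standard first-order error propagation (with the partial derivatives $\partial\,\mathrm{NDVI}/\partial B_8 = 2B_4/(B_4+B_8)^2$ and $\partial\,\mathrm{NDVI}/\partial B_4 = -2B_8/(B_4+B_8)^2$), this is:

```latex
\Delta \mathrm{NDVI}
  = \sqrt{\left(\frac{\partial\,\mathrm{NDVI}}{\partial B_4}\right)^{\!2} \Delta B_4^2
        + \left(\frac{\partial\,\mathrm{NDVI}}{\partial B_8}\right)^{\!2} \Delta B_8^2}
  = \frac{2\sqrt{B_8^2\,\Delta B_4^2 + B_4^2\,\Delta B_8^2}}{(B_4 + B_8)^2}
```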
where ΔB4 and ΔB8 are the uncertainties of the red and near infrared bands, respectively (reported by ESA to be 0.02 and 0.03). We left out the mixed ΔB4ΔB8 term, assuming the two uncertainties are uncorrelated.
Implementing this in a JavaScript function, together with some tweaking for a nicer visualisation, we get
The resulting visualisation represents the NDVI uncertainty stemming from the red and near infrared bands. The legend shows how darkness encodes uncertainty: the darker the image, the larger the error.
The image clearly shows how terrain affects the NDVI calculation: slopes and shadows carry a much larger uncertainty than flat areas. It also illustrates that NDVI has no meaning over water areas. The last image demonstrates the effect of cloud shadows on the NDVI calculation.
Originally published at sentinel-hub.com.