Area Monitoring — Observation outlier detection

Domagoj Korais
Published in Sentinel Hub Blog · Oct 6, 2020

Is it possible to detect anomalous values in the time-series of remote sensing data?

This post is one in a series of blogs on our work in Area Monitoring. We have decided to openly share our knowledge on this subject, as we believe that discussion and comparison of approaches are needed among all the groups involved. We welcome any kind of feedback, ideas, and lessons learned. For those willing to share theirs publicly, we are happy to host them here.

The problem

Appropriate data filtering is needed to detect ground-level changes from satellite data. This is a challenging task, since our atmosphere is quite dynamic and top-of-atmosphere (TOA) values can change for many reasons, regardless of what happens at the ground level.

In the context of the Area Monitoring project, we are interested in detecting changes in features-of-interest (FOIs) triggered by agricultural activities.

Once the cloudy observations have been filtered out using our state-of-the-art cloud masks (produced with s2cloudless), what is left is a mixture of valid observations and a set of undetected anomalous observations.
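As a rough illustration, cloud filtering with the s2cloudless Python package could look like the sketch below (the file name, array shape, and cloud-fraction tolerance are assumptions for the example, not the production setup):

```python
import numpy as np
from s2cloudless import S2PixelCloudDetector

# Sentinel-2 L1C reflectances, shape (n_timestamps, height, width, 13),
# scaled to [0, 1]. The detector settings below are the package defaults,
# not necessarily the ones used in the Area Monitoring pipeline.
data = np.load("s2_l1c_stack.npy")  # hypothetical input file

cloud_detector = S2PixelCloudDetector(
    threshold=0.4, average_over=4, dilation_size=2, all_bands=True
)
cloud_masks = cloud_detector.get_cloud_masks(data)  # (n_timestamps, h, w), 1 = cloudy

# Keep only timestamps where the area is (almost) cloud-free
cloud_fraction = cloud_masks.mean(axis=(1, 2))
valid_data = data[cloud_fraction < 0.05]  # assumed tolerance
```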

These anomalous observations are mostly caused by:

  • cloud shadows
  • snow
  • haze

The goal of the observation outlier detection algorithm is to identify such anomalous observations and obtain high-quality data for calculating the markers.

Some examples of outliers:

Example of FOI with outlier due to snow presence (right image).
Example of FOI with outlier due to cloud shadow (middle image), and partial cloud shadow (right image).

The solution

The developed solution utilises a supervised machine learning (ML) approach based on an LGBM model, trained on hand-labelled data of agricultural parcels over Slovenia collected in 2019. The training dataset consists of 100,260 observations, of which 14,336 are outliers.
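For readers who want to reproduce a similar setup, a minimal training sketch with the lightgbm package might look as follows (the file name, column names, and hyper-parameters are assumptions; the actual training configuration is not published in this post):

```python
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per (FOI, timestamp) observation, with the
# 15 input features listed below and a hand-made binary outlier label.
df = pd.read_csv("observations_slovenia_2019.csv")

band_cols = ["B01", "B02", "B03", "B04", "B05", "B06", "B07",
             "B08", "B8A", "B09", "B10", "B11", "B12"]
feature_cols = band_cols + ["NDVI", "NBSI"]

X_train, X_val, y_train, y_val = train_test_split(
    df[feature_cols], df["is_outlier"], test_size=0.2, stratify=df["is_outlier"]
)

# Assumed hyper-parameters, chosen only for illustration
model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)])
```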

Even though the model is trained and applied on super-pixels (pixel data aggregated over each FOI), it does not take spatial or temporal context into account, so it can also be used at the pixel level, which would not be possible if the model had to peek at adjacent pixels to make a prediction. Additionally, inference with such an approach is usually fast, meaning that a vast area can be analysed with a reasonable amount of resources.
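The super-pixel aggregation itself can be as simple as averaging the valid pixels of an FOI per timestamp; the choice of the mean as the aggregation statistic in this sketch is an assumption for the example:

```python
import numpy as np

def superpixel_features(bands: np.ndarray, foi_mask: np.ndarray) -> np.ndarray:
    """Aggregate one timestamp into a single 'super-pixel' feature vector.

    bands:    (height, width, n_bands) reflectance array
    foi_mask: (height, width) boolean mask of pixels inside the FOI
    Returns the per-band mean over the FOI, shape (n_bands,).
    """
    return bands[foi_mask].mean(axis=0)
```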

The input features of the model are:

  • 13 Sentinel-2 spectral bands (443–2190 nm)
  • Normalised Difference Vegetation Index (NDVI)
  • Normalised bare soil index (NBSI), as defined in the bare-soil marker blog post.

The output is binary: an FOI observation is classified as an outlier if the pseudo-probability (the model output) is above a chosen threshold.
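Putting it together, classifying a single super-pixel observation could look like the sketch below. The NDVI formula is standard; for NBSI the post defers to the bare-soil marker blog post, so the bare-soil index shown here is only a stand-in:

```python
import numpy as np

BANDS = ["B01", "B02", "B03", "B04", "B05", "B06", "B07",
         "B08", "B8A", "B09", "B10", "B11", "B12"]

def classify_observation(sp: dict, model, threshold: float = 0.5) -> bool:
    """Return True if the observation is classified as an outlier.

    sp: mean band reflectances of the FOI, e.g. {"B02": 0.07, ...}
    """
    ndvi = (sp["B08"] - sp["B04"]) / (sp["B08"] + sp["B04"])
    # Stand-in bare-soil index; see the bare-soil marker post for the
    # actual NBSI definition used by the model.
    nbsi = ((sp["B11"] + sp["B04"]) - (sp["B08"] + sp["B02"])) / \
           ((sp["B11"] + sp["B04"]) + (sp["B08"] + sp["B02"]))

    features = np.array([[sp[b] for b in BANDS] + [ndvi, nbsi]])
    pseudo_prob = model.predict_proba(features)[0, 1]
    return pseudo_prob > threshold
```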

Results

The model is evaluated using a test set of FOIs unseen during the training phase.

Of all the observations detected as outliers, 83% are true outliers (precision of 0.83). Of all the true outliers, the model detects 58% (recall of 0.58). These results are obtained with a detection threshold of 0.5 on the model pseudo-probabilities.
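With a held-out test set, these metrics can be computed directly with scikit-learn (variable names are hypothetical and continue the training sketch above):

```python
from sklearn.metrics import precision_score, recall_score

pseudo_probs = model.predict_proba(X_test)[:, 1]
y_pred = pseudo_probs > 0.5  # detection threshold used in the post

precision = precision_score(y_test, y_pred)  # ~0.83 reported above
recall = recall_score(y_test, y_pred)        # ~0.58 reported above
```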

To better understand how the model is affected by setting a different detection threshold, the Receiver Operating Characteristic (ROC) curve is used:

Changing the detection threshold changes the overall model performance. In this figure, it’s possible to evaluate model performance for three different threshold values.

The higher the detection threshold, the lower the number of false positives, but also the lower the number of true positives.
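This trade-off across all thresholds is exactly what an ROC sweep computes; a sketch with scikit-learn, reusing the hypothetical test-set variables from above:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

fpr, tpr, thresholds = roc_curve(y_test, pseudo_probs)

plt.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y_test, pseudo_probs):.2f}")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```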

Using the output of the model, we created a Sentinel Hub custom script that we can use in EO Browser to visualise the results. (Here you can find information on how to convert a trained model to a custom script.)

In the following images, the colours have the following meaning:

  • red: pixels detected as clouds by the cloud detector provided by Sentinel Hub.
  • blue: pixels detected as outliers by the outlier detector, using a threshold of 0.5.

An overview of Krško (Slovenia), taken on 2020-09-02. Click on the image to open it in EO Browser.

From this image we can draw the following conclusions:

  • The model correctly identifies cloud shadows for the majority of clouds.
  • Water is also identified as an outlier. In the context of area monitoring this does not pose a problem, as the model is applied to FOIs and not to water bodies.
  • Clouds are identified as outliers. This happens because the training dataset contains unfiltered clouds labelled as outliers, so the model learns to identify them.
  • Some forest regions are identified as outliers, probably because darker vegetation can be mistaken for cloud shadows. Again, as with water, this is not a big problem since the model is applied to FOIs.

The following image shows more clearly how the outlier mask follows the cloud shadow.

Detail from the previous image. Click on one of the images to open it in EO Browser.

The following figures show the effect of the outlier detector on a time-series in which different kinds of anomalous values are present. Setting a threshold of 0.5 means removing the observations for which the pseudo-probability (first figure) is higher than 0.5, i.e. the four highlighted cases.

Time series of detection pseudo-probability for an FOI; higher values correspond to correctly detected outliers.

The NDVI time-series for the same FOI shows that some of the detected outliers coincide with a drop in the NDVI value; removing these values is therefore essential for the subsequent markers to work properly. A sketch of this filtering step follows the figure below.

Time series of NDVI values for an FOI
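In code, cleaning the time-series before marker computation reduces to dropping the flagged observations (a sketch with made-up values; only the 0.5 threshold comes from the post):

```python
import pandas as pd

# Hypothetical FOI time-series with the model's pseudo-probability per date
ts = pd.DataFrame({
    "date": pd.to_datetime(["2019-05-01", "2019-05-06", "2019-05-11"]),
    "ndvi": [0.71, 0.32, 0.69],          # the dip is a cloud-shadow artefact
    "outlier_prob": [0.04, 0.87, 0.06],  # model pseudo-probabilities
})

# Keep only observations below the 0.5 detection threshold
clean_ts = ts[ts["outlier_prob"] <= 0.5].reset_index(drop=True)
```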

If you have found this article interesting and want to learn more about the capabilities of the outlier detector, you can check the results for yourself in EO Browser! Let us know how it performs in your region of interest.

Our research in this field is kindly supported, in grants and know-how, by our cooperation in Horizon 2020 (Perceptive Sentinel, NIVA, Dione) and ESA (Sen4CAP) projects. We are grateful for GZC's support in labelling the training dataset.
