Land Cover Classification with eo-learn: Part 3

Pushing Beyond the Point of “Good Enough”

Matic Lubej
Feb 14, 2019 · 10 min read
Transition of an area from the winter to the summer season, composed with Sentinel-2 images. Hints of the discriminative power of the snow cover can be noticed, as confirmed in the previous blog post.

Foreword

These past few weeks must have been quite hard on you. We published the first and the second part of the land cover classification on a country scale using eo-learn. eo-learn is an open-source package for bridging the gap between Earth Observation (EO) and Machine Learning (ML), but in the provided Jupyter notebook we only included example data and showed results for a small fraction of the whole area of interest (AOI). Big whoop… no big deal, right? I know that seems mediocre at best and, above all, quite rude of us. And all this time you were having trouble getting a good night’s sleep, wondering how to use all this knowledge and take it to the next level.

All our Data are Belong to You!

Are you sitting down yet? Maybe leave the hot coffee on your desk for just a bit longer and listen to the best news that you will hear all day…

The data is stored on the AWS S3 Cloud Object Storage and can be downloaded via this link: http://eo-learn.sentinel-hub.com/


EOExecute Order 66

Great, the data is being downloaded. While we wait for it to finish, let’s take a look at a nifty piece of eo-learn functionality that hasn’t been showcased yet: the EOExecutor class. It handles the execution and monitoring of a workflow and allows for the use of multiprocessing in a very intuitive and carefree way. No more searching Stack Overflow for how to parallelise your workflow properly or how to make the progress bar work with multiprocessing; EOExecutor takes care of both!
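If you would like to try it while the download runs, the typical usage looks roughly like the minimal sketch below. The dummy task, the workflow, and the per-patch argument list are placeholders rather than the actual pipeline from the notebook, and some names differ slightly between eo-learn versions (e.g. LinearWorkflow was later folded into EOWorkflow helpers).

from eolearn.core import EOTask, EOPatch, LinearWorkflow, EOExecutor

class DummyTask(EOTask):
    """A trivial placeholder task that just returns an (empty) EOPatch."""
    def execute(self, eopatch=None):
        return eopatch or EOPatch()

task = DummyTask()
workflow = LinearWorkflow(task)

# One dictionary of task arguments per EOPatch to be processed.
execution_args = [{task: {}} for _ in range(10)]

executor = EOExecutor(workflow, execution_args, save_logs=True)
executor.run(workers=4)   # parallel execution with a progress bar
executor.make_report()    # HTML report with timings, errors and the dependency graph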

Dependency graph of the tasks in the workflow, provided by eo-learn.

Experimenting with the ML Pipeline

As promised, this blog post is meant to show you how to start exploring different pipelines with eo-learn using the data we provided. Below we prepared two experiments, where we study the effects of clouds and the effects of different choices of resampling after the temporal interpolation on the final result. Lastly, we also started working with Convolutional Neural Networks (CNNs) and wanted to compare the results of the two different approaches — the pixel-based decision trees and the convolutional deep learning algorithms — to perform land cover classification.

Playing with Clouds

Clouds are a nuisance in the world of EO, especially when working with machine learning algorithms, where you want to detect the clouds and remove them from your dataset in order to perform a temporal interpolation over the missing data. But how big an improvement does this actually bring? Is the procedure really worth it? Rußwurm and Körner, in their paper Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders, even show that for deep learning the tedious procedure of cloud filtering might be completely unnecessary, since the classifier itself learns to detect clouds.
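For context, one way to obtain such cloud masks is the s2cloudless detector. Below is a minimal standalone sketch; the array shape and the threshold, averaging and dilation values are illustrative assumptions, not the settings of our pipeline.

import numpy as np
from s2cloudless import S2PixelCloudDetector

# Illustrative stack of Sentinel-2 reflectances with shape (time, height, width, bands),
# scaled to [0, 1]. s2cloudless expects the 10 bands its classifier was trained on
# (B01, B02, B04, B05, B08, B8A, B09, B10, B11, B12).
bands = np.random.rand(4, 128, 128, 10)

# Threshold, averaging and dilation values are illustrative, not tuned.
cloud_detector = S2PixelCloudDetector(threshold=0.4, average_over=4, dilation_size=2)

cloud_probs = cloud_detector.get_cloud_probability_maps(bands)  # per-pixel cloud probabilities
cloud_masks = cloud_detector.get_cloud_masks(bands)             # binary masks after thresholding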

Activation of the input (top) and the modulation (bottom) gate over the sequence of observations for a particular cell in the neural network. This cell has learned cloud masking and filtering, as input and modulation gates clearly show different activations on cloudy and non-cloudy observations. (Page 9 in https://www.researchgate.net/publication/322975904_Multi-Temporal_Land_Cover_Classification_with_Sequential_Recurrent_Encoders)
To quantify this, we compared the following pipeline variants:
  • A1) with scene filtering, cloud mask taken into account,
  • A2) without scene filtering, cloud mask taken into account,
  • A3) with scene filtering, cloud mask not taken into account,
  • A4) without scene filtering, cloud mask not taken into account.
A visual representation of a temporal stack of Sentinel-2 images over a randomly selected area. The transparent pixels on the left imply missing data due to cloud coverage. The stack in the centre represents the pixel values after cloudy scene filtering and temporal interpolation with cloud masking (case A1), while the stack on the right shows the case without cloudy scene filtering and with no cloud masking performed during interpolation (case A4).
Results of overall accuracy and weighted F1 scores for different workflows with regard to cloud effects.

Effects of Different Choices of Temporal Resampling

The choice of temporal resampling after the interpolation is not obvious. On one hand, we want a relatively fine grid of sampled dates so that we don’t lose valuable data, but at some point all available information has been taken into account, so adding more sampling dates does not improve the result further. On the other hand, we are constrained by computing resources: decreasing the interval step by a factor of 2 doubles the number of time frames after the interpolation, and therefore increases the number of features used in training the classifier. Is the improvement of the result in this case large enough to justify the increased use of computing resources? Check the results below!
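First, for reference, this is roughly how such a resampling grid is specified in eo-learn’s interpolation task. It is a minimal sketch: the feature names and the date range are placeholders, and the task is called LinearInterpolationTask in newer eo-learn releases.

from eolearn.core import FeatureType
from eolearn.features import LinearInterpolation

# Interpolate band values over time, using a validity mask to skip cloudy or
# missing observations, then resample onto a uniform 16-day grid.
# The feature names and the date range below are placeholders.
interpolation_task = LinearInterpolation(
    (FeatureType.DATA, 'BANDS'),
    mask_feature=(FeatureType.MASK, 'IS_VALID'),
    resample_range=('2017-01-01', '2017-12-31', 16),
)

# Halving the step to 8 days doubles the number of resampled time frames,
# and with it the number of per-pixel features seen by the classifier.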

We compared the following resampling choices:
  • B1) uniform resampling with a 16-day interval step,
  • B2) uniform resampling with an 8-day interval step,
  • B3) optimal “cherry-picked” dates, same number of dates as in B2.
This plot shows the number of EOPatches that contain image data for each day of 2017 (blue). The overlaid lines (red) represent the optimal dates for the resampling choice, based on the Sentinel-2 acquisitions for the given AOI in 2017.
Results of overall accuracy and weighted F1 scores for different workflows with regard to different resampling choices.
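For reference, both scores can be computed from flattened per-pixel labels and predictions with scikit-learn; the tiny arrays below are placeholders.

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Placeholder per-pixel labels; in practice these come from flattening the
# reference map and the classifier prediction over the validation patches.
y_true = np.array([1, 1, 2, 3, 3, 3])
y_pred = np.array([1, 2, 2, 3, 3, 1])

overall_accuracy = accuracy_score(y_true, y_pred)           # fraction of correctly classified pixels
weighted_f1 = f1_score(y_true, y_pred, average='weighted')  # per-class F1, weighted by class support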

Deep Learning: Using a Convolutional Neural Network (CNN)

Deep learning methods have become state-of-the-art in many tasks in fields such as computer vision, natural language processing, and signal processing, thanks to their ability to extract patterns from complex, high-dimensional input data. Classical ML methods (such as decision trees) have been used in many EO applications to analyse temporal series of satellite images. CNNs, on the other hand, have been employed to analyse the spatial correlations between neighbouring observations, but mainly on single temporal scenes. We wanted to investigate a deep learning architecture capable of analysing the spatial and the temporal aspects of satellite imagery simultaneously.
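To make the spatio-temporal idea concrete, here is a minimal sketch of a network that convolves over a (time, height, width, bands) stack with Keras 3-D convolutions. It only illustrates the principle; it is not the TFCN architecture used for the results below, and all sizes are placeholder assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # illustrative number of land cover classes

# Input: a temporal stack of image patches, shape (time, height, width, bands).
inputs = tf.keras.Input(shape=(23, 64, 64, 6))

# 3-D convolutions mix information across time and space simultaneously.
x = layers.Conv3D(32, kernel_size=(3, 3, 3), padding='same', activation='relu')(inputs)
x = layers.Conv3D(32, kernel_size=(3, 3, 3), padding='same', activation='relu')(x)

# Collapse the temporal axis, then classify every pixel of the spatial grid.
x = layers.Lambda(lambda t: tf.reduce_max(t, axis=1))(x)  # temporal max-pooling
outputs = layers.Conv2D(NUM_CLASSES, kernel_size=1, activation='softmax')(x)

model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()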

Architecture of the TFCN (Temporal Fully-Convolutional Network) deep learning model.
Comparison of different predictions of land cover classification. True colour image (top left), ground-truth land cover reference map (top right), prediction with the LightGBM model (bottom left), and prediction with the U-Net model (bottom right).

Other Experiments

There are many more experiments that could still be done, but we can neither think of all of them nor perform them all ourselves. That’s where you come in! Show us what you can do with this dataset and help us improve the results!


The End!

Hopefully, you have enjoyed reading and learning about land cover classification with eo-learn in this blog post trilogy. We feel that we paved the way well enough for you to start exploring big data in EO on your own and can’t wait to see what comes out of it.

We really believe in the open-source community and feel that it’s crucial for pushing the boundaries of the knowledge frontier. Thanks so much for participating and contributing!

Link to Part 1: https://medium.com/sentinel-hub/land-cover-classification-with-eo-learn-part-1-2471e8098195


Sentinel Hub Blog

Stories from the next generation satellite imagery platform

Thanks to Devis Peressutti and Matej Aleksandrov

Written by Matic Lubej, Data Scientist from Slovenia with a background in Particle Physics.
