Land Cover Classification with eo-learn: Part 3

Pushing Beyond the Point of “Good Enough”

Matic Lubej
Feb 14, 2019
Transition of an area from winter to summer, composed from Sentinel-2 images. Hints of the discriminative power of snow cover can already be seen here, as confirmed in the previous blog post.

Foreword

All our Data are Belong to You!

The data is stored on the AWS S3 Cloud Object Storage and can be downloaded via this link: http://eo-learn.sentinel-hub.com/


EOExecute Order 66

Dependency graph of the tasks in the workflow, provided by eo-learn.
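Once the individual tasks are defined, eo-learn chains them into a single workflow and executes it over every EOPatch in the AOI. Below is a minimal sketch of that execution step, assuming the eo-learn 0.x API of the time (LinearWorkflow, EOExecutor); the pass-through task and the number of patches are placeholders, not the actual pipeline from the post.

```python
from eolearn.core import EOPatch, EOTask, LinearWorkflow, EOExecutor

class PassThroughTask(EOTask):
    """Placeholder task that simply returns the EOPatch it receives."""
    def execute(self, eopatch=None):
        return eopatch if eopatch is not None else EOPatch()

# In the real pipeline these would be the load / cloud-mask / interpolation /
# prediction / save tasks whose dependency graph is shown above.
task_a, task_b, task_c = PassThroughTask(), PassThroughTask(), PassThroughTask()
workflow = LinearWorkflow(task_a, task_b, task_c)

# One dictionary of task arguments per EOPatch to be processed.
execution_args = [{} for _ in range(10)]

executor = EOExecutor(workflow, execution_args, save_logs=True)
executor.run(workers=5)   # process the patches in parallel
executor.make_report()    # HTML report with per-task timings and errors
```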

Experimenting with the ML Pipeline

Playing with Clouds

Activation of the input (top) and modulation (bottom) gates over the sequence of observations for a particular cell in the neural network. This cell has learned cloud masking and filtering, as the input and modulation gates clearly show different activations on cloudy and cloud-free observations. (Page 9 in https://www.researchgate.net/publication/322975904_Multi-Temporal_Land_Cover_Classification_with_Sequential_Recurrent_Encoders)
A visual representation of a temporal stack of Sentinel-2 images over a randomly selected area. The transparent pixels on the left indicate missing data due to cloud cover. The stack in the centre shows the pixel values after cloudy-scene filtering and temporal interpolation with cloud masking (case A4), while the stack on the right shows the case with no cloudy-scene filtering and no cloud masking during interpolation (case A1).
Overall accuracy and weighted F1 scores for the different workflows with regard to cloud effects.
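The cases A1–A4 mentioned above differ in whether cloudy scenes are filtered out and whether the cloud mask is used as a validity mask during temporal interpolation. The sketch below shows the two ingredients of the cloud-aware variant (A4), assuming the s2cloudless cloud detector and the eo-learn 0.x LinearInterpolation task; the array shapes and feature names ('FEATURES', 'IS_VALID') are illustrative.

```python
import numpy as np
from s2cloudless import S2PixelCloudDetector
from eolearn.core import FeatureType
from eolearn.features import LinearInterpolation

# Pixel-level cloud classifier from the s2cloudless package.
cloud_detector = S2PixelCloudDetector(threshold=0.4, average_over=4, dilation_size=2)

# Dummy (time, height, width, band) stack of the 10 s2cloudless input bands.
bands = np.random.rand(5, 64, 64, 10)
cloud_masks = cloud_detector.get_cloud_masks(bands)  # 1 = cloudy pixel

# Interpolate the feature stack over time, ignoring pixels flagged as invalid
# (e.g. cloudy) by the 'IS_VALID' mask and resampling to a uniform 16-day grid.
interpolation_task = LinearInterpolation(
    'FEATURES',
    mask_feature=(FeatureType.MASK, 'IS_VALID'),
    resample_range=('2017-01-01', '2017-12-31', 16)
)
```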

Effects of Different Temporal Resampling Choices

Number of EOPatches containing image data for each day of 2017 (blue). The overlaid red lines mark the optimal resampling dates, chosen based on the Sentinel-2 acquisitions for the given AOI in 2017.
Overall accuracy and weighted F1 scores for the different resampling choices.
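The resampling experiments only change the resample_range argument of the interpolation task: either a uniform grid over 2017 or an explicit list of dates derived from the actual Sentinel-2 acquisitions over the AOI, as in the plot above. A sketch of both variants follows, assuming resample_range also accepts a list of date strings; the dates listed are illustrative, not the optimal dates used in the post.

```python
from eolearn.features import LinearInterpolation

# Variant 1: uniform resampling, one frame every 16 days over 2017.
uniform_interp = LinearInterpolation(
    'FEATURES',
    resample_range=('2017-01-01', '2017-12-31', 16)
)

# Variant 2: resampling to explicit dates chosen from the real acquisition
# calendar over the AOI (illustrative values only).
optimal_dates = ['2017-01-08', '2017-03-21', '2017-05-14', '2017-07-02']
optimal_interp = LinearInterpolation(
    'FEATURES',
    resample_range=optimal_dates
)
```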

Deep Learning: Using a Convolutional Neural Network (CNN)

Architecture of the TFCN deep learning model.
Comparison of land cover classification predictions. True-colour image (top left), ground-truth land cover reference map (top right), prediction of the LightGBM model (bottom left), and prediction of the U-Net model (bottom right).
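For readers who want a feel for the encoder-decoder pattern behind the U-Net comparison above, here is a minimal single-image U-Net-style model in Keras. It is deliberately not the TFCN architecture from the post (which also convolves over the temporal dimension); the input shape and number of classes are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def mini_unet(input_shape=(256, 256, 6), n_classes=10):
    """A minimal U-Net-style encoder-decoder for per-pixel classification."""
    inputs = layers.Input(shape=input_shape)

    # Encoder: two downsampling stages.
    c1 = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding='same', activation='relu')(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = layers.Conv2D(128, 3, padding='same', activation='relu')(p2)

    # Decoder: upsample and concatenate the skip connections.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding='same')(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(64, 3, padding='same', activation='relu')(u2)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding='same')(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(32, 3, padding='same', activation='relu')(u1)

    # Per-pixel class probabilities.
    outputs = layers.Conv2D(n_classes, 1, activation='softmax')(c4)
    return Model(inputs, outputs)

model = mini_unet()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```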

Other Experiments

The End!

We hope you have enjoyed reading and learning about land cover classification with eo-learn in this blog post trilogy. We feel we have paved the way well enough for you to start exploring big data in EO on your own, and we can't wait to see what comes out of it.

We truly believe in the open-source community and feel that it is crucial for pushing the frontier of knowledge. Thanks so much for participating and contributing!
