Land Cover Classification with eo-learn: Part 3
Pushing Beyond the Point of “Good Enough”
These past few weeks must have been quite hard on you. We published the first and the second part of the land cover classification on a country scale using eo-learn, the open-source package for bridging the gap between Earth Observation (EO) and Machine Learning (ML), but in the provided Jupyter notebook we only provided example data and showed the results for a small percentage of the whole area of interest (AOI). Big whoop… no big deal, right? I know that seems mediocre at best and, above all, quite rude on our behalf. And all this time you were having trouble getting a good night’s sleep, wondering how to use all this knowledge and take it to the next level.
Don’t worry… The third part of this blog series will provide you with the means to do just that! So go grab a cup of coffee, sit down, and get ready…
All our Data are Belong to You!
Are you sitting down yet? Maybe leave the hot coffee on your desk for just a bit longer and listen to the best news that you will hear all day…
Here at Sinergise, we have decided to share the dataset for the whole region of Slovenia for the year 2017. With all of you. For free. You can now get your hands on 200 GB of data in the form of about 300 EOPatches, each roughly 1000 x 1000 pixels in size at 10 m resolution! You can read more about our EOPatch data format in one of our previous blog posts about eo-learn, but essentially it’s a data container for spatio-temporal EO and non-EO data and their derivatives.
We haven’t been cheap with our data, either. Each EOPatch contains Sentinel-2 L1C images, the corresponding s2cloudless cloud masks, and the official land use data in the form of a raster map!
The data is stored on the AWS S3 Cloud Object Storage and can be downloaded via this link: http://eo-learn.sentinel-hub.com/
Each EOPatch is a container of EO and non-EO data. You can load an EOPatch in eo-learn with the `EOPatch.load` command; printing the loaded object reveals its structure, and its various contents can then be accessed through attribute calls such as `eopatch.data` and `eopatch.mask`.
EOExecute Order 66
Great, the data is downloading. While we wait, let’s take a look at a nifty functionality of eo-learn that hasn’t been showcased yet: the EOExecutor class. This module handles the execution and monitoring of a workflow and allows for the use of multiprocessing in a very intuitive and carefree way. No more searching Stack Overflow for how to properly parallelise your workflow, or how to make the progress bar work with multiprocessing; EOExecutor takes care of both!
Additionally, it handles any errors that occur, and it can generate a summary of the execution process. The latter is crucial for making sure that your results are reproducible in the future, so you don’t lose precious company time tracing back your steps to find out which parameters you used to produce the results last Thursday at 9:42 AM after a whole night of drinking with friends (don’t drink and code!). It even produces a cool-looking dependency graph of the workflow, which you can show to your boss!
Experimenting with the ML Pipeline
As promised, this blog post is meant to show you how to start exploring different pipelines with eo-learn using the data we provided. Below we prepared two experiments, where we study the effects of clouds and the effects of different choices of resampling after the temporal interpolation on the final result. Lastly, we also started working with Convolutional Neural Networks (CNNs) and wanted to compare the results of two different approaches to land cover classification: pixel-based decision trees and convolutional deep learning algorithms.
Unfortunately, there is no simple “yes” or “no” answer that would generalise well for all cases when deciding on which experiments to perform. You can study the problem and make some assumptions in order to decide if the effort is worth it, but in the end, improving the pipeline always comes down to the most fundamental method of problem-solving. Trial and error.
Playing with Clouds
Clouds are a nuisance in the world of EO, especially when working with machine learning algorithms, where you want to detect the clouds and remove them from your dataset in order to perform a temporal interpolation over the missing data. But how big an improvement does this actually bring? Is the procedure really worth it? Rußwurm and Körner, in their paper Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders, even show that for deep learning the tedious procedure of cloud filtering might be completely unnecessary, since the classifier itself learns how to predict clouds.
As a reminder, let’s revise the cloud filtering procedure (explained in detail in the previous blog post). After obtaining the Sentinel-2 image data, we first filter out cloudy scenes: only the time frames with a ratio of non-cloudy pixels larger than 80 % are kept (the threshold might vary for different areas of interest). Secondly, temporal interpolation is performed to evaluate pixel values for arbitrary dates in the given interval. In this step, the cloud masks are taken into account, so that the values of cloudy pixels do not affect the interpolation.
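These two steps can be sketched with plain NumPy. This is a toy illustration of the logic, not the actual eo-learn tasks; the 80 % threshold is the one mentioned above:

```python
import numpy as np

def filter_cloudy_scenes(bands, cloud_mask, max_cloud_ratio=0.2):
    # Keep only time frames whose cloudy-pixel ratio stays below the
    # threshold, i.e. at least 80 % of the pixels are cloud-free.
    # cloud_mask: (time, height, width) boolean array, True = cloudy.
    cloud_ratio = cloud_mask.mean(axis=(1, 2))
    keep = cloud_ratio <= max_cloud_ratio
    return bands[keep], cloud_mask[keep], keep

def interpolate_pixel(values, cloudy, days, resampled_days):
    # Linearly interpolate one pixel's time series onto a new date grid,
    # ignoring the observations flagged as cloudy.
    valid = ~cloudy
    return np.interp(resampled_days, days[valid], values[valid])
```

The real pipeline does this for every pixel and band at once, but the principle is the same: cloudy observations never enter the interpolation.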
Four possible variations of the pipeline present themselves:
- A1) with scene filtering, cloud mask taken into account,
- A2) without scene filtering, cloud mask taken into account,
- A3) with scene filtering, cloud mask not taken into account,
- A4) without scene filtering, cloud mask not taken into account.
We already executed pipeline variation A1 and showed its results, so we can compare it to the other pipeline variations. Preparing the different pipelines and training the model is straightforward at this point. You only need to make sure that you’re not comparing apples to oranges: in all four variations of the pipeline, you have to train and validate on the same collection of pixels from the same collection of EOPatches, using the same train/test splits!
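One simple way to guarantee this is to fix the train/test split once, with a seeded shuffle of patch indices, and reuse it in every experiment. A sketch (the seed and fraction below are arbitrary choices, not the values we used):

```python
import numpy as np

def fixed_patch_split(n_patches, test_fraction=0.2, seed=42):
    # Deterministic train/test split over EOPatch indices; with the same
    # seed, every pipeline variation sees exactly the same split.
    rng = np.random.RandomState(seed)
    indices = rng.permutation(n_patches)
    n_test = int(round(n_patches * test_fraction))
    return indices[n_test:], indices[:n_test]   # train, test
```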
The results are shown in the table below. You can see that in this particular application the clouds do not seem to affect the overall performance much! This might be due to the fact that the reference map quality is very high and the model is able to correctly determine the land cover label based on just a few observations. However, this might just be the case for this particular AOI and the results probably do not generalise well for all cases, so don’t discard your cloud detection step from your workflow based on just these results!
Effects of Different Choice of Temporal Resampling
The choice of temporal resampling after the interpolation is not obvious. On one hand, we want a relatively fine grid of sampled dates in order not to lose valuable data, though at some point all available information has been taken into account and adding more sampling dates does not improve the result further. On the other hand, we are constrained by computing resources. Decreasing the interval step by a factor of 2 doubles the number of time frames after the interpolation, and therefore the number of features used in training the classifier. Is the improvement of the result large enough to justify the increased use of computing resources? Check the results below!
For this experiment, we always use the pipeline variation A1 as the starting point. After the interpolation, we resample with the following variations:
- B1) uniform resampling with a 16-day interval step,
- B2) uniform resampling with an 8-day interval step,
- B3) optimal “cherry-picked” dates, with the same number of dates as in B2,
where the selection in B3 is based on the most common dates for all EOPatches in the selected area of interest.
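Constructing the uniform grids for B1 and B2 is straightforward; they differ only in the step (B3’s dates would instead be hand-picked). A sketch for a hypothetical year-long interval:

```python
import datetime as dt

def uniform_dates(start, end, step_days):
    # Uniform date grid from start to end (inclusive) with a fixed step.
    dates = []
    current = start
    while current <= end:
        dates.append(current)
        current += dt.timedelta(days=step_days)
    return dates

grid_b1 = uniform_dates(dt.date(2017, 1, 1), dt.date(2017, 12, 31), 16)
grid_b2 = uniform_dates(dt.date(2017, 1, 1), dt.date(2017, 12, 31), 8)
# Halving the step roughly doubles the number of time frames, and with
# it the number of per-pixel features fed to the classifier.
```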
Looking at the table below, one can see that the results are similarly anticlimactic as in the cloud effects experiment. In both the B2 and B3 cases, the amount of required computing resources approximately doubles due to the increased number of training features, while the gain in overall accuracy (OA) and weighted F1 score is less than one percent. Such improvements are too small to be visible in a real application, so the 16-day resampling rate seems to be a good fit for the given scenario.
Deep Learning: Using a Convolutional Neural Network (CNN)
Deep learning methods have become state-of-the-art in many tasks in fields such as computer vision, natural language processing, and signal processing, owing to their ability to extract patterns from complex high-dimensional input data. Classical ML methods (such as decision trees) have been used in many EO applications to analyse temporal series of satellite images. CNNs, on the other hand, have been employed to analyse the spatial correlations between neighbouring observations, but mainly in single-temporal-scene applications. We wanted to investigate a deep learning architecture capable of analysing the spatial and the temporal aspects of satellite imagery simultaneously.
In order to do this, we used a Temporal Fully-Convolutional Network (TFCN), i.e. a temporal extension of a U-Net, implemented in TensorFlow. In more detail, the architecture exploits spatio-temporal correlations to maximise the classification score, with the additional benefit of representing spatial relationships at different scales thanks to the encoding-decoding U-Net structure. As in the case of the classical ML models, the output of the network is a 2D label map, which is compared to the ground-truth labels.
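While our TFCN implementation is not public yet, the general shape of such a network can be sketched in Keras. The following is a toy encoder-decoder with a single skip connection and made-up layer sizes, not our actual architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Made-up sizes: 8 time frames, 64 x 64 pixels, 13 bands, 10 classes.
T, H, W, C, N_CLASSES = 8, 64, 64, 13, 10

inputs = tf.keras.Input(shape=(T, H, W, C))

# Encoder: 3D convolutions mix spatial and temporal information.
skip = layers.Conv3D(16, (3, 3, 3), padding='same', activation='relu')(inputs)
x = layers.MaxPool3D(pool_size=(1, 2, 2))(skip)  # downsample space, keep time
x = layers.Conv3D(32, (3, 3, 3), padding='same', activation='relu')(x)

# Collapse the temporal axis so the output is a single 2D label map.
collapse = layers.Lambda(lambda t: tf.reduce_max(t, axis=1))
x = collapse(x)

# Decoder: upsample back and merge the (collapsed) skip connection.
x = layers.UpSampling2D((2, 2))(x)
x = layers.Concatenate()([x, collapse(skip)])
x = layers.Conv2D(16, 3, padding='same', activation='relu')(x)
outputs = layers.Conv2D(N_CLASSES, 1, activation='softmax')(x)

model = tf.keras.Model(inputs, outputs)  # per-pixel class probabilities
```

The max over the temporal axis is just one possible way to collapse time into a single map; a real TFCN has more levels, more filters, and skip connections at every scale.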
The trained model was used to predict the labels on the test sample, and the obtained results were then validated against the ground truth. An overall accuracy of 84.4 % and a weighted F1 score of 85.4 % were achieved.
These results represent preliminary work on a prototype architecture that was not optimised for the task at hand. Despite this, the results are in line with some of the reported works in the field. Optimisation of the architecture (e.g. number of features, depth of the network, number of convolutions) and of the hyper-parameters (e.g. learning rate, number of epochs, class weighting) is required to fully assess the potential of TFCNs. We look forward to continuing this deep exploration (pun intended), and we even plan to share our code once it’s in a presentable format.
There are many more experiments that could still be done, but we can neither think of them all nor perform them all ourselves. That’s where you come in! Show us what you can do with this dataset and help us improve the results!
For example, one of our outside colleagues is starting an Earth on AWS internship with us, where they will work on a project on land cover classification based on a temporal stack of single-image CNN predictions. The idea is that, for certain land cover classes, such as artificial surfaces, water, or certain types of forest, the spatial context might be sufficient to identify them without taking the temporal information into account. We are excited to see where this idea takes us, and a dedicated blog post is also planned!
Hopefully, you have enjoyed reading and learning about land cover classification with eo-learn in this blog post trilogy. We feel that we have paved the way well enough for you to start exploring big data in EO on your own, and we can’t wait to see what comes out of it.
We really believe in the open-source community and feel that it’s crucial for pushing the boundaries of the knowledge frontier. Thanks so much for participating and contributing!