How to Make the Perfect Time-Lapse of the Earth

A detailed guide covering various examples on making animations from satellite imagery

Matic Lubej
Sentinel Hub Blog
10 min read · Feb 2, 2021


Agriculture fields of the southern Limpopo region, South Africa (source).

No matter what you do in life, there is one thing connecting us all — communication. To understand each other, we must know how to convey a message to the person (or an audience) in front of us, which points to emphasise, and which details to leave out. This is especially important when trying to influence important decisions in science, politics, etc.

One of the most efficient ways of communicating complex scientific data is via visual media, such as graphs, charts, infographics, or — the focus of this post — animations.

Simple graph, simple message: more dogs == more awesome.

This post will guide you through making time-lapse animations from satellite images obtained via Sentinel Hub services, which put many different data sources at your disposal. The focus will mostly be on Python and GIS-related tools.

A Jupyter notebook is also available to go hand-in-hand with this blog post, in case you want to follow along in detail or try this approach yourself at some later point.

Let’s begin!

101: Fast and Cheap

Sometimes there is not enough time to wait for results; maybe the article you’re writing has a strict deadline, or you just want that one quick time-lapse of a volcano eruption to share on Twitter. In this case, there is a solution that requires no programming and no waiting for any process to finish: just your favourite web browser.

Surf over to the Sentinel Hub EO Browser, set the location and time interval of interest, select the data source, and that’s it! Then click the time-lapse button in the sidebar, optionally set the cloud-coverage filter or manually remove some scenes, and download the animation.

EO Browser time-lapse user interface (link).

Here’s what you get.

Construction of the Samuel De Champlain Bridge in Montreal (source).

The time-lapse is good, but not perfect. You can see what is going on, but there are still some clouds in the images. The cloud-coverage slider in the time-lapse UI is not ideal, since the value represents the amount of clouds at the tile level, not at the image level, which means you might be throwing useful data out the window. In this case, we luckily still have plenty of images left after applying a tight threshold of 10 % on the tile-level cloud coverage, but this might not always be the case.

201: Removing the clouds

As soon as you want something more specific, there is no alternative but to get your hands dirty, because this most probably means working more directly with the data. One of the most convenient ways of doing this is using the sentinelhub-py and eo-learn Python libraries, both of which provide APIs to do wonders with satellite data. You can read more about the approach on our Sentinel Hub Blog page, but here we will focus simply on removing the clouds.

A while ago we announced the addition of cloud masks to the Sentinel Hub services, which means that you can download pre-computed cloud masks. The masks were produced using the s2cloudless cloud detection algorithm, which was developed in-house and is publicly available.

By specifying a single parameter to the EOTask for downloading, you can already have cloud masks available along with your L1C or L2A data.
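The original snippet is embedded in the post; a minimal sketch of such a download task looks roughly like this (the bands, bounding box, time interval, and resolution below are placeholders, and import paths may differ slightly between eo-learn versions):

```python
import datetime

from sentinelhub import BBox, CRS, DataCollection
from eolearn.core import FeatureType
from eolearn.io import SentinelHubInputTask

# Download L2A RGB bands together with the service's pre-computed
# cloud masks (CLM), cloud probabilities (CLP), and a validity mask.
download_task = SentinelHubInputTask(
    data_collection=DataCollection.SENTINEL2_L2A,
    bands=['B04', 'B03', 'B02'],
    bands_feature=(FeatureType.DATA, 'BANDS'),
    additional_data=[
        (FeatureType.MASK, 'CLM'),                  # pre-computed cloud mask
        (FeatureType.DATA, 'CLP'),                  # cloud probability
        (FeatureType.MASK, 'dataMask', 'IS_DATA'),  # valid-data mask
    ],
    resolution=10,
    maxcc=0.8,  # loose tile-level filter; image-level filtering comes later
    time_difference=datetime.timedelta(hours=2),
)

# Placeholder area and time interval
eopatch = download_task.execute(
    bbox=BBox((4.05, 45.95, 4.15, 46.05), crs=CRS.WGS84),
    time_interval=('2019-01-01', '2019-12-31'),
)
```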

EOTask for downloading satellite data from Sentinel Hub in eo-learn.

Once the data is downloaded, the only thing left to do is to filter out the cloudy images (based on the actual cloud coverage) and create the animation. There are multiple ways of achieving the latter, but one of the most flexible is the ffmpeg tool, which also has a Python API available. You just need to do something like this.
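The snippet itself is embedded in the post; a minimal sketch of the same idea with the ffmpeg-python bindings, assuming the filtered frames were exported as numbered .png files (the filename pattern and encoder settings are assumptions, not the exact code from the post):

```python
import ffmpeg

FRAMES = 'frames/%04d.png'  # assumed filename pattern of the exported frames
FPS = 10

# High-quality H.264 video from the frame sequence
(
    ffmpeg
    .input(FRAMES, framerate=FPS)
    .output('timelapse.mp4', vcodec='libx264', crf=15, pix_fmt='yuv420p')
    .overwrite_output()
    .run()
)

# Colour-optimised GIF: first generate a palette, then apply it
(
    ffmpeg
    .input(FRAMES, framerate=FPS)
    .filter('palettegen')
    .output('palette.png')
    .overwrite_output()
    .run()
)
frames = ffmpeg.input(FRAMES, framerate=FPS)
palette = ffmpeg.input('palette.png')
(
    ffmpeg
    .filter([frames, palette], 'paletteuse')
    .output('timelapse.gif')
    .overwrite_output()
    .run()
)
```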

Code snippet for creating a high-quality timelapse.

The code above was inspired by this article, which nicely explains the inner workings of the ffmpeg library and allows you to create high-quality outputs. The script above produces a video and a colour-optimised .gif animation.

Same time-lapse as above, but obtained with cloud masks from the service.

While some clouds are still present, the final product is improved, and we didn’t break a sweat doing this. The cloud masks from the service should be suitable for most applications out there, but nothing prevents you from calculating your own in eo-learn if need be. The added value is in custom post-processing, or in pushing even further by calculating multi-temporal cloud masks, which are especially useful when uncommon objects get repeatedly misclassified as clouds. By adding these EOTasks to your pipeline, you gain more control over the outcome.
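For example, the image-level filtering mentioned above can be written as a simple predicate over the CLM mask; here is a sketch reusing the feature names from the download task (the 5 % threshold is arbitrary, and the import path can differ between eo-learn versions):

```python
import numpy as np

from eolearn.core import FeatureType
from eolearn.features import SimpleFilterTask

class MaxCloudCoveragePredicate:
    """Keep a frame only if its image-level cloud coverage is below a threshold."""

    def __init__(self, max_cc):
        self.max_cc = max_cc

    def __call__(self, clm):
        # clm is the cloud mask of a single timestamp, shape (height, width, 1)
        return np.mean(clm) <= self.max_cc

# Drop all frames with more than 5 % cloudy pixels
filter_task = SimpleFilterTask((FeatureType.MASK, 'CLM'), MaxCloudCoveragePredicate(max_cc=0.05))
eopatch = filter_task.execute(eopatch)
```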

301: The Bothersome Wiggle

By this point, you’re already good to go, but if you want to level up your time-lapse game, this section is for you. Clouds are the usual nuisance, and in most cases, it’s enough to remove them. Often, though, you will notice a “wiggle” effect, especially when working on smaller areas at high resolutions. What is this effect? Can it be removed?

When dealing with multi-temporal data, minor misalignments occur due to geometric correction errors, atmospheric turbulence, or instrument noise. In most studies, one must spatially align the data by applying co-registration algorithms. With this process, you can — to some degree — fix the misalignments, and obtain a more “stable” result. eo-learn already supports a few co-registration methods, so you can plug the task into your pipeline and run it. Here is the comparison before (left) and after (right) applying the co-registration algorithm.
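eo-learn’s co-registration tasks do this for you; for intuition, here is a sketch of the underlying idea using OpenCV’s ECC algorithm with a translation-only model, aligning every frame to the first one (an illustration of the technique, not eo-learn’s exact implementation):

```python
import cv2
import numpy as np

def coregister(frames):
    """Align each frame to the first one with a translation-only ECC model.

    `frames`: list of single-channel float32 images of the same shape.
    """
    reference = frames[0]
    height, width = reference.shape
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    aligned = [reference]

    for frame in frames[1:]:
        warp = np.eye(2, 3, dtype=np.float32)  # initial guess: identity
        try:
            _, warp = cv2.findTransformECC(reference, frame, warp,
                                           cv2.MOTION_TRANSLATION, criteria, None, 5)
            frame = cv2.warpAffine(frame, warp, (width, height),
                                   flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        except cv2.error:
            pass  # keep the frame as-is if ECC fails to converge
        aligned.append(frame)

    return aligned
```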

A time-lapse of the world’s tallest solar tower in Ashalim, Israel (source) before (left) and after (right) image alignment.
And similarly for the first time-lapse.

I don’t know about you, but when I look at the images on the right, I can finally breathe again.

401: #flex on Twitter

Now we’re getting somewhere, but to really wow someone, you have to aim bigger — and I mean that literally! Time-lapse animations of large areas are especially interesting since they offer a new perspective on our planet. For example, let’s create a time-lapse of the beautiful Okavango Delta in Botswana, Africa.

The first approach would be a straightforward one: increase the bounding box area, drop the resolution down to a realistic value, and start downloading data. Downloading so much data at once is possible using the PREVIEW level, but at this scale, the pre-computed cloud masks are not available. You have to calculate them yourself, which means downloading extra bands.

In such cases, you can’t apply the cloud-coverage filter anymore, because at this scale clouds are much more common. Additionally, you will notice that some frames have no-data parts. This is expected, because the satellite acquires data in strips.

All the available data for the selected area, including clouds. The black parts represent areas outside the satellite’s acquisition range or invalid data due to other reasons.

One solution to this would be to do simple mosaicking over long periods, in the sense of mashing together monthly data into a single, cloudless image. A more advanced approach is performing linear temporal interpolation, where you replace the values in the pixels where clouds were detected with the values estimated from neighbouring cloud-free points in time. You can read more about the process in our second land-cover classification blogpost.
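In eo-learn, the interpolation is a single task; here is a sketch reusing the feature names from the earlier snippets (the exact task name and signature vary between eo-learn versions, and the validity mask is a hypothetical feature combining the data and cloud masks):

```python
from eolearn.core import FeatureType
from eolearn.features import LinearInterpolationTask

# Replace invalid (e.g. cloudy) pixel values with values interpolated from the
# valid neighbours in time, resampled to a uniform 10-day time series.
interpolation_task = LinearInterpolationTask(
    feature=(FeatureType.DATA, 'BANDS'),
    mask_feature=(FeatureType.MASK, 'IS_VALID'),      # hypothetical mask: valid data and no clouds
    resample_range=('2019-01-01', '2019-12-31', 10),  # start, end, step in days
)
eopatch = interpolation_task.execute(eopatch)
```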

A visual representation of temporal interpolation. The vertical axis represents the passage of time, and the missing parts on the left represent invalid data, such as clouds. The result is the gap-free temporal data stack on the right.

After interpolation we are left with the best estimate of what the area would look like if there were no clouds throughout the entire time interval.

Time-lapse of the Okavango Delta after cleaning out the clouds using linear temporal interpolation.

501: Divide and conquer

The above approach can only take you so far, as you will soon hit the absolute area-size limit for a single request. It’s better to split the full area of interest into smaller parts, process them separately, and merge them at the very end. Such a process is much easier to control, lighter on hardware resources, and enables processing at finer resolutions. And don’t forget, that last fact also brings back the ability to use the pre-computed cloud masks.

eo-learn already supports this and enables applying the same pipeline to all the chunks in parallel. The only extra work is stitching the images back together, but the result is well worth it. The best way to do this is to export .tiff files (because they hold geolocation information), spatially merge them, and import them back as EOPatches to follow the same animation procedure as before.
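A sketch of the merging step with GDAL’s Python bindings (the file paths are placeholders; exporting the EOPatches to .tiff and re-importing them is handled by eo-learn’s tiff input/output tasks):

```python
import glob

from osgeo import gdal

# Collect the per-chunk GeoTIFFs exported from the EOPatches
tiff_paths = sorted(glob.glob('tiffs/patch_*.tiff'))

# Build a virtual mosaic and materialise it as a single GeoTIFF;
# GDAL positions each chunk correctly thanks to the embedded geolocation.
vrt = gdal.BuildVRT('mosaic.vrt', tiff_paths)
gdal.Translate('mosaic.tiff', vrt, creationOptions=['COMPRESS=DEFLATE'])
vrt = None  # close and flush the dataset
```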

A snippet using the gdal software to export and merge a bunch of .tiff files and import them back as EOPatches.

And the final result…

Same time-lapse as above, obtained with the split approach at a higher resolution. A temporal moving average (MA) filter was applied to smooth out high-frequency changes.

At this scale, you will probably see some “striping” effects caused by the non-uniformity of the instrument’s spectral response, meaning that the value obtained for the same point in space might vary when observed from different angles. Fixing this requires either substantial knowledge about satellite data rectification or tedious work in the post-processing step, so it might be easier to learn to live with it.

502: Batch processing

An even better divide-and-conquer approach comes with the latest addition to Sentinel Hub services — Batch Processing. The main point of this functionality is to bring the ability to process large volumes of satellite data to the consumer. Do you want to run a machine learning (ML) process? No problem. And you want to do it at a large scale? All good. Can all of this be achieved affordably? Yes!

It almost sounds too good to be true.

Here’s a crazy idea. What if you moved all the data-processing steps (data collection, cloud masking, temporal interpolation, etc.) into the SH evalscript and had multi-temporal, analysis-ready data in your hands as soon as the download finishes? Sounds cool? Well, good news: this was already done in our batch approach to land-use/land-cover classification, so we can repeat the workflow in our attempt to make a time-lapse of a large area.

The output of the evalscript is a set of multi-temporal, single-band .tiff images for each tile in the batch grid. To create the time-lapse, we need to reshape and merge them into single-timestamp, multi-band (RGB) files. Here’s another code snippet for doing this.
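A minimal sketch of that reshaping step with rasterio (the filenames are placeholders; re-projecting the outputs to a common CRS can afterwards be done with gdal.Warp):

```python
import numpy as np
import rasterio

# Hypothetical inputs: one multi-temporal GeoTIFF per spectral band,
# where each raster band inside the file holds one timestamp.
with rasterio.open('tile_B04.tiff') as red_src, \
     rasterio.open('tile_B03.tiff') as green_src, \
     rasterio.open('tile_B02.tiff') as blue_src:

    profile = red_src.profile
    profile.update(count=3)  # output files have 3 bands: R, G, B

    for timestamp in range(1, red_src.count + 1):
        rgb = np.stack([src.read(timestamp) for src in (red_src, green_src, blue_src)])
        with rasterio.open(f'rgb_{timestamp:03d}.tiff', 'w', **profile) as dst:
            dst.write(rgb)
```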

Code snippet for creating re-projected single-timestamp RGB .tiff files from multi-timestamp single-band .tiff files.

The described approach opens the door to just about anything.
You can start small (click on the image for the full-resolution video),

Yearly true-color timelapse of Madagascar for 2019.

or take it to the next level.

Yearly true-color timelapse of Africa for 2019.

Do you see where this is going? I’m thinking it, and I know you’re thinking it, so let’s stop pretending like it’s not going to happen and show it already.

I present to you our world.
(except the far north and the far south)

Yearly true-color timelapse of the world for 2019.

Decades of science and technology to put the satellites into orbit. Years of setting up the infrastructure to distribute the data and build the community. Constant image acquisition at the petabyte scale, at high resolution and with frequent revisits, over more than a year. Weeks of experimenting with the data and adapting the process. Days of writing up the results and conclusions. All that work merged into a 2-second time-lapse that fits onto your smartphone.

And what does it communicate?
Well, the math is simple: if one picture says a thousand words, this time-lapse should ramble on for quite a while. I bet those words touch on the life seen in the moving band of vegetation across Africa, on the orientation of Earth in space hinted at by the amount of snow and ice in the north, or even on the abundance of iron in Australia’s deserts, visible in their reddish tone…

Honestly, all of those words might not be enough to describe the beauty of our planet, but they should be more than enough to express that we should not take it for granted.

Eye candy

The animations shown in this blog post are available on our Flickr channel, but let’s have some more fun with the visualisations. For all y’all Antarctica fans out there, I used the Blue Marble image as a background where there is no data.

Yearly true color time-lapse of the world, using 2019 Sentinel-2 L2A data and the Blue Marble image as a static background on excluded areas.


Now slap that flat Earth onto a ball and spin it!

Code based on the excellent tutorial from J. Castillo (source).
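If you want to experiment yourself, a minimal sketch of the projection step with cartopy might look like this (the mosaic filename is a placeholder; see the linked tutorial for the full animation code):

```python
import imageio
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Placeholder input: an equirectangular (flat) world mosaic
world = imageio.imread('world_mosaic.png')

# Render the globe at successive rotation angles; the saved frames can be
# stitched into a video or GIF with ffmpeg, as shown earlier.
for i, lon in enumerate(range(0, 360, 6)):
    fig = plt.figure(figsize=(6, 6))
    ax = plt.axes(projection=ccrs.Orthographic(central_longitude=lon, central_latitude=10))
    ax.imshow(world, transform=ccrs.PlateCarree(), extent=(-180, 180, -90, 90))
    ax.set_global()
    fig.savefig(f'globe_{i:03d}.png', dpi=150, bbox_inches='tight')
    plt.close(fig)
```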

Try it for yourself at https://apps.sentinel-hub.com/digital_twin_sandbox.

We brought the Earth closer to you and did our part. Now it’s up to you to use this data and impress us with your use cases.

We’re really excited to see what you come up with.

The project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement 101004112 (Global Earth Monitor project).


Matic Lubej
Sentinel Hub Blog

Data Scientist from Slovenia with a background in particle physics.