How to Make the Perfect Time-Lapse of the Earth

A detailed guide covering various examples on making animations from satellite imagery

Matic Lubej
Feb 2 · 10 min read
Agriculture fields of the southern Limpopo region, South Africa (source).

No matter what you do in life, there is one thing connecting us all — communication. To understand each other, we must know how to convey a message to the person (or an audience) in front of us, which points to emphasise, and which details to leave out. This is especially important when trying to influence important decisions in science, politics, etc.

One of the most efficient ways of interpreting complex scientific data is via visual media, such as graphs, charts, infographics, or — the focus of this post — animations.

Simple graph, simple message: more dogs == more awesome.

This post will guide you through making time-lapse animations from satellite images obtained via Sentinel Hub services, which put many different data sources at your disposal. The focus will mostly be on Python and GIS-related tools.

A Jupyter notebook is also available to go hand-in-hand with this blog post, in case you want to follow along in detail or try this approach yourself at some later point.

Let’s begin!

101: Fast and Cheap

Surf over to the Sentinel Hub EO Browser, set the location and time interval of interest, select the data source, and that's it! Then click the time-lapse button in the sidebar, optionally set the cloud-coverage filter or manually remove some scenes, and download the animation.

EO Browser time-lapse user interface (link).

Here’s what you get.

Construction of the Samuel De Champlain Bridge in Montreal (source).

The time-lapse is good, but not perfect. You see what is going on, but there are still some clouds in the image. The cloud-coverage slider in the time-lapse UI is not ideal, since the value represents the amount of clouds at the tile level, not at the image level, which means that you might be throwing useful data out the window. In this case, we luckily still have plenty of images left after applying a tight threshold of 10% on the tile-level cloud coverage, but this might not always be the case.

201: Removing the clouds

A while ago we announced the addition of cloud masks to the Sentinel Hub services, which means that you can download pre-computed cloud masks. The masks were produced using the s2cloudless cloud detection algorithm, which was developed in-house and is publicly available.

By specifying a single parameter to the EOTask for downloading, you can already have cloud masks available along with your L1C or L2A data.

EOTask for downloading satellite data from Sentinel Hub in eo-learn.
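With the masks in hand, computing the actual (image-level) cloud coverage of each frame boils down to a few lines of numpy. A minimal sketch, assuming the imagery and the `CLM` cloud masks have already been pulled out of the EOPatch as plain arrays (the names, shapes, and the 10% threshold here are illustrative):

```python
import numpy as np

def filter_cloudy_frames(frames, cloud_masks, max_cloud_fraction=0.1):
    """Keep only the frames whose image-level cloud fraction is below
    the threshold.

    frames:      array of shape (t, h, w, bands)
    cloud_masks: boolean array of shape (t, h, w), True where cloudy
    """
    # Fraction of cloudy pixels in each frame
    cloud_fractions = cloud_masks.reshape(cloud_masks.shape[0], -1).mean(axis=1)
    keep = cloud_fractions < max_cloud_fraction
    return frames[keep], keep

# Tiny example: three frames, the middle one fully cloudy
frames = np.zeros((3, 4, 4, 3))
masks = np.zeros((3, 4, 4), dtype=bool)
masks[1] = True
clear, keep = filter_cloudy_frames(frames, masks)
```

Unlike the tile-level slider in the EO Browser, this filter counts cloudy pixels only inside your area of interest, so no usable frame gets thrown away by accident.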

Once the data is downloaded, the only thing left to do is to filter out the cloudy images (based on the actual cloud coverage) and create the animation. There are multiple ways of achieving the latter, but one of the most flexible is the ffmpeg software, which also has a Python API available. You just need to do something like this.

Code snippet for creating a high-quality timelapse.

The code above was inspired by this article, which nicely explains the inner workings of the ffmpeg library and allows you to create high-quality outputs. The script above produces a video and a colour-optimised .gif animation.
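As a rough sketch of the idea (not the exact script from the article), the two ffmpeg invocations can be assembled like this: one pass for the video, and a two-pass run for the palette-optimised GIF. The frame pattern, frame rate, and output names are placeholders:

```python
import subprocess

def video_command(pattern, out_path, fps=10):
    """ffmpeg command for a high-quality H.264 video from numbered frames."""
    return ["ffmpeg", "-y", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", "-crf", "15", out_path]

def gif_commands(pattern, out_path, fps=10):
    """Two-pass, colour-optimised GIF: first derive a 256-colour palette
    from all frames, then encode the GIF using that palette."""
    palette = "palette.png"
    first = ["ffmpeg", "-y", "-i", pattern, "-vf", "palettegen", palette]
    second = ["ffmpeg", "-y", "-framerate", str(fps), "-i", pattern,
              "-i", palette, "-lavfi", "paletteuse", out_path]
    return [first, second]

# for cmd in [video_command("frames/%04d.png", "timelapse.mp4"),
#             *gif_commands("frames/%04d.png", "timelapse.gif")]:
#     subprocess.run(cmd, check=True)  # requires the ffmpeg binary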

Same time-lapse as above, but obtained with cloud masks from the service.

While some clouds are still present, the final product is improved, and we didn't break a sweat doing this. The cloud masks from the service should be suitable for most applications out there, but nothing prevents you from calculating your own in eo-learn if need be. The added value lies in custom post-processing, or in pushing even further and calculating multi-temporal cloud masks, which are especially useful when uncommon objects repeatedly get falsely detected as clouds. By adding these EOTasks to your pipeline, you gain more control over the outcome.

301: The Bothersome Wiggle

When dealing with multi-temporal data, minor misalignments occur due to geometric correction errors, atmospheric turbulence, or instrument noise. In most studies, one must spatially align the data by applying co-registration algorithms. With this process, you can — to some degree — fix the misalignments, and obtain a more “stable” result. eo-learn already supports a few co-registration methods, so you can plug the task into your pipeline and run it. Here is the comparison before (left) and after (right) applying the co-registration algorithm.
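To see the mechanics behind such an alignment, a translation-only co-registration can be sketched with plain numpy using FFT-based phase correlation. This is just an illustration under the assumption of a pure circular shift; eo-learn's registration tasks handle the general case:

```python
import numpy as np

def estimate_shift(reference, image):
    """Estimate the (dy, dx) translation between two single-band images
    via phase correlation: the normalised cross-power spectrum of a
    shifted pair has a sharp peak at the shift offset."""
    f_ref = np.fft.fft2(reference)
    f_img = np.fft.fft2(image)
    cross_power = f_ref * np.conj(f_img)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map large positive offsets back to small negative ones
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def register(reference, image):
    """Shift `image` so that it aligns with `reference`."""
    dy, dx = estimate_shift(reference, image)
    return np.roll(image, (dy, dx), axis=(0, 1))
```

In a real pipeline you would estimate the shift on one band (or an edge image) per timestamp and apply the same correction to all bands of that frame.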

A time lapse of the world’s tallest solar tower in Ashalim, Israel (source) before (left) and after (right) image alignment.
And similarly for the first time-lapse.

I don’t know about you, but when I look at the images on the right, I can finally breathe again.

401: #flex on Twitter

So far we have dealt with relatively small areas. What if we want a time-lapse of a whole country, or even a continent? The first approach is the straightforward one: increase the bounding box area, drop the resolution down to a realistic value, and start downloading data. Downloading so much data at once is possible using the PREVIEW level, but at this scale, the pre-computed cloud masks are not available. You have to calculate them yourself, which means downloading extra bands.

In such cases, you can’t apply the cloud-coverage filter anymore, because at this scale some clouds are almost always present in the frame. Additionally, you will notice that the downloaded area has frames with no-data parts. This is expected, because the satellite acquires data in strips.

All the available data for the selected area, including clouds. The black parts represent areas outside of the satellite image acquisition range or invalid data due to some other reason.

One solution to this would be to do simple mosaicking over long periods, in the sense of mashing together monthly data into a single, cloudless image. A more advanced approach is performing linear temporal interpolation, where you replace the values in the pixels where clouds were detected with the values estimated from neighbouring cloud-free points in time. You can read more about the process in our second land-cover classification blogpost.
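A minimal single-band sketch of such a temporal interpolation, using numpy. A real pipeline would vectorise this over pixels and bands, and use actual acquisition dates as the time coordinates:

```python
import numpy as np

def interpolate_clouds(stack, cloud_masks, timestamps=None):
    """Replace cloudy pixel values with linear interpolation along the
    time axis, using the cloud-free observations of the same pixel.

    stack:       array of shape (t, h, w), one band for brevity
    cloud_masks: boolean array of shape (t, h, w), True where cloudy
    timestamps:  acquisition times as floats; defaults to frame indices
    """
    t, h, w = stack.shape
    times = np.arange(t, dtype=float) if timestamps is None else np.asarray(timestamps, float)
    result = stack.astype(float).copy()
    for i in range(h):
        for j in range(w):
            valid = ~cloud_masks[:, i, j]
            if valid.all() or not valid.any():
                continue  # nothing to fix, or nothing to interpolate from
            # np.interp holds the nearest valid value at the edges
            result[~valid, i, j] = np.interp(times[~valid], times[valid], stack[valid, i, j])
    return result
```

Each cloudy observation is replaced by the value lying on the straight line between the nearest cloud-free observations before and after it, which is exactly the gap-filling shown in the figure above.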

A visual representation of temporal interpolation. The vertical axis represents the passage of time, and the missing parts on the left represent invalid data, such as clouds. The result is the gap-free temporal data stack on the right.

After interpolation we are left with the best estimate of what the area would look like if there were no clouds throughout the entire time interval.

Time-lapse of the Okavango delta after cleaning out the clouds using linear temporal interpolation.

501: Divide and conquer

Another way to handle large areas is to split the area of interest into smaller chunks. eo-learn already supports this and enables applying the same pipeline to all the chunks in parallel. The only extra work you need to do is stitching the images back together, but the result is well worth it. The best way to do this is to obtain .tiff files (because they hold information about the geolocation), spatially merge them, and import them back as EOPatches to follow the same animation procedure as before.

A piece of gdal-based code used to export and merge a bunch of .tiff files and import them back as EOPatches.
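As an illustration of the merging step, and assuming GDAL's command-line tools are installed and the chunk files carry their geolocation, the `gdal_merge.py` call can be assembled like this (file names are placeholders):

```python
import subprocess

def merge_tiffs(tiff_paths, out_path, no_data=0):
    """Build a gdal_merge.py call that mosaics the per-chunk GeoTIFFs
    into one file, placing each chunk by its embedded geolocation."""
    return ["gdal_merge.py", "-o", out_path,
            "-n", str(no_data),          # treat this value as no-data
            "-co", "COMPRESS=DEFLATE"] + list(tiff_paths)

cmd = merge_tiffs(["patch_0.tiff", "patch_1.tiff"], "merged.tiff")
# subprocess.run(cmd, check=True)  # requires GDAL to be installed
```

The `-n` flag matters here: without it, the no-data borders of one strip would overwrite valid pixels of its neighbour.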

And the final result…

Same time-lapse as above, obtained with the split approach at a higher resolution. A temporal moving average (MA) filter was applied to smooth out high-frequency changes.
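Such a centred temporal moving average can be sketched in a few lines of numpy. The window size is an assumption; the post does not state the one used:

```python
import numpy as np

def temporal_moving_average(stack, window=3):
    """Smooth a (t, h, w, bands) image stack along the time axis with a
    centred moving average, shortening the series by window - 1 frames."""
    kernel = np.ones(window) / window
    # Convolve each pixel's time series; 'valid' avoids edge artefacts
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="valid"), 0, stack)
```

Averaging consecutive frames suppresses high-frequency flicker (residual clouds, haze, small radiometric jumps) at the cost of slightly blurring fast changes in time.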

At this scale you will probably see some “striping” effects caused by the non-uniformity of the instrument’s spectral response, meaning that the obtained value for the same point in space might vary if observed from different angles. Fixing this requires either substantial knowledge about satellite data rectification or tedious work in the post-processing step, so it might be easier to learn to live with it.

502: Batch processing

Sentinel Hub also offers batch processing, where the service itself processes a large area over a predefined tiling grid and delivers the results straight to your object storage. It almost sounds too good to be true.

Here’s a crazy idea. What if you moved all the data-processing steps (data collection, cloud masking, temporal interpolation, etc.) into the SH evalscript and had multi-temporal, analysis-ready data in your hands as soon as the download finishes? Sounds cool? Well, good news, this was already done in our batch approach to land-use/land-cover classification, so we can repeat the workflow in our attempt to make a time-lapse of a large area.

The outputs of the evalscript are multi-temporal, single-band .tiff images, one for each tile in the batch grid. To create the time-lapse, we need to reshape and merge them into single-timestamp, multi-band (RGB) files. Here’s another code snippet for doing this.

Code snippet for creating re-projected single-timestamp RGB .tiff files from multi-timestamp single-band .tiff files.
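A minimal numpy sketch of the regrouping step, ignoring the re-projection and the GeoTIFF I/O, which GDAL or a similar library would handle; the array names are illustrative:

```python
import numpy as np

def to_rgb_frames(red_stack, green_stack, blue_stack):
    """Regroup three multi-temporal single-band stacks, each of shape
    (t, h, w), into a list of t single-timestamp RGB images (h, w, 3)."""
    rgb = np.stack([red_stack, green_stack, blue_stack], axis=-1)  # (t, h, w, 3)
    return [rgb[i] for i in range(rgb.shape[0])]
```

Each element of the returned list is one animation frame, ready to be written out as a single-timestamp, multi-band file.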

The described approach opens the door to do just about anything.
You can start small (click on the image for the full resolution video),

Yearly true-color timelapse of Madagascar for 2019.

or take it to the next level.

Yearly true-color timelapse of Africa for 2019.

Do you see where this is going? I’m thinking it, and I know you’re thinking it, so let’s stop pretending like it’s not going to happen and show it already.

I present to you our world.
(except the far north and the far south)

Yearly true-color timelapse of the world for 2019.

Decades of science and technology to put the satellite into orbit. Years of setting up the infrastructure to distribute the data and build the community. Constant acquisition of imagery at the petabyte scale, at high resolution and with frequent revisits, over more than one year. Weeks to experiment with the data and adapt the process. Days to write up the results and the conclusions. All that work merged into a 2-second time-lapse that fits onto your smartphone.

And what does it communicate?
Well, the math is simple. If one picture is worth a thousand words, the time-lapse should ramble on for quite a while. I bet those words touch on life, seen in the moving band of vegetation across Africa; on the orientation of Earth in space, hinted at by the amount of snow and ice in the north; or even on the abundance of iron in Australia’s deserts, betrayed by their reddish tone…

Honestly, all of those words might not be enough to describe the beauty of our planet, but it should be more than enough to express that we should not take Earth’s beauty for granted.

Eye candy

Yearly true color time-lapse of the world, using 2019 Sentinel-2 L2A data and the Blue Marble image as a static background on excluded areas.


Now slap that flat Earth onto a ball and spin it!

Code based on the excellent tutorial from J. Castillo (source).

Try it for yourself at https://apps.sentinel-hub.com/digital_twin_sandbox.

We brought the Earth closer to you and did our part. Now it’s up to you to use this data and impress us with your use cases.

We’re really excited to see what you come up with.

Sentinel Hub Blog

Stories from the next generation satellite imagery platform
