Satish Dhawan Space Center, India ©2019, Planet Labs Inc. All Rights Reserved. (Courtesy: Robert Simmon’s awesome article on Launch Sites)

The Scoop on Planet Basemaps

Samapriya Roy
Planet Stories
11 min read · Jan 9, 2020


In conversation with Joe Kington, Senior Geospatial Engineer at Planet

Whether or not you’ve heard the term ‘basemap,’ you likely use one every day without a second thought. From Google Maps to Pokemon Go, many consumer applications leverage basemaps to help users understand where things are located in the world.

Still not sure? Take a look at Google Earth right now (the pro version is now free), and use the image slider to zoom into an area of interest. A seamless backdrop of images with color correction applied will load up instantly. While the imagery can come from different sources and the recency of the image depends on the location, the basemap helps users get geographic and contextual information to make decisions.

Google Earth Pro (with historical imagery slider)

Google Earth Engine and the Carnegie Mellon University Create Lab collaborated to generate a global video called Google Earth Timelapse. Timelapse is a zoomable video built from cloud-free annual composites spanning 1984 to 2018, made explorable by the Create Lab’s Time Machine library. Think of it as a time series of about 34 basemaps, one for each year, highlighting major changes across the planet using Landsat, the longest-running Earth observation satellite program.

Columbia Glacier Retreat, Alaska Google Time Lapse (Courtesy: Google Earth Engine & CMU Create Lab)

Around the same time that people started to experiment with deep time stacks of imagery, the obvious question became: What would you see if you could observe the whole planet once every month in a cloud-free basemap and at a higher spatial resolution than Landsat?

For context, the Timelapse project drew on about 15 million satellite images to create its annual cloud-free composites. With the capability of generating monthly snapshots, Planet can create about 12 snapshots of the Earth’s landmass every year.

Planet Monthly Basemaps 2019 (Courtesy and Copyright: Planet Labs)

This interannual variability, paired with the even finer temporal resolution made possible by Planet’s near-daily global coverage of the Earth, provides a unique opportunity to ask questions at scales that once seemed improbable. Think about going even further, from monthly to weekly basemaps.

Planet basemaps at weekly cadences (Courtesy and Copyright: Planet Labs)

So what exactly are Planet Basemaps? How are they created? What insights can Planet Basemaps provide that individual scenes can’t? How do you choose which type of Basemap you need?

To get more in-depth, we needed an expert to answer some questions about Planet Basemaps. Today we’ve brought in Joe Kington, Senior Geospatial Engineer on the Pipeline team at Planet, to do just that.

Hi Joe, thank you for joining us in a conversation about Planet Basemaps. Let’s get started.

What is your background and what is your current day to day role?

So I am a geologist by background, and I have worked on marine geophysics and large-scale regional tectonics. Before this, I worked in the oil and gas industry on regional exploration and seismic interpretation, along with software development. It turns out that in remote sensing, time-stack data is pretty analogous to 3D seismic data: instead of a 3D volume, you have a deep time series, so the transition was quite easy to make.

Coming into Planet I was doing similar things, but with satellite data instead of seismic. In my current role, I work with the Data Pipeline team, which handles processing what the satellite captures into something a customer can use.

That seems very important. Can you give us an example of what the Data Pipeline team does?

We’re responsible for all of the work that happens after data is downloaded from a satellite. The satellite captures a raw sensor image (L0 image) that’s not usable without additional work. You have to do a lot of processing to turn it into what you can see in our catalog. For example, images you see in our catalog are made up of multiple L0 images that are pre-processed, composited together, orthorectified, and then projected into a standard coordinate system.

We also have to calculate metadata, such as cloud cover, and distribute it in a format that the customer can query. All of this has to be done around the clock for millions of images per day. Thankfully, we have a lot of support and an excellent team. I only play a very small role in all of this.

I am assuming that covers your larger role and the role the team plays. Could you tell us a little more about how basemaps come into the picture?

Most of my day to day work deals with Planet Basemaps. What we’re trying to do is convert individual images into a form that’s consistent and ready-to-go for really large spatial scales. We try to create one complete and seamless picture of the Earth at consistent time intervals from the imagery that Planet collects. We also handle a lot of requests for specific regions and timeframes or for non-standard data processing.

Basemaps and Terms Used

I thought the next useful step, Joe, would be to run through some quick FAQs: the questions we generally get asked, and how they shape the pipelines and methods behind basemap datasets.

So sometimes we come across multiple names for the same things. Our basic understanding as a user is that mosaics and basemaps for most cases are the same. Is that correct?

Yes, that is correct. We call the product Basemaps, but our API delivers it through an endpoint called mosaics.
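For readers who want to see this in practice, here is a minimal sketch of listing basemaps through that endpoint with Python’s requests library. The URL and the API-key-as-username auth pattern follow Planet’s public documentation, but check the current docs for the exact parameters and response fields:

```python
import os

import requests
from requests.auth import HTTPBasicAuth

# Planet's Basemaps API exposes basemap products under "mosaics".
API_URL = "https://api.planet.com/basemaps/v1/mosaics"

# The API key goes in as the username for HTTP Basic auth.
auth = HTTPBasicAuth(os.environ["PL_API_KEY"], "")

response = requests.get(API_URL, auth=auth)
response.raise_for_status()

# Each mosaic record carries, among other fields, a name and an id.
for mosaic in response.json()["mosaics"]:
    print(mosaic["name"], mosaic["id"])
```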

That’s good to know. Those on the downloading side of things often deal with “quads.” Apart from the terms basemaps and mosaics, can you describe quads?

Quads are our way of splitting up the whole world. We cannot deal with processing and delivering the whole world as a single large image. For example, if you wanted to download all of the US for a single mosaic it would take up several TB. It’s simply impractical to work with large areas as a single file. The quads are 4096 by 4096 pixel chunks, which are easier to download. When you download a mosaic, quads are the squared-up pieces of data delivered to you.
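Building on the listing sketch above, quads for a given mosaic can be fetched by bounding box. A hedged sketch: the quads sub-endpoint and bbox parameter follow Planet’s public Basemaps API documentation, and the mosaic id here is a hypothetical placeholder:

```python
import os

import requests
from requests.auth import HTTPBasicAuth

auth = HTTPBasicAuth(os.environ["PL_API_KEY"], "")

# Hypothetical placeholder; take a real id from the mosaic listing.
mosaic_id = "your-mosaic-id"
url = f"https://api.planet.com/basemaps/v1/mosaics/{mosaic_id}/quads"

# Quads intersecting a lon/lat bounding box (roughly central London).
params = {"bbox": "-0.2,51.4,0.0,51.6"}

response = requests.get(url, auth=auth, params=params)
response.raise_for_status()

# Each quad item includes a download link for its 4096 x 4096 GeoTIFF.
for quad in response.json()["items"]:
    print(quad["id"], quad["_links"]["download"])
```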

Basemap quads example (Courtesy: Planet Developer Center)

Types of Basemaps and Use

Great, thank you for getting that out of the way. Let me follow up by unpacking basemaps for the audience.

We seem to be producing visual basemaps, analytic basemaps, and normalized analytic basemaps. For those who are still getting acquainted with our types of basemaps, can you go a bit in-depth about what these different kinds of basemaps mean?

Visual basemaps are generated globally once for every month of the year and can be produced at varying frequencies if needed. They’re sourced from PlanetScope and RapidEye imagery and are 3-band, 8-bit imagery that’s processed to be seamless and very visually appealing. However, we don’t preserve any radiometric properties of the images — visual basemaps function as an image to look at and inspect.

Surface reflectance basemaps are 4-band, 16-bit imagery corrected to surface reflectance. The best scenes for the time interval are selected, combined, and cut to our global tile grid; these mosaics contain the NIR band in addition to the RGB bands.

For our normalized surface reflectance basemaps, two additional processing steps are applied to the PlanetScope surface reflectance data discussed earlier:

  • Normalization, to reduce scene-to-scene variability.
  • Seamline removal, to minimize visible scene boundaries.

The combination of normalization and seamline removal produces a nearly seamless mosaic that still approximately honors the reflectance coefficients of the input data.

Surface Reflectance/Analytic Basemaps, Normalized Surface Reflectance/Analytic Basemaps, and Visual Basemaps, Philippines

So for the visual mosaics, it seems we apply some sort of color corrections whereas this is not the case for analytic and normalized analytic. Can you provide some details here?

We apply color correction to our visual basemaps, and it might be helpful for users to understand that we do not use any histogram-based methods to do this. Conceptually, it is as if each scene is color corrected individually so that its internal color relationships are preserved. We have an internal machine learning model that generates a unique color curve for each scene, matching the scene to an a priori color target (these targets, derived from sources such as MODIS and Landsat, vary monthly or seasonally). With normalization, you are reducing inter-image variability, and all of these processes are followed by scene-edge removal, so you get these smooth-looking mosaics.
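Planet’s actual model is internal and machine-learning based, but the core idea of fitting a per-scene, per-band color curve toward a reference target (rather than a histogram transform) can be illustrated with a toy sketch. Everything here is fabricated for illustration: random arrays stand in for one scene band and a co-registered coarse reference such as a MODIS-derived color target:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: one band of a scene, and a co-registered reference
# band (e.g., a coarse MODIS-derived color target resampled to the
# scene grid). Both are fabricated for illustration.
scene = rng.uniform(0.0, 0.3, size=(512, 512))
reference = 0.8 * scene + 0.05 + rng.normal(0.0, 0.01, size=scene.shape)

# Fit one smooth curve per band mapping scene values toward the target.
# A low-order polynomial avoids the per-bin artifacts of histogram
# matching while still nudging the scene toward the reference colors.
curve = np.poly1d(np.polyfit(scene.ravel(), reference.ravel(), deg=2))
corrected = curve(scene)

print(f"scene mean: {scene.mean():.3f}, corrected: {corrected.mean():.3f}, "
      f"target: {reference.mean():.3f}")
```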

Finally, are there some rules of thumb about what kinds of mosaics are usable for what kinds of applications? For example, are visual mosaics better for machine learning models, versus normalized analytic for more spectrally dependent techniques?

So it boils down to whether you want to use the near-infrared band. For visual color balancing, we sacrifice absolute radiometric information to produce more visually consistent output, whereas the Normalized Surface Reflectance basemaps preserve radiometric values.
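In other words, a spectrally dependent calculation such as NDVI belongs on the surface reflectance products, not the visual ones. A minimal sketch with rasterio, assuming a hypothetical downloaded quad and the usual PlanetScope band order of blue, green, red, NIR; confirm the band layout against the product documentation:

```python
import numpy as np
import rasterio

# Hypothetical downloaded normalized surface reflectance quad.
with rasterio.open("quad.tif") as src:
    # Assumed band order: 1=blue, 2=green, 3=red, 4=NIR (check the docs).
    red = src.read(3).astype("float64")
    nir = src.read(4).astype("float64")

# NDVI = (NIR - red) / (NIR + red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)
print("NDVI range:", ndvi.min(), "to", ndvi.max())
```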

Let me ask a few of the most commonly asked questions we get about basemaps; I think this will give our readers some relevant insights.

Sounds good.

A question that frequently comes up: our imagery is supplied to users at 3 m resolution, but our mosaic quads seem to be at 4.77 m, or something that varies. Why is that?

The 4.77 m spatial resolution is not really 4.77 m everywhere; it varies with latitude. It is only 4.77 m at the equator; everywhere else a cell corresponds to a smaller ground distance. For example, if you download a quad over London, the cell size would be 4.77 units in the quad’s Web Mercator projection. At that latitude, however, a 4.77 unit distance corresponds to roughly 3.0 meters on the ground, or in a local projection such as UTM. In essence, the quad cell size you see doesn’t matter much, because the effective resolution is determined once the data is reprojected on the user’s side.
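The arithmetic behind this is straightforward: the ground distance covered by a Web Mercator cell shrinks by the cosine of latitude, and 4.77 m is the equatorial cell size at Web Mercator zoom level 15. A quick check for London:

```python
import math

# Web Mercator cell size at the equator for zoom level 15:
# earth circumference / (256 px per tile * 2**15 tiles) ~= 4.777 m
equator_cell = 2 * math.pi * 6378137 / (256 * 2**15)

# Effective ground distance of one cell at London's latitude (~51.5 N).
ground_cell = equator_cell * math.cos(math.radians(51.5))

print(f"cell size at the equator: {equator_cell:.3f} m")   # ~4.777
print(f"ground distance in London: {ground_cell:.3f} m")   # ~2.97, i.e. ~3 m
```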

So you are not using resampled data but raw data when generating mosaics?

It depends on the type of mosaic. For the Surface Reflectance Basemaps, we start from images that have already been processed, resampled, and reprojected, whereas for the visual basemaps we often stick to the L0 data along with Rational Polynomial Coefficients (often used to georectify the data) and perform our own calculations from there to produce the mosaics.

We are not doing any pixel operations in the stack while producing basemaps, are we? If not, why is that?

That’s a common technique for low spatial resolution data: make a basemap from the mean or some percentile of all pixels at a location. However, we can’t use it for a couple of reasons.

There are so many scenes at each location that we can’t easily process them all. Instead, we rank the scenes and only use the sharpest and most cloud-free scenes for each area.

This approach also produces high-quality results. Because of the resolution we’re working at, things like building and tree shadows are easily visible. These move from image to image and result in “halos” around buildings and similar features with pixel-stack-based mosaicking methods. Similarly, the positional uncertainty in our data is larger than our pixels, so pixel-stack-based methods often result in doubled features.
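To make the contrast concrete, here is a toy numpy sketch of the two approaches: a per-pixel median across a stack versus filling each pixel from the best-ranked valid scene. The stack, the invalid-pixel pattern, and the quality scores are all made up; Planet’s real ranking weighs sharpness and cloud cover, among other factors:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stack of 5 co-registered "scenes" over the same 64 x 64 area,
# with NaN marking invalid (e.g., cloudy or missing) pixels.
stack = rng.uniform(0, 1, size=(5, 64, 64))
stack[rng.uniform(size=stack.shape) < 0.1] = np.nan

# Pixel-stack approach: per-pixel median. With high-resolution inputs,
# moving shadows and sub-pixel misregistration blend together, which is
# what causes halos and doubled features.
median_mosaic = np.nanmedian(stack, axis=0)

# Scene-ranked approach: order the scenes by a quality score (made up
# here) and take each pixel from the best-ranked scene with valid data.
scores = np.array([0.9, 0.4, 0.7, 0.2, 0.8])
ranked_mosaic = np.full((64, 64), np.nan)
for idx in np.argsort(scores)[::-1]:          # best scene first
    fill = np.isnan(ranked_mosaic) & ~np.isnan(stack[idx])
    ranked_mosaic[fill] = stack[idx][fill]

print("gaps (median):", int(np.isnan(median_mosaic).sum()))
print("gaps (ranked):", int(np.isnan(ranked_mosaic).sum()))
```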

As a follow-up to that: if pixel-based operations are not performed, do you still perform cloud masking, and is that at an image level rather than a pixel level at this point?

Surface Reflectance Basemaps will have cloud masks turned on. Anything tagged as cloud in the UDM2 or UDM assets lets us see whether an image has clouds, remove the cloudy image from the stack of images used for the mosaic, and move on to the next image in the stack.
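As a sketch of what that image-level decision can look like on the user side, here is a hedged example that reads a hypothetical UDM2 file with rasterio. It assumes the documented UDM2 layout in which band 1 flags clear pixels and band 6 flags clouds (verify against the current specification), and the rejection threshold is made up:

```python
import rasterio

# Hypothetical UDM2 asset downloaded alongside a PlanetScope scene.
with rasterio.open("scene_udm2.tif") as src:
    clear = src.read(1)   # assumed layout: band 1 = clear (1 where clear)
    cloud = src.read(6)   # assumed layout: band 6 = cloud (1 where cloudy)

print(f"clear fraction: {clear.mean():.1%}")
print(f"cloud fraction: {cloud.mean():.1%}")

# Image-level decision, mirroring the basemap pipeline's approach: drop
# the whole scene from the stack if it is too cloudy, rather than
# blending cloud-free pixels from many scenes.
MAX_CLOUD_FRACTION = 0.05  # made-up threshold for illustration
if cloud.mean() > MAX_CLOUD_FRACTION:
    print("scene rejected; move on to the next image in the stack")
```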

Going back to the mosaics again, can you tell us about the alpha band that comes along with both the visual and the analytic products when you download the quads?

The alpha band is a representation of where we do and don’t have data; it doesn’t carry any spectral value. Zero is a real value in our imagery, so the alpha band is what you use as a mask to see where there is data versus no data. Remove it before you run any models on the data; the band’s only purpose is to be a mask. 255 means data, and 0 means no data.
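In practice, that means reading the alpha band, building a boolean mask from it, and dropping it before any analysis. A short sketch with rasterio, assuming a hypothetical quad file in which the alpha band is the last band:

```python
import numpy as np
import rasterio

# Hypothetical downloaded quad; the alpha band is assumed to be last.
with rasterio.open("quad.tif") as src:
    data = src.read()              # shape: (bands, rows, cols)

alpha = data[-1]                   # 255 = data, 0 = no data
bands = data[:-1].astype("float64")

# Zero is a real value in the imagery, so only the alpha band can tell
# you where data truly exists. Mask no-data pixels, then drop alpha.
bands[:, alpha == 0] = np.nan

print("valid pixel fraction:", (alpha == 255).mean())
```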

What is the horizontal position accuracy of Planet Basemaps?

10 m RMSE at the 90th percentile (90% of images have less than 10 m absolute geolocation accuracy).

Citation and Resources

It seems that the algorithms and processes used to create these basemaps are constantly improving over time. Before we even go into the details: when people think about mosaics and the underlying algorithms, are there ways they can cite mosaics specifically?

We are still working on a citable white paper on normalization, but in the meantime here are some talks that are easier to cite, including a poster we presented at the American Geophysical Union (AGU) Annual Meeting.

Here are some additional resources:

Public Talks:

  • Kelsey Jordahl, Director Pipeline: Mosaicking the Earth Every day
  • Amit Kapadia, Staff Software Engineer: Global 5 Meter Resolution Time-series Mosaics
  • Joe Kington, Senior Geospatial Engineer: Visual normalization (analytic is a touch different)

Thank you so much for your time, Joe! We look forward to hearing from our users. If you have a question that wasn’t answered here today, head over to the Planet Community forum or reach out to our team directly! If you haven’t already, check out the Planet Basemaps introductory webinar.

Joe Kington has a Ph.D. in geoscience and is an engineer on the data pipeline team at Planet. He primarily works on the basemap toolchain and supporting partners and customers using basemaps.

Samapriya Roy has a Ph.D. in remote sensing and geospatial applications. At Planet, Sam is a Senior Solutions Engineer responsible for customer and researcher engagement, working closely with both groups to develop engagement and utilization strategies for users of Planet’s data.

Thanks to Jenna Mukuno, Product Marketing Manager at Planet for her valuable insights and edits.
