Machine Learning Applications for Satellite Imagery

A collection of videos, articles and organizations in this fascinating space that may or may not improve your feel for machine learning

Jacob Younan
AI From Scratch
5 min read · May 3, 2017

--

Right off the bat, I’ll admit this seems like an unusual place for a beginner. Satellites and their images sound complicated (they are) and a more digestible topic likely involves identifying a cute animal correctly:

But assuming you’re past these explanations and are starting to get the hang of how deep learning techniques are applied to images, something a little more complex can’t hurt…or at least that’s how I’m justifying my last few hours of information binging.

Why Does This Topic Help A Beginner?

  1. Get yourself excited: Mostly this. The applications of object detection/machine vision in this domain are numerous and on a significant growth trajectory. It’s easier to learn when you’re fascinated by the outcomes.
  2. New, but familiar data set: We’re still talking about images, but at a new scale and with a boatload of fun extra considerations: frequency, resolution, obstructions, spectral bands, etc.
  3. Still CNNs generally: Continue to reinforce your general understanding of how CNNs work, because we’re still talking about images here. Get clearer on what’s happening at each layer and learn a little more about ImageNet models.
  4. Common challenges: You’ll hear a lot about common barriers to ML models working optimally, including the absence of strong labeled training data, the high dimensionality of inputs, and hardware capability limits. How practitioners are overcoming these issues is also informative.
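To make point 3 a little more concrete, here is a toy NumPy sketch of what happens at a single CNN layer: a small kernel slides over an image, a ReLU keeps the positive activations, and max pooling downsamples the result. The "tile" and the vertical-edge kernel below are invented purely for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Zero out negative activations.
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # Downsample by taking the max over non-overlapping size x size blocks.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 "satellite tile" with a bright vertical road down the middle.
tile = np.zeros((6, 6))
tile[:, 3] = 1.0

# A hand-made vertical-edge detector kernel (in a real CNN, learned).
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

features = max_pool(relu(conv2d(tile, kernel)))
print(features.shape)  # (2, 2)
```

The point of the sketch: each layer of a CNN repeats this convolve-activate-pool pattern, with the kernels learned from data rather than hand-crafted.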

Applications of Satellite Imagery

These three videos give you three layers of explanations on this topic. I’d recommend watching them sequentially, as they get progressively more detailed and build on each other a bit. If you can only watch one, skip to the last one delivered by Boris Babenko at Orbital Insight.

If you’d like to hear a lot more from Stefano, you can hear him interviewed at length on a TWiML episode from late March.

I enjoyed the clear transfer learning example here. The description of using nighttime images of light intensity as a proxy for economic development was clever.
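The proxy idea can be sketched in a few lines: fit a model on the abundant signal (nighttime light intensity), then lean on that model for the task with scarce labels (development surveys). Everything below (the feature shapes, the noise levels, a linear model standing in for a CNN) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row is a feature vector extracted from a daytime
# satellite tile (e.g., by a pretrained CNN); shapes are illustrative.
n_tiles, n_features = 1000, 8
daytime_features = rng.normal(size=(n_tiles, n_features))

# Abundant proxy labels: nighttime light intensity for every tile.
true_weights = rng.normal(size=n_features)
night_lights = daytime_features @ true_weights + rng.normal(scale=0.1, size=n_tiles)

# Step 1: fit the proxy task (predict light intensity) with least squares.
w_proxy, *_ = np.linalg.lstsq(daytime_features, night_lights, rcond=None)

# Step 2: reuse the proxy model's output as an input for the scarce task,
# e.g., a development survey available for only a few tiles.
few_labeled = 50
dev_scores = 2.0 * night_lights[:few_labeled] + rng.normal(scale=0.1, size=few_labeled)
proxy_pred = daytime_features[:few_labeled] @ w_proxy
scale, *_ = np.linalg.lstsq(proxy_pred[:, None], dev_scores, rcond=None)
print(scale[0])  # close to 2.0, the relationship we planted in the toy data
```

The real Stanford work chains this twice (ImageNet features, then nighttime lights, then survey data), but the mechanic is the same: abundant proxy labels stand in where ground truth is scarce.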

Slide resolution and clunky video editing aside, Boris’ talk here is great — particularly the first 3/4 that effectively frame why this space is taking off (double pun?), and highlight practical challenges like resolution vs. frequency trade-offs, the limits of USGS’s Landsat images and algorithms, and the ubiquitous presence of clouds.
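As a small aside on those spectral bands: one of the simplest things you can do with multispectral imagery like Landsat's is compute a vegetation index such as NDVI, defined as (NIR − Red) / (NIR + Red). A toy sketch with made-up reflectance values:

```python
import numpy as np

# Two toy 2x2 band rasters, as Landsat-style surface reflectance in [0, 1].
red = np.array([[0.10, 0.30], [0.05, 0.25]])
nir = np.array([[0.50, 0.35], [0.45, 0.20]])

# NDVI = (NIR - Red) / (NIR + Red): near +1 for dense vegetation,
# near 0 or negative for bare soil, water, or clouds.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```

Indices like this predate deep learning by decades, but they are a good reminder that satellite pixels carry more channels than the RGB images most beginner CNN tutorials assume.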

Who’s Participating Right Now?

I find it’s also helpful to read more about the organizations involved in this domain. Many of them speak relatively openly about what they do.

Already from the videos above, you’ve heard about a group at Stanford: the Sustainability and Artificial Intelligence Lab. You can also read more about Orbital Insight (update: they just raised another $50M) and the significant media coverage they’ve gotten over the past year on their website. The most recent piece they reference, called On Camera Forever, sparked my reading rabbit hole this evening:

Credit: ©Planet, featured by The Outline: “The Dakota Access Pipeline. The image shows construction paths as well as protester camps along the pipeline.”

Companies are popping up or ramping up in the satellite market because it’s recently become possible to…

  • Rapidly increase the number of useful images that can be taken. Why?
    We can now get shoebox-sized, high(ish) resolution cameras called nanosats/microsats/cubesats into orbit at affordable(ish) costs, and it’s also becoming more cost-effective to launch complex, high-res satellites.
  • Rapidly analyze this volume of images and determine a valuable insight about what they’re capturing. Why? Presumably you know this answer.

The first driver is about hardware and the second about analytics. Most companies are focusing on only one side of this opportunity, while others are looking to vertically integrate. It’s unclear whether more consolidation will happen within each side of the hardware/analytics divide or whether more companies will choose to straddle both.

An example of a pure-play ML image analysis company other than Orbital is Descartes Labs:

An example of a business doing both appears to be Spire:

There are several players large and small that manufacture, launch and operate satellite constellations; last week CBInsights posted a helpful space tech market profile that included this map:

On the topic of hardware, I’m going to sneak in one more application example recently shared by DigitalGlobe (perhaps not included in the CBInsights graphic due to its ongoing merger with Canada’s MDA?). This is not an ML example, though I’m sure this type of object detection work has already occurred or soon will with the help of neural nets:

Credit: DigitalGlobe

Parting Thoughts

Consider the collection of resources above an introductory look at this topic. I’m far from well-informed, but reading about this domain helped cement some theoretical ML concepts for me.

Two last things:

  1. It appears we’re headed for a sharp increase in the availability of high resolution, high frequency data. I hope this lowers barriers to groups like Stanford’s Sustainability and Artificial Intelligence Lab getting access to improved open source sensing data. There’s so much potential social impact to be derived from wide availability of these data sets.
  2. Space is even more borderless than the internet in some ways. We’re used to infrastructure or air space being regulated and controlled by a given country. Once in space, what will a given country’s satellite be permitted to capture beyond its borders? If you can capture an image, is it legal to? I’m sure some of these questions are answered (I haven’t read much space law), but I’d guess a lot of regulatory specificity and global cooperation will be needed as this imagery becomes a commodity accessible to the many rather than the few.
