Next mission — Automated detection of land changes

Sinergise · Planet Stories · Jan 3, 2017

Sentinel Hub services are used to observe and analyze Earth’s surface in many different ways. Many of these applications require finding and evaluating changes between images of the same area taken in different time periods. However, the amount of imagery available to users through Sentinel Hub is massive, and searching for changes by hand is time-consuming and error-prone. We are therefore developing tools for automated detection of changes.

Complexity of the problem

There are many different types of changes that can appear on satellite images. Users are usually interested in specific ones, such as land changes, vegetation changes and changes to man-made structures.

However, images of the same area taken in different time periods may vary greatly due to transient conditions. Above all, they are affected by the weather and the atmosphere: clouds and haze can partially or even completely obscure the land surface. The amount and angle of sunlight, ground temperature and humidity can also noticeably alter a satellite image.

Such changes appear far too often and should be ignored by the algorithm; any detection of them is considered false. Even though false detections cannot be avoided completely, the goal is to minimize their number.

Construction of the Yavuz Sultan Selim Bridge in Turkey through time: besides the growth of the bridge, we can also see changes in weather and brightness, different positions of shadows, and river traffic.

Strategies

Change detection algorithms generally follow one of two main strategies.

The first is the pixel-based strategy, where changes are analyzed for each pixel of the image separately, independently of the other pixels. This makes algorithms simpler, faster and more robust. However, changes at neighboring pixels may also be an important indicator of whether the change at a given pixel is relevant or not.

The second is the object-based strategy. Here we first classify each pixel into one of several land-type groups (e.g. bare soil, vegetation, water, clouds, shadows, …) and then join neighboring pixels with the same classification into groups. Each group represents an object in the image. A change detection algorithm then analyzes each object and tries to determine how its size, shape and position have changed.

Both strategies have their advantages and disadvantages; in practice, a combination of the two usually gives the best results.

Dubai coastline: example of land classification for an object-based strategy.

Basic algorithm

We first focused on implementing a pixel-based algorithm. Here is its basic concept.

Suppose we have two grayscale images I1 and I2 of the same area taken at different times, and we would like to detect and mark the changes between them. For each pair of coordinates (x,y) we observe the pixels with those coordinates on both images, denoted I1(x,y) and I2(x,y). Due to different weather and sunlight conditions, the pixels of one image may generally be brighter than those of the other, or may have larger contrast. The brightness of an image is well quantified by the mean value of all its pixels, denoted μ, and its contrast by their standard deviation, denoted σ. Therefore, to avoid detecting changes in brightness and contrast, we transform the image I2 into Ĩ2 with the formula

Ĩ2(x,y) = (I2(x,y) − μ2) · σ1 / σ2 + μ1,

where μ1, σ1 and μ2, σ2 are the mean and standard deviation of I1 and I2, respectively.

With this correction applied, we create an image Id that is the absolute difference of I1 and Ĩ2:

Id(x,y) = |I1(x,y) − Ĩ2(x,y)|.
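In code, both steps are a few lines of NumPy. The following is a minimal sketch; the function names are ours, chosen for illustration, and not taken from an actual implementation:

```python
import numpy as np

def normalize_to_reference(i2: np.ndarray, i1: np.ndarray) -> np.ndarray:
    """Match the brightness (mean) and contrast (std) of i2 to those of i1."""
    mu1, sigma1 = i1.mean(), i1.std()
    mu2, sigma2 = i2.mean(), i2.std()
    return (i2 - mu2) * (sigma1 / sigma2) + mu1

def difference_image(i1: np.ndarray, i2: np.ndarray) -> np.ndarray:
    """Absolute per-pixel difference after brightness/contrast correction."""
    i2_tilde = normalize_to_reference(i2.astype(float), i1.astype(float))
    return np.abs(i1.astype(float) - i2_tilde)
```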

After obtaining Id we would like to mark only the pixels with high values as changes. For that we apply a threshold that separates high-value from low-value pixels. However, the cut-off value cannot be the same for all images; it has to be adaptive and depend on the distribution of pixel values. After testing different adaptive thresholds, it turned out that Kapur’s thresholding [1] works best in most cases. The method, published in 1985, is based on the concept of maximizing the (information-theoretic) entropy.
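A straightforward way to implement Kapur’s method is to maximize the summed entropy of the two histogram parts over all candidate thresholds. A sketch, continuing the one above:

```python
import numpy as np

def kapur_threshold(image: np.ndarray, bins: int = 256) -> float:
    """Kapur's maximum-entropy threshold [1] for a single-layer image."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()                      # normalized histogram
    cum = np.cumsum(p)

    best_t, best_entropy = 1, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = cum[t], 1.0 - cum[t]          # probability mass of each part
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))  # entropy below t
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))  # entropy above t
        if h0 + h1 > best_entropy:
            best_entropy, best_t = h0 + h1, t
    return edges[best_t + 1]                   # bin edge as threshold value

i_d = difference_image(i1, i2)
changes = i_d > kapur_threshold(i_d)           # boolean change mask
```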

At the end of processing, a segmentation algorithm is applied. It connects neighboring pixels that were marked as changed into connected sets. Sets with a small area (a few pixels) most likely appeared only because of inconsistencies between the images, so their pixels are no longer considered a change.
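Filtering out these tiny sets is a standard connected-component operation; with SciPy’s ndimage it can look roughly like this (the minimum size of five pixels is an arbitrary illustrative choice):

```python
import numpy as np
from scipy import ndimage

def drop_small_regions(mask: np.ndarray, min_pixels: int = 5) -> np.ndarray:
    """Keep only connected sets of change pixels with at least min_pixels pixels."""
    labels, _ = ndimage.label(mask)       # label 4-connected sets of True pixels
    sizes = np.bincount(labels.ravel())   # pixel count per label (0 = background)
    keep = np.flatnonzero(sizes >= min_pixels)
    keep = keep[keep != 0]                # the background is never a change
    return np.isin(labels, keep)
```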

Mischief Reef, South China Sea (before, after, and with changes marked): new buildings were detected.
Aru mountains, Tibet: A landslide was detected.
Lake Oroville, California: Changes of water level were detected.
Aral Sea: Changes of water level were detected.

Upgrades of the algorithm

The basic algorithm was upgraded in the following ways:

  • Satellites usually capture multiple images of the same location at the same time, each in a different spectral band. For example, Sentinel-2 captures images in 13 different bands (B01, B02, …, B12 and B8A). Each band then represents one layer of the picture. Instead of analyzing grayscale images, which have only one layer, we can analyze images with multiple layers.
    The upgraded algorithm first processes such images separately for each layer. Only just before thresholding are the layers joined into a single-layer image: the pixel at position (x,y) of the joined image is calculated as the quadratic mean of the pixels at position (x,y) of the per-layer images. The quadratic mean can be weighted if we want some bands to have a greater effect on determining the changes (a sketch follows the figure below).
Lake Victoria, Kenya: Image composed of multiple layers.
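A possible sketch of the join step, assuming the per-layer difference images are stacked into a (bands, height, width) array:

```python
import numpy as np

def join_layers(band_diffs: np.ndarray, weights=None) -> np.ndarray:
    """Join per-band difference images with a (weighted) quadratic mean."""
    if weights is None:
        weights = np.ones(band_diffs.shape[0])  # equal weight for every band
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # weighted mean of the squared differences across bands, then square root
    return np.sqrt(np.tensordot(w, band_diffs.astype(float) ** 2, axes=1))
```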
  • Instead of having only two images I1 and I2, we can have two sets of images, S1 and S2, and look for changes between the majority of images in one set and the majority in the other. The advantage of considering a set of images instead of a single one is that some images may contain clouds and other kinds of noise; as long as, for each pixel, the majority of the images is cloud-free, we can join them into one cloudless image. We apply this expansion right after correcting the mean and standard deviation of each image: the images of each set are combined into one, where the pixel at position (x,y) of the new image is the median of the pixels at position (x,y) across all images in the set (a sketch follows the figure below).
Ladd Reef, South China Sea: the set of four upper images was compared against the lower-left image. Changes are marked on the lower-right image.
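The combination step itself reduces to a per-pixel median over the stack of (already mean/std-corrected) images; a minimal sketch:

```python
import numpy as np

def median_composite(images: list[np.ndarray]) -> np.ndarray:
    """Combine co-registered images of one set into a single composite image."""
    return np.median(np.stack(images, axis=0), axis=0)
```

As long as clouds cover a given pixel in fewer than half of the images, the median simply ignores them.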
  • After the previous upgrades, the main remaining source of false detections is clouds. We therefore apply precalculated cloud masks to each image. Using existing land classification algorithms, we can identify most of the pixels that contain either a cloud or a cloud shadow. This gives us a cloud mask for each image, and the algorithm then neglects all pixels that a cloud mask marks as cloud (a sketch follows the figure below).
South China Sea: the right image shows the cloud mask for the left image.
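Applying the masks then amounts to excluding the flagged pixels before thresholding. A sketch, assuming each mask is a boolean array marking cloud or shadow pixels:

```python
import numpy as np

def apply_cloud_masks(diff: np.ndarray, cloud_masks: list[np.ndarray]) -> np.ndarray:
    """Zero the difference wherever any input image was flagged as cloud or shadow."""
    clouded = np.logical_or.reduce(cloud_masks)  # pixel is suspect in any image
    return np.where(clouded, 0.0, diff)
```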
  • The changes we are most interested in appear in images with high spatial resolution (e.g. 10–20 meters per pixel). If we want to find changes over an area too large to fit into a single image while keeping the high resolution, we can divide it into smaller areas and calculate the changes for each of them separately (a sketch follows the figure below).
Kerch Strait Bridge, Russia: Area inside blue polygon was divided into 13 smaller areas and changes were calculated for each of them.
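Splitting the area is plain bookkeeping, e.g. dividing a bounding box into an nx-by-ny grid of sub-areas, each of which is then processed independently:

```python
def split_bbox(min_x, min_y, max_x, max_y, nx, ny):
    """Split a bounding box into an nx-by-ny grid of smaller bounding boxes."""
    xs = [min_x + i * (max_x - min_x) / nx for i in range(nx + 1)]
    ys = [min_y + j * (max_y - min_y) / ny for j in range(ny + 1)]
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
            for j in range(ny) for i in range(nx)]
```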

References

[1] J.N. Kapur et al. A new method for gray-level picture thresholding using the entropy of the histogram. Computer Vision, Graphics, and Image Processing. Volume 29, Issue 3, March 1985, pages 273–285.

Originally published at sentinel-hub.com.
