SAR 201: An Introduction to Synthetic Aperture Radar, Part 2

Daniel Hogan
Published in The DownLinQ
8 min read · Feb 13, 2020

By Daniel Hogan (In-Q-Tel CosmiQ Works) and Jason Brown (Capella Space) with the CosmiQ Works and Capella teams.

Preface: SpaceNet LLC is a nonprofit organization dedicated to accelerating open source, artificial intelligence applied research for geospatial applications, specifically foundational mapping (i.e. building footprint & road network detection). SpaceNet is run in collaboration with CosmiQ Works, Maxar Technologies, Intel AI, Amazon Web Services (AWS), Capella Space, Topcoder, and IEEE GRSS.

Synthetic aperture radar (SAR) provides all-weather ground imaging, but SAR images are quite different from optical images. This post gives an overview of data analysis methods used with SAR and what can be learned from SAR imagery. This concludes a discussion begun in a previous post, which looked at how SAR images are produced. SAR data is featured in the soon-to-begin SpaceNet 6 Challenge.

SAR Data

As introduced in the previous post, SAR is a coherent imaging method, which is what makes techniques like interferometry (covered in a later section) possible. Because the imaging is coherent, each return has two components: intensity and phase.

Figure 1. Intensity (left) and phase (right) components of SAR data (Courtesy: Capella)

The intensity component of SAR data is the part that looks like an image after SAR image formation processing has occurred. The radio waves in the SAR beam are aligned in space and time (coherent) upon transmission, or in other words they are “in phase.” The phase component of SAR data measures how much the radio waves have shifted “out of phase” after they interact with the scatterers on the surface. These phase components are useful, because phase differences between different channels or different collects can reveal information about the geometry and composition of the scene.

Single Look Complex

SAR data can be delivered as Single Look Complex (SLC) data where the intensity and phase components are represented as complex numbers for each pixel. Intensity and phase values can be subsequently computed from the complex numbers. Intensity and phase can be viewed as images, as seen in Figure 1, though only the intensity component is recognizable as an image. Additionally, SLC data is in the radar-image (slant) plane projection, so its pixels do not correspond to geo-coordinates.
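The relationship between a complex SLC pixel and its intensity and phase can be sketched in a few lines of Python. The array values here are arbitrary stand-ins, and conventions vary by product: some report amplitude |z|, others power |z|².

```python
import numpy as np

# A hypothetical 2x2 patch of SLC data: each pixel is a complex
# number whose magnitude carries the intensity information and
# whose angle carries the phase.
slc = np.array([[3 + 4j, 1 - 1j],
                [0 + 2j, -2 + 0j]])

amplitude = np.abs(slc)     # |z|
intensity = amplitude ** 2  # power, |z|^2 (conventions vary by product)
phase = np.angle(slc)       # radians in (-pi, pi]
```

Viewing `intensity` as a grayscale image gives pictures like the left panel of Figure 1; viewing `phase` gives the right panel.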

Ground Range Detected

SAR data can also be delivered as Ground Range Detected (GRD) data, in which the intensity has been detected (the phase is discarded) and the image has been projected from the slant plane onto the ground plane, so that pixels correspond to geo-coordinates such as latitude and longitude.
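To give a feel for the slant-to-ground projection, here is a toy calculation under a flat-Earth assumption (the 30-degree incidence angle and 1.5 m slant-range resolution are made-up numbers): a given slant-range pixel spacing stretches out on the ground by a factor of 1/sin(incidence angle).

```python
import math

# Hypothetical geometry for projecting slant range onto the ground plane.
# Under a flat-Earth assumption:
#   ground-range resolution = slant-range resolution / sin(incidence angle)
slant_res_m = 1.5     # slant-range resolution (made-up value)
incidence_deg = 30.0  # incidence angle (made-up value)

ground_res_m = slant_res_m / math.sin(math.radians(incidence_deg))
# At 30 degrees incidence, 1.5 m in slant range covers about 3 m of ground.
```

This is why shallow incidence angles degrade ground-range resolution: as the angle shrinks, the same slant-range cell spreads over more ground.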

Speckle Reduction

There are various methods to reduce speckle and make a SAR image easier to interpret and analyze. Multi-looking is the process of splitting the radar beam into several narrower sub-beams. Each of these sub-beams is a “look” and is subject to speckle. However, because the speckle pattern differs from look to look, averaging the looks together reduces the amount of speckle in the final image. Another method of speckle reduction is spatial filtering, in which a “moving window” technique is used to calculate a weighted average of a group of pixels and replace the window’s center pixel with that value. This technique produces a smoothing effect that reduces the amount of speckle (Figure 2).

Figure 2. Comparison of SAR imagery before (left zoom detail) and after (right zoom detail) speckle filtering (Courtesy: Capella)
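The moving-window idea can be sketched in Python. This is a plain boxcar (unweighted mean) filter run on synthetic data; production speckle filters such as Lee or Frost weight the window adaptively to better preserve edges.

```python
import numpy as np

def boxcar_filter(img, size=3):
    """Moving-window mean filter (a basic speckle filter).
    Each output pixel is the average of the size x size window
    around the corresponding input pixel (edges use edge-padding)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# A flat scene with multiplicative speckle noise; for intensity images,
# fully developed speckle follows an exponential distribution.
rng = np.random.default_rng(0)
scene = np.ones((64, 64))
speckled = scene * rng.exponential(1.0, scene.shape)
smoothed = boxcar_filter(speckled, size=5)
# The pixel-to-pixel variance drops sharply after filtering.
```

The trade-off is the same one multi-looking makes: less speckle, but also less spatial resolution.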

Multichannel Analysis Techniques

The methods seen so far are everything needed to generate a high-quality SAR image. But SAR becomes an even more powerful tool with multichannel techniques, where multiple SAR scans of the same area are used together to learn more than is possible with one image alone. Three main multichannel approaches are polarimetry, interferometry, and tomography.

Polarimetry

SAR polarimetry exploits the fact that radio waves, just like visible light waves, can be polarized. For example, a radar pulse might be horizontally- or vertically-polarized, and similarly the radar can be configured to measure only the horizontally- or vertically-polarized part of the echo. That gives up to four polarization combinations. A SAR instrument that toggles among these combinations as it travels will gain more information about the scene, because radar echoes from different types of objects have different relative strengths depending on polarization (Figure 3). For example, if HH scattering is larger than VV scattering, then the scatterer is likely more horizontally-oriented than vertically-oriented. In polarimetry, even the difference in phase angle between the different polarizations can help distinguish features on the ground. SpaceNet 6 features quad-pol data, meaning all four linear polarizations were measured and are included in the dataset.

Figure 3: In this false-color SAR polarimetry image, each color channel (blue, green, or red) shows the reflection strength of a different combination of polarizations. (Courtesy of Sandia National Laboratories, Radar ISR)
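A false-color composite like Figure 3 can be mocked up by normalizing each polarization's intensity image and stacking the results into RGB channels. The arrays below are random stand-ins for real HH, HV, and VV intensity images, and the channel assignment is just one common choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for per-polarization intensity images.
hh = rng.exponential(2.0, (32, 32))  # horizontal transmit, horizontal receive
hv = rng.exponential(0.5, (32, 32))  # horizontal transmit, vertical receive
vv = rng.exponential(1.0, (32, 32))  # vertical transmit, vertical receive

def normalize(band):
    """Linearly scale a band to [0, 1] for display."""
    return (band - band.min()) / (band.max() - band.min())

# One common assignment: red = HH, green = HV, blue = VV.
rgb = np.dstack([normalize(hh), normalize(hv), normalize(vv)])
```

In a composite built this way, a strongly horizontal scatterer (HH larger than VV) would appear reddish, while vegetation, which tends to depolarize the echo and boost cross-pol returns, would pick up green.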

Interferometry and Tomography

For completeness, we mention two other approaches that are different from the polarimetry being used in SpaceNet 6.

In SAR interferometry, two images are taken of the same area, and the phase angle difference between them is calculated for each point on the ground. If the distance from the object to the radar is different for the two images, the difference shows up as a phase shift. That applies whether the difference is due to a change in the radar’s position or due to motion of the ground itself. There’s just one catch: a full 360-degree change in phase (or any exact multiple thereof) is indistinguishable from no phase shift at all. So very small distance changes can be measured directly, but figuring out larger changes requires more work to overcome that ambiguity.

Interferometry has two main applications. In across-track interferometry, two passes are made with a SAR sensor along slightly different paths. The phase differences can then be used to construct a map of surface height (called a digital elevation model, or DEM). In differential SAR interferometry, the two passes are made at different times and can be used to figure out how the surface height has changed in the interim due to, for example, ground subsidence.

Figure 4: This SAR interferometry image shows an elevation map of a volcano in Hawaii. The colors repeat in bands because of the 360-degree ambiguity. (Credit: NASA/JPL)
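The 360-degree ambiguity can be made concrete with a short calculation. For repeat-pass interferometry, a change Δr in range produces a phase shift of 4πΔr/λ (the factor of 4π reflects the two-way travel of the pulse). The wavelength below is a made-up X-band-like value.

```python
import numpy as np

wavelength = 0.031  # meters; an X-band-like value (made-up for illustration)

def interferometric_phase(delta_range):
    """Repeat-pass interferometric phase (radians) for a change in range.
    The 4*pi factor accounts for the two-way (round-trip) path; the
    modulo wraps the result into one 360-degree cycle."""
    return (4 * np.pi * delta_range / wavelength) % (2 * np.pi)

# A few millimeters of motion produces a distinct, measurable phase...
small = interferometric_phase(0.002)
# ...but motion of exactly half a wavelength (a full wavelength round
# trip) wraps all the way around and looks like no motion at all.
wrapped = interferometric_phase(wavelength / 2)
```

Resolving which 360-degree cycle a measurement belongs to is the "phase unwrapping" problem, and it is what produces the repeating color bands in Figure 4.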

Finally, in SAR tomography, many passes (maybe twenty or so) are made over the same area along different paths. From that data, a 3D model of the area is constructed using the same mathematical techniques that underlie Computed Tomography (CT) scans.

Applications

Change Detection

Because of the periodic nature of satellite remote sensing, change detection is one of its fundamental applications. For example, the satellite imaging startup Capella Space plans to launch a constellation of 36 SAR satellites that will achieve a one-hour revisit time. This will allow the collection of time series data on timescales from hours to years, enabling a wide range of change detection applications. Shipping ports can be monitored for patterns of life and commerce, land cover/land use can be monitored over weeks and months, and the health of forests can be monitored year over year.

Figure 5. Detail of shipping activity at three different times during August 23rd, 2019 at Eemhaven, near Pernis, Netherlands. An image from 12:31pm is shown in the red channel, one from 1:56pm in the green channel, and one from 3:11pm in the blue channel (Courtesy: Capella).
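Beyond a time composite like Figure 5, a common quantitative approach is the log-ratio change image: because speckle is multiplicative, dividing two co-registered intensity images (rather than subtracting them) largely cancels it out. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical co-registered intensity images of the same scene.
before = rng.exponential(1.0, (64, 64))
after = before.copy()
after[20:30, 20:30] *= 8.0  # a patch that brightened (e.g., a ship arrived)

# Log-ratio change indicator: unchanged multiplicative speckle divides
# out, leaving values near zero except where the scene really changed.
log_ratio = np.log(after / before)

changed = np.abs(log_ratio) > 1.0  # simple fixed threshold
```

In practice the threshold would be chosen statistically, and both images would be speckle-filtered first, but the ratio-not-difference idea is the core of it.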

Additional Applications

The uses of SAR imagery are numerous. SAR has been shown to detect oil spills from ships, monitor ice, and detect “dark ships” (vessels with their identification signals turned off). SAR has utility in disaster management and response, helping to create flood maps, assess ground deformation due to earthquakes, and map wildfire severity and recovery. SAR has been used to detect subsidence due to oil/water extraction and tunneling, as well as to monitor deforestation and reforestation. Additionally, SAR data has been used to generate digital elevation models and three-dimensional land use models. The use of high-resolution SAR for foundational mapping purposes holds promise as well, particularly when combined with machine learning.

SAR and Machine Learning

Techniques for analyzing SAR data have been steadily advancing ever since the synthetic aperture concept was developed in the 1950s. One major advance, however, has taken place in just the past few years: the application of deep learning to SAR.

Deep Learning for SAR

Deep learning is machine learning using a deep (many-layered) neural net, most often trained in a supervised fashion. Analyzing SAR images with a neural net is not new, but deep neural nets are a more recent development, and their first application to SAR data was less than five years ago. A major task for SAR-related algorithms has long been object detection, which for historical reasons is called automatic target recognition (ATR) when working with SAR data. In perhaps the first published exploration of deep learning with SAR, a deep neural net was shown to work as well as other methods for the final classification stage of a typical ATR pipeline.

That work used the MSTAR dataset, a publicly-available collection of SAR images of different types of military vehicles. Subsequent work with the same dataset shows how much deep learning for SAR has advanced. Classification accuracy rose from 92.3% to 99.6% in three years.

At the same time, deep learning is being used with SAR data in an ever-growing variety of ways, including change detection and land cover classification. Researchers have even applied a generative adversarial network (GAN) to the task of image “translation” — taking a single-polarization SAR image and generating a simulated full-color optical image of the same area.

When it comes to identifying building footprints (outlines) in SAR, which is the topic of the upcoming SpaceNet 6 Challenge, various ideas have been pursued. Many of these methods do not use deep learning at all, instead reconstructing building footprints in an unsupervised way from carefully-selected features in the imagery. Efforts towards building footprint extraction with deep learning have so far only begun to scratch the surface. There have been studies of various algorithms for both polarimetric and single-image data. But as a recent paper observed with regard to currently-available SAR training data, “lack of such annotated datasets is one of the major issues” for SAR deep learning.

Closing Thoughts

This blog post series has covered a lot of ground in its crash course introduction to synthetic aperture radar. To read more about SAR theory and techniques, this website (less technical) and this article (more technical) are great places to go next. However, nothing beats actually getting ahold of SAR data and trying out deep learning on it yourself! This week, SpaceNet released a new open source multimodal dataset with SAR data from Capella Space, optical imagery from Maxar, and building footprint labels. This dataset will be the basis of the SpaceNet 6 Challenge.

In closing, SAR has tremendous potential, and that’s more true now than ever before. On the hardware side, the deployment of constellations of small SAR satellites could offer unprecedented access to the technology. On the software side, the application of deep learning has the potential to yield insights from data on a correspondingly large scale. For someone who’s only worked with optical imagery, the strange properties of SAR images can take some getting used to, but if you’ve made it through this blog post series then you know the key ideas already. Ultimately, it is those same unusual properties of SAR that make it so useful for providing a new perspective on the world.

Daniel Hogan, PhD, is a senior data scientist at IQT Labs and was a member of CosmiQ Works.