SAR and GNSS, monitoring Earth from space

Toni Del Hoyo
Worldsensing TechBlog
13 min read · Jan 28, 2019

Introduction

At Worldsensing we develop sensors to monitor infrastructure such as construction sites, mines, bridges and dams (Loadsensing) and to acquire seismic data (Spidernano). We also have mobility-related products, like parking occupancy sensors (Fastprk) and traffic flow monitoring (Bitcarrier), as well as the tools to visualize all this data (OneMind).

Synthetic Aperture Radar and Global Navigation Satellite Systems are two space technologies applied in the Geo Information Sciences and in the automotive sector, in ways that are complementary to Worldsensing’s.

In this post I am going to explain a bit of the theory behind these two space technologies.

SAR

Synthetic Aperture Radar, or SAR, is a technology that can generate images from a moving platform, such as a satellite, regardless of light and clouds. How? A SAR sensor is an active system, meaning it doesn’t simply wait for light to reach the sensor (like a photo camera does), but rather carries an “illuminator” with it, similarly to how a flashlight allows you to take pictures in the dark. Moreover, it uses microwaves, which have wavelengths much larger than visible light (centimeters vs nanometers). Because of that, SAR signals can penetrate through clouds and semi-transparent materials (ice, snow, vegetation, …). As a rule of thumb, you can think that an electromagnetic wave is only significantly perturbed by objects comparable in size to, or larger than, its wavelength. Light has a very small wavelength, so it is perturbed by the tiny particles in clouds, and that is why we can’t see through them. SAR microwaves, on the other hand, are able to reach the Earth’s surface from the sky.

A Synthetic Aperture Radar can be mounted on anything that moves (if there is no motion, there is no 2D resolution, and you end up with the “boring” 1-dimensional measurements produced by classic radars, such as the ones used for air traffic control at airports). Typically, SAR sensors are mounted on satellites or airplanes, but they have recently become small enough that some people have managed to mount them on drones. Another option is to have “ground-based” radars, which can be used for continuous monitoring of specific areas of interest.

From top to bottom: classical radar, European Space Agency’s Sentinel-1 satellite, NASA’s UAVSAR, Metasensing’s ground-based SAR, Polytechnic University of Catalonia drone system, DIY SAR on a bike

Types

There are different types of SAR. For instance, if the radar transmits and/or receives in 2 different polarizations (horizontal and vertical) instead of just one, the transmitted and received signals in each polarization can be exploited to retrieve more information and (more importantly!) to produce false-color images instead of just black and white data. This is called Polarimetry, and it can be used for precision farming or for monitoring water bodies during floods, among other applications.

SAR image of the city of Barcelona taken on 29 September 2018 by the Sentinel-1 satellite. False-color representation: red = vertical receive from vertical transmit, green = 2 × vertical receive from horizontal transmit, and blue = the ratio between the two
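
If you are curious how such a false-color composite is built in practice, here is a minimal sketch, assuming you already have the two calibrated backscatter bands loaded as NumPy arrays (the variable names and the percentile stretch are my own illustrative choices, not part of any official processing chain):

```python
import numpy as np

def false_color(co_pol, cross_pol):
    """Build an RGB composite from a dual-polarization SAR acquisition.

    co_pol, cross_pol: 2D NumPy arrays of calibrated backscatter (linear scale).
    Channel recipe from the caption above: R = co-pol, G = 2 x cross-pol,
    B = ratio of the two.
    """
    red = co_pol
    green = 2.0 * cross_pol
    blue = co_pol / np.maximum(cross_pol, 1e-9)   # avoid division by zero

    rgb = np.dstack([red, green, blue]).astype(float)

    # Stretch each channel between its 2nd and 98th percentile for display
    for i in range(3):
        lo, hi = np.percentile(rgb[..., i], (2, 98))
        rgb[..., i] = np.clip((rgb[..., i] - lo) / (hi - lo), 0.0, 1.0)
    return rgb

# rgb = false_color(vv_band, vh_band)   # bands loaded elsewhere, e.g. with rasterio
# import matplotlib.pyplot as plt; plt.imshow(rgb); plt.show()
```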

Another option is to acquire images from slightly different angles, in a technique called Interferometry. Similarly to how our two eyes provide 3D vision, two radars also provide depth information. An example of this was the Shuttle Radar Topography Mission in the year 2000, which consisted of mounting an “arm” on NASA’s Space Shuttle. Both the Shuttle and the arm’s end carried radars, and they orbited the Earth for several days. The information retrieved helped generate a topographic map of the Earth, which is still used to this day, e.g. by Google Maps to tell you the elevation profile of a hiking route.

Left: artist’s representation of the SRTM mission. Right: elevation profile for a bike path in Barcelona, from Google Maps.

Combining multiple interferometric sets of images acquired at different times is called Differential Interferometry, and it can be used to extract information about the deformation of objects over time. Examples include subsidence studies in urban areas, stability monitoring in open-pit mines, buildings’ thermal expansion and contraction, and volcanoes and earthquakes, among many others.
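
To give an idea of the sensitivity involved, the differential interferometric phase converts to line-of-sight displacement through the radar wavelength. A minimal sketch, using the approximate Sentinel-1 C-band wavelength (the function name is just illustrative):

```python
import numpy as np

WAVELENGTH = 0.0555  # approximate Sentinel-1 C-band wavelength in metres

def phase_to_displacement(delta_phase_rad):
    """Convert a differential interferometric phase (radians) into
    line-of-sight displacement (metres).

    One full 2*pi cycle ("one fringe") corresponds to half a wavelength
    of motion, because the signal travels to the target and back.
    """
    return WAVELENGTH * delta_phase_rad / (4.0 * np.pi)

# One fringe of phase change ~ 2.8 cm of motion towards/away from the satellite
print(phase_to_displacement(2.0 * np.pi))   # ~0.028 m
```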

Finally, if many images of the same area are taken from different angles, Tomographic techniques can be applied. This is similar to what Magnetic Resonance Imaging does, sending electromagnetic waves through your body and collecting their reflections and refractions in order to generate images from “inside” your body. SAR Tomography can be used to “see” inside a forest or a glacier by sending signals to it from a satellite. This can be used, for instance, to study the amount of biomass on Earth or the inner ice layers at the North Pole.

Left: MRI scan machine, right-top: aerial Polarimetric SAR image generated by German Aerospace Centre’s airborne sensor, right-bottom: tomographic representation of the forest structure corresponding to the yellow line on the top figure generated by combining several SAR images

This can get much more complicated if you combine the techniques above: you can have Differential Tomography, Polarimetric Interferometry, Polarimetric Differential Tomography, Holography… but let’s leave it here for the moment!

Trends

Like with many other technologies, the origins of SAR were military: it was used to locate targets day and night, regardless of weather conditions. But many more peaceful applications have appeared in recent years! On the one hand, the reduction in price and size of the sensors has allowed new companies to enter the market. Startups like the Finnish ICEYE have already successfully launched a small satellite the size of a washing machine with a radar on it. This is very disruptive, since classical SAR satellites used to weigh more than 1000 kg and only big agencies like NASA and the German Aerospace Center could afford to launch them. Capella Space, from the US, is another startup planning to launch SAR satellites soon. Their claim is: if half the time the Earth is dark, and approximately half the time any given spot is covered by clouds, then classical optical satellites can only observe about 25% of the planet at any moment, while with SAR you can see 100% of it! It is difficult to argue with that…

Another disruption in the sector has been the European Space Agency’s Sentinel-1A and 1B satellites. This pair of spacecraft is part of the Copernicus programme, “Europe’s eyes on Earth”, and they provide frequent SAR images that can be downloaded for free! This has greatly impacted the ecosystem of companies working with Earth observation.

Other cool radar stuff

Since 2015, Google’s Advanced Technology and Projects group, a technology incubator within the company, has been developing a radar for gesture recognition, known as Project Soli. It aims to address the interaction limitations of small devices such as smartwatches. Whether this will become a new way of communicating with smart devices, only time will tell!

Soli functioning
Soli in a watch

GNSS

GPS is the most widely known satellite navigation system. It was the first one to appear and was developed by the United States Air Force. But it is not the only one! Other satellite navigation systems have been developed since, such as the Russian GLONASS, the European Galileo and the Chinese BeiDou. Therefore, the correct term to refer to all these systems is GNSS, which stands for Global Navigation Satellite Systems.

How it works

Let’s first think of sound-houses instead of satellites. These will be a kind of lighthouse that broadcasts some information via acoustic waves. Using them, we will be able to locate ourselves on a map (this example is based on the one in the book GNSS Data Processing from UPC).

What we need

  • A map
  • Two “sound-houses”, L1 and L2, at well-known coordinates, which periodically broadcast their local time
  • A clock synchronized with the clocks in the sound-houses
  • The speed of sound: 343 m/s

What happens

  • 9:00 AM: sound-house L1 sends the message “I am L1, it is 9:00 AM”.
  • At the same time, sound-house L2 sends the message “I am L2, it is 9:00 AM”.
  • At 9:06 the first message reaches us → we are somewhere on a circle of about 123 km radius (6 min × 343 m/s) centered at L1
  • One minute later, the second message reaches us → we are about 144 km away from L2

We can conclude that we are at the intersection of the two circles, either at P1 or P2. To solve the ambiguity we need… yet another sound-house! If we place L3 near the existing L1 or L2, it won’t give us much additional information. But if we place it somewhere like Barcelona, the ambiguity is resolved and we can conclude that we are at the intersection of the 3 circles, in the city of Palma de Mallorca :)
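
For the more hands-on readers, here is a minimal sketch of the sound-house idea in code: three anchors at known positions, three measured distances, and a small least-squares solver that finds the intersection of the circles (the coordinates, units and solver are illustrative assumptions, not taken from the book):

```python
import numpy as np

def locate(anchors, distances, iterations=10):
    """Estimate a 2D position from distances to known anchor points.

    anchors: (N, 2) array of known anchor coordinates (km).
    distances: (N,) measured ranges (km), e.g. travel time x speed of sound.
    Uses a simple Gauss-Newton iteration to intersect the circles.
    """
    pos = anchors.mean(axis=0)               # initial guess: centroid of the anchors
    for _ in range(iterations):
        diff = pos - anchors                  # vectors from each anchor to the guess
        ranges = np.linalg.norm(diff, axis=1)
        jac = diff / ranges[:, None]          # unit vectors anchor -> guess
        step, *_ = np.linalg.lstsq(jac, distances - ranges, rcond=None)
        pos = pos + step
    return pos

# Three hypothetical sound-houses on a flat map (km):
anchors = np.array([[0.0, 0.0],      # L1
                    [200.0, 30.0],   # L2
                    [80.0, 220.0]])  # L3, the one that resolves the ambiguity
true_pos = np.array([123.0, 10.0])   # where we really are (unknown to us)
distances = np.linalg.norm(anchors - true_pos, axis=1)
print(locate(anchors, distances))    # recovers approximately [123, 10]
```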

GNSS works in a similar way: instead of sound-houses it uses satellites, and instead of acoustic waves, radio signals. But the principle remains the same. Another difference is that with GNSS you position yourself on a 3D map (longitude, latitude and height), not just in 2D. This means an additional satellite is needed to solve the system of equations (and one more to estimate the receiver clock error, as we will see later).

Basic GNSS processing block diagram. The electromagnetic waves transmitted by the satellites reach the antenna after traveling more than 20.000 km. Their power is very weak, well below the electromagnetic noise level. They are converted into an electrical signal and demodulated. After that, the “raw data” can be read: the pseudo-ranges are the apparent distances between each satellite and the receiver; the carrier phase is a very precise (but ambiguous) measurement of the electromagnetic wave; the C/N0 is the carrier-to-noise ratio, which relates to the received power; and the Doppler is a measurement of the relative motion between receiver and satellite. All this information is fed to a navigation engine, which solves a system of equations to retrieve latitude, longitude, height and time.
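
As a rough illustration of what the navigation engine does with those pseudo-ranges, here is a toy sketch (not a real navigation filter): it iteratively solves for the receiver position and its clock bias, assuming the satellite positions are already known from the broadcast orbits:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def navigation_solution(sat_positions, pseudoranges, iterations=8):
    """Toy least-squares navigation solution.

    sat_positions: (N, 3) satellite positions in metres (ECEF), with N >= 4.
    pseudoranges: (N,) measured pseudo-ranges in metres.
    Returns the receiver position (m) and its clock bias (s).
    """
    pos = np.zeros(3)       # initial guess: centre of the Earth
    bias_m = 0.0            # receiver clock bias, expressed in metres (c * dt)
    for _ in range(iterations):
        diff = pos - sat_positions
        ranges = np.linalg.norm(diff, axis=1)
        predicted = ranges + bias_m
        # Jacobian: unit vectors from each satellite towards the receiver,
        # plus a column of ones for the clock bias term.
        H = np.hstack([diff / ranges[:, None], np.ones((len(ranges), 1))])
        correction, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        pos += correction[:3]
        bias_m += correction[3]
    return pos, bias_m / C
```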

Precision

Like everything in life, GNSS has an error. Whether this error is a few millimeters or several meters depends on the following factors:

Single- vs multi-frequency

GNSS satellites transmit their information via multiple frequency bands, similarly to how FM radio antennas broadcast the different stations in multiple frequency slots and the user can then tune in to one or another. Being able to “listen” to multiple GNSS frequencies at the same time greatly improves the accuracy of the retrieved position. Basically, it allows you to cancel out phenomena such as the non-constant delay induced by the ionosphere. This delay depends on the signal’s frequency, so by comparing the signal received in the different bands it can be estimated and thus removed. But of course, it comes at a cost! Receivers tracking multiple frequencies simultaneously are more complex and expensive.
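
For the curious, the standard trick used by dual-frequency receivers is the so-called ionosphere-free combination: since the delay scales with 1/f², a weighted difference of the two pseudo-ranges cancels almost all of it. A minimal sketch with the GPS L1/L2 frequencies, assuming p1 and p2 are the pseudo-ranges already read from the receiver:

```python
# GPS carrier frequencies in Hz
F1 = 1575.42e6   # L1
F2 = 1227.60e6   # L2

def ionosphere_free(p1, p2):
    """Ionosphere-free combination of two pseudo-ranges (metres).

    The first-order ionospheric delay is proportional to 1/f^2, so this
    weighted difference removes the vast majority of it (at the price of
    amplifying the measurement noise a bit).
    """
    gamma = (F1 / F2) ** 2
    return (gamma * p1 - p2) / (gamma - 1.0)
```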

All GNSS frequency bands for the different satellite constellations. Most receivers track the bands around 1575 MHz. More sophisticated receivers are also able to track lower frequencies. Source: http://www.gage.upc.edu/gnss_book

Antenna size

In GNSS, size matters. The GNSS antenna in your smartphone needs to be less than 10 cm² to fit inside it, whereas a high-end GNSS receiver can use antennas as big as a soccer ball. Larger antennas are able to receive signals from more satellites, and to retrieve better information from them because they mitigate interference. But again, they are more expensive, and can cost up to several hundred euros.

Left: GNSS antenna on phone from an article in GPS World magazine, right: Leica GNSS antenna to be used for differential positioning

Multipath

Unfortunately, GNSS signals don’t always follow a straight line between the satellite and your receiver. Anything around you can scatter them, be it a bird flying by or a skyscraper. When this happens, the signal’s travel time is longer, and your receiver will believe the satellite is farther away than it really is.

Multipath representation from Navipedia

Ionosphere, troposphere, …

GNSS signals are carried by electromagnetic waves at the speed of light (300.000 km/s). If a satellite is 21.000 km above our heads, its signal should reach our receiver in 21.000/300.000 = 70 ms. But in reality, the signal always arrives late, because it goes through the different layers of the atmosphere, which affect its propagation speed. If only the delay introduced were constant… but it is not: it depends on where you are on Earth and on the time of day. That’s why we need space weather, which, similarly to weather forecasting on Earth, studies and predicts the status of the atmosphere at a given time.
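
To get a feeling for the magnitude, the first-order ionospheric delay is approximately 40.3 · TEC / f² metres, where TEC is the Total Electron Content along the signal path. A quick back-of-the-envelope sketch:

```python
def iono_delay_m(tec_el_per_m2, freq_hz):
    """First-order ionospheric group delay in metres.

    tec_el_per_m2: Total Electron Content along the signal path (electrons/m^2).
                   1 TECU = 1e16 electrons/m^2.
    freq_hz: carrier frequency in Hz.
    """
    return 40.3 * tec_el_per_m2 / freq_hz ** 2

# A fairly active ionosphere (~50 TECU) on GPS L1:
print(iono_delay_m(50e16, 1575.42e6))   # ~8 metres of extra apparent range
```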

Map generated by the University of Bern showing the Total Electron Content of the ionosphere for a given day, every 2 hours. The highest activity happens around the Equator.

Clock

As we learned from action movies, synchronizing your watches is very important. And this applies to GNSS as well. Since the whole system is based on computing the time difference between the signal’s reception time at the receiver and its transmission time from the satellite, your clock and the satellite’s have to be very well synchronized. The stakes are high: a clock error of just one microsecond translates into roughly 300 m of range error. The satellite carries an atomic clock, so if someone is late, you are probably the one to blame!

Orbits

Another assumption in GNSS is that we know where the satellite was when it transmitted the signal. But again, this is far from being 100% correct. The satellites constantly broadcast orbital information, which the receivers use to estimate where the satellites are at any given time. In order to have more accurate orbital information, we would need a second GNSS constellation flying above the GNSS satellites. And then a third one to provide accurate positioning to the second one, and then… Another option is to wait a few days for more precise orbital information: NASA and other research institutions re-compute the orbits of GNSS satellites and upload this information to FTP servers. This greatly improves the accuracy, but is only suitable for applications that don’t require real-time positioning.

Visibility, antenna orientation

Can I use GNSS indoors? Noooo! This can be explained similarly to what happens when you are talking on your phone and enter a tunnel: the call drops because the received signal is too weak after going through the tunnel walls. GNSS signals are so weak already (after all, they have travelled a very long distance) that simply going through a building’s walls or windows makes them unusable. Sometimes navigation apps such as Google Maps will still show positioning information while you are inside a building: this is not based on satellite data, but on Signals of Opportunity (3G/4G, WiFi, …), which can be exploited to obtain coarse positioning. Inertial Measurement Units (accelerometers, gyroscopes and magnetometers) can also be used when there is a lack of visibility: if you know where you were before losing the GNSS signal, and you still know your acceleration, you can infer where you might be now. But this only works for a while, since these units have a drift, and if the GNSS outage is too long, you end up somewhere different from where you actually are.
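
A toy sketch of why dead reckoning with an IMU only works for a while: even a tiny constant accelerometer bias, integrated twice, produces a position error that grows with the square of time (the bias value below is just an illustrative assumption):

```python
import numpy as np

dt = 0.01                        # 100 Hz IMU samples
t = np.arange(0.0, 60.0, dt)     # one minute without GNSS
bias = 0.01                      # constant accelerometer bias in m/s^2 (assumed)

# The true acceleration is zero; the sensor reports only its bias.
velocity_error = np.cumsum(np.full_like(t, bias) * dt)   # first integration
position_error = np.cumsum(velocity_error * dt)          # second integration

print(position_error[-1])        # ~18 m of drift after just one minute
```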

Duty cycling

XKCD #1872

GNSS receivers mounted on things that move around, like smartphones or tracking devices, typically face another limitation that has nothing to do with accuracy: battery usage! In order to save some power, these devices switch on the GNSS chipset periodically and keep it switched off most of the time. This is a good approach if you don’t need to know the location of your device very often, but on the other hand it dramatically affects the accuracy. GNSS chipsets take a while to come up with a good solution, because they need to collect several samples. Once they have done so, they will keep delivering good-quality results (unless you go into a building!). But if you periodically turn off the chipset, you lose the information you had gathered each time, and you can’t expect to get good results at all.

Differential GNSS processing

So far we have been talking about GNSS positioning with a single receiver. But combining the signals from multiple receivers can greatly improve the accuracy. Probably the most well-known differential technique is RTK (Real-Time Kinematics). A higher-grade GNSS receiver is still needed to collect high-quality measurements, but any other receiver in the vicinity can use its data to improve the accuracy of its own measurements. The “good” GNSS receivers can either be deployed manually, or the data can be taken from existing GNSS networks. There are GNSS stations all over the world, mostly from public administrations and research institutions studying seismology (Japan’s GEONET network, for instance, has more than 1300 stations in its territory). The only requirement for RTK to work is that the receiver gives access to its raw measurements.
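
The core trick behind RTK and other differential techniques is differencing: subtracting the base station’s measurements from the rover’s cancels the satellite clock error and most of the atmospheric delay (both receivers see almost the same atmosphere), and differencing again against a reference satellite cancels the receiver clock errors too. A minimal sketch of forming these “double differences” (the data structures are hypothetical):

```python
def double_differences(base_obs, rover_obs, ref_sat):
    """Form double-differenced observations between a base and a rover.

    base_obs, rover_obs: dicts {satellite_id: carrier phase in metres}.
    ref_sat: id of the reference satellite (usually the highest in the sky).
    Single differences (rover - base) cancel the satellite clock errors;
    differencing again against the reference satellite cancels the receiver
    clock errors, leaving mostly geometry plus the integer ambiguities.
    """
    common = sorted((set(base_obs) & set(rover_obs)) - {ref_sat})
    sd_ref = rover_obs[ref_sat] - base_obs[ref_sat]
    return {sat: (rover_obs[sat] - base_obs[sat]) - sd_ref for sat in common}
```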

GNSS stations tracked by Rokubun’s Positioning as a Service

Trends

As more and more things are going to move around autonomously in the future, such as drones, self-driving cars, tractors and robots, the need for precise global navigation solutions will increase. The typical tradeoff in GNSS used to be accuracy vs price. You could have a receiver giving you less than a centimeter of positioning error, but you had to pay more than 10.000 € for it. This meant that high-accuracy solutions weren’t scalable, since the receivers were sometimes more expensive than the vehicles they were mounted on. On the other hand, one could also pay 50 € for a GNSS chipset, but then you wouldn’t be able to get anything better than 5 m of error.

The popularization of Differential GNSS by companies such as Swift Navigation, Emlid and Rokubun is changing this. The trend is to have at least one high-grade base station in the area of interest, and then as many lower-grade rovers as needed.

The number of receivers supporting Differential GNSS is also increasing dramatically. For instance, until very recently, the GNSS receivers in smartphones were “black boxes” and the raw measurements couldn’t be accessed. But this changed in 2016 with the introduction of Android 7.0. Since then, developers can access the pseudo-range information and, in some cases, also the carrier phase. This is still a research field, but it opens up a lot of opportunities.

Another breakthrough in the GNSS sector has been the appearance of affordable dual-frequency chipsets, such as Broadcom’s BCM47755 and u-blox’s F9.

And finally, what about indoor positioning? Well, since Android 9.0 you can get an unprecedented 1 to 2 meters of error indoors using the IEEE 802.11mc Wi-Fi protocol.
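
The principle behind 802.11mc (Wi-Fi Round-Trip Time, or Fine Timing Measurement) is still time of flight, only with an access point playing the role of the satellite: the phone measures the round-trip time of a packet exchange and converts it into a distance. The Android API for this is Java/Kotlin; the sketch below only shows the underlying arithmetic, with made-up timing values:

```python
C = 299_792_458.0   # speed of light in m/s

def rtt_distance_m(round_trip_s, ap_turnaround_s):
    """Distance to an access point from a measured round-trip time.

    round_trip_s: total measured round-trip time (seconds).
    ap_turnaround_s: processing time reported by the access point (seconds).
    """
    return C * (round_trip_s - ap_turnaround_s) / 2.0

# 100 ns of round trip minus 33 ns of turnaround -> roughly 10 m
print(rtt_distance_m(100e-9, 33e-9))
```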
