The State of Sonars & “Seeing” Subterranean 🔊

Going deep on sonar systems.

Anthony DiMare
Bedrock Ocean Exploration
10 min read · Oct 17, 2019

--

This piece goes into how robots can "see" underwater over large distances.

I did my best to balance the underlying physics with easy-to-understand language (hopefully!). You shouldn't need an engineering or physics degree to understand how this often-misunderstood technology works, so I've distilled it down while keeping it informative.

Water is a tricky medium.

Beyond the crushing pressures at depth, water naturally absorbs energy.
Radio waves and light (including lasers) get eaten up within about 10–30 m of propagation through water. This means most of the sensor packages used to understand where a robotic vehicle is and what's around it on land or in the air (LIDAR, cameras, radar, GPS, radio comms, etc.) are rendered useless below the surface of the ocean.

Ironically, because most earth-observation platforms are satellite-based and rely on exactly the sensors listed above (none of which penetrate water), humans haven't been able to fully uncover the secrets of the deep.

The good news: while sound doesn't travel particularly far through air, it propagates remarkably well through water.

So, sound waves are the physics principle behind the baseline sensor technology used to “see” underwater — we call this sonar.

Sonar outputs

Different types of sonar output different types of data (not surprisingly). Before digging into what makes each sonar unique, we should cover the kinds of data we can collect from them to build an understanding of the ocean depths.

Imaging

This is exactly what it sounds like — the output is a continuous image mosaic that can be stitched together as a sonar unit is propelled through the water. Think of having your phone camera on a continuous wide-angle pano-mode.

[Video: imaging sonar output]

A note on frequency & imaging
Higher-frequency sound produces crisper images, but its range is limited.
Lower-frequency sound can travel much farther, but at the expense of resolution.
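
To put very rough numbers on that trade-off, here's a minimal Python sketch using Thorp's empirical formula for seawater absorption. The formula is a textbook approximation (it gets loose at the high frequencies imaging sonars use), and the frequencies and distance are illustrative picks of mine, but the shape of the trade-off comes through:

```python
# Rough sketch of the frequency vs. range trade-off using Thorp's empirical
# formula for seawater absorption. Frequencies and the one-way distance
# below are just illustrative picks.

def thorp_absorption_db_per_km(f_khz: float) -> float:
    """Absorption coefficient in dB/km for a frequency in kHz (Thorp's formula)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

if __name__ == "__main__":
    one_way_km = 1.0  # illustrative 1 km path
    for f_khz in (10, 100, 400, 900):  # low to high survey-sonar frequencies
        loss_db = thorp_absorption_db_per_km(f_khz) * one_way_km
        print(f"{f_khz:>4} kHz -> ~{loss_db:6.1f} dB absorbed over {one_way_km} km")
```

Running it shows absorption climbing from a dB or so per kilometer at 10 kHz to hundreds of dB per kilometer near 1 MHz, which is why high-frequency imaging sonars have to work close to their targets.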

CHIRP is a technique in which the sonar transducer (the emitter of sound) sweeps each pulse across a band of frequencies, up and down the spectrum, to build high-resolution, longer-range imaging. It also helps compensate for, and differentiate between, objects that reflect some frequencies better than others. While the technique dates to the 1950s, it was introduced in a broader commercial way once processing power got cheap in the early 2000s.
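
As a toy illustration of what that frequency sweep buys you, here's a sketch of the pulse-compression idea behind CHIRP: transmit a swept pulse, then correlate the noisy echo against it. Every number here (band, pulse length, delay, noise level) is made up for demonstration, not taken from any real system:

```python
# Minimal sketch of CHIRP pulse compression: sweep a pulse across a band of
# frequencies, then correlate the received echo with the transmitted sweep.
import numpy as np
from scipy.signal import chirp

fs = 100_000                 # sample rate, Hz (illustrative)
n = 1_000                    # 10 ms transmit pulse at this sample rate
t = np.arange(n) / fs
tx = chirp(t, f0=5_000, t1=t[-1], f1=20_000, method="linear")  # 5-20 kHz sweep

# Fake a received signal: a weak echo after some delay, buried in noise.
delay_samples = 2_500
rx = np.zeros(10_000)
rx[delay_samples:delay_samples + n] += 0.2 * tx
rx += 0.1 * np.random.randn(rx.size)

# Matched filter: correlate the received signal with the transmitted sweep.
# The correlation peaks sharply at the echo delay even though the echo
# itself is long and noisy. That sharpening is the "compression".
compressed = np.correlate(rx, tx, mode="valid")
print("estimated echo delay (samples):", int(np.argmax(np.abs(compressed))))
```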

Bathymetric & Ranging

Bathymetry is the underwater version of topography. It's a ranged point cloud generated over an area, used to create an interpretive (and hopefully very accurate) geometric surface that approximates what the seafloor actually looks like.

Ranging is used to understand how far away objects are. The core principle is the same as bathymetry, but applied more generally than just seafloor topography.
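
At its core, this is a two-way travel-time calculation. A minimal sketch, assuming a nominal 1500 m/s speed of sound (real systems correct for the actual sound-velocity profile of the water column):

```python
# The core ranging/bathymetry calculation: time the echo, multiply by the
# speed of sound in water, and halve it (the pulse travels out and back).
SPEED_OF_SOUND_MPS = 1500.0  # nominal speed of sound in seawater, m/s

def range_from_echo(two_way_travel_time_s: float,
                    sound_speed_mps: float = SPEED_OF_SOUND_MPS) -> float:
    """Distance to the target in metres from a two-way echo travel time."""
    return sound_speed_mps * two_way_travel_time_s / 2.0

# Example: an echo that returns 0.4 s after the ping left the transducer
print(range_from_echo(0.4))  # -> 300.0 m of water under the sensor
```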

Backscatter

Backscatter is a measure of the "reflectivity" of the ocean floor. The roughness and hardness (or acoustic impedance) of the seafloor are the two key parameters, and they vary with geological and biological characteristics. Think about the difference between sound hitting loose mud versus a slab of granite: one would expect sound to reflect better off the rock. Because you're measuring how "reflective" the seafloor is, this measurement can be used to classify what's actually on the bottom: sand, mud, bedrock, metal, wood, etc. It's extremely useful (and often commercially important) to augment imaging and bathymetry with this data.

It's often visualized as an image and can look a lot like a sonar image itself. However, what you're seeing is differences in ocean-floor composition.

Sub-bottom profiling

This is the equivalent of using sound as an x-ray to "see" the geophysical layers that make up the seafloor. What you end up with is a layer map of the seabed. The more powerful your sonar, the deeper underground you can see. You can combine these "slices" of the ground to create 3D interpretations of what's below the seafloor without actually taking coring samples (see below).

It can be used to find objects buried in silt or, at the other extreme, to find oil pockets in the bedrock.
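
The same travel-time arithmetic from the ranging section applies below the seabed, except sound moves faster through sediment than through water. A rough sketch, with the sediment velocity an assumed illustrative value rather than anything measured:

```python
# Sub-bottom profiling uses the same two-way travel-time idea, but the
# returns of interest bounce off layer boundaries below the seafloor,
# where sound travels faster than in water. The 1650 m/s sediment velocity
# is an assumed illustrative value; real interpretation ties velocities to
# core samples or velocity analysis.
WATER_VELOCITY_MPS = 1500.0
SEDIMENT_VELOCITY_MPS = 1650.0  # assumed; varies a lot with sediment type

def layer_depth_below_seafloor(seafloor_echo_s: float, layer_echo_s: float) -> float:
    """Approximate depth of a reflector below the seafloor, in metres."""
    extra_two_way_time = layer_echo_s - seafloor_echo_s
    return SEDIMENT_VELOCITY_MPS * extra_two_way_time / 2.0

# Example: seafloor echo at 0.40 s, a buried reflector echoes at 0.46 s
print(layer_depth_below_seafloor(0.40, 0.46))  # -> ~49.5 m into the sediment
```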

Types of Sonars

Since it's all just sound, why can't every sonar "do it all"? This is by far one of the most misunderstood aspects of sonar, and reasonably so. It all comes down to the beam emitted from the transducers, the properties of that beam, and the processing done after the receiver picks up the reflected sound signal.

Beam properties are generally: beam shape, number of beams, orientation, frequency (or frequencies), phase of the signal, and processing technique. In the interest of being brief-ish, we'll dig into the main strengths and weaknesses of the main types of sonars used for hydrographic and geophysical surveying.
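
One way to hold these properties in your head is as a small configuration record per sonar type. This is purely my own toy representation, not any vendor's or standard's schema:

```python
# A toy data model of the beam properties listed above. Field names and the
# example values are my own shorthand, not an industry standard.
from dataclasses import dataclass

@dataclass
class BeamConfig:
    shape: str                    # e.g. "narrow pencil" or "wide fan"
    num_beams: int                # 1, 2, or hundreds
    orientation: str              # e.g. "downward", "out to the sides", "rotating"
    frequencies_khz: list[float]  # a single tone, or a swept band for CHIRP
    uses_phase: bool              # phase-comparison / interferometric processing?
    processing: str               # e.g. "intensity over time", "beamforming", "SAS"

# e.g. a single-beam echo sounder, roughly:
sbes = BeamConfig(shape="narrow pencil", num_beams=1, orientation="straight down",
                  frequencies_khz=[200.0], uses_phase=False,
                  processing="two-way travel time to range")
```

Each sonar type below is, at heart, a different choice of these values plus different post-processing.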

Side-scan sonar (SSS)

Dataset: Imaging
Beam pattern: 2 very wide, fan-like beams directed out to either side, covering angles approaching 120°–160°.

Characteristics: SSS is the premier large-swath imaging sonar. It measures the intensity of the return pulse over time, and it can take very high-resolution images over extremely large areas. While you can't see color, it's incredibly useful for spotting objects because of the characteristic shadows that appear in the images it produces (as demonstrated above with the shadow behind the rock, and in the brown ship image further above).

A downside is the lack of imaging information directly below the device holding the two side-scan transducers. This gap is usually filled in with another, downward-facing sonar like an MBES.

Resolution is generally a function of the altitude of the SSS above the seafloor. The higher above, the worse the resolution but the larger the imaged swath; the closer to the seafloor, the better the resolution but the smaller the imaged area. As you can imagine, this becomes challenging to balance in varying terrain.
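
Those characteristic shadows are also quantitatively useful: with simple similar-triangles geometry (and a flat-seafloor assumption), the shadow length gives an estimate of how tall an object is. A rough sketch with made-up numbers:

```python
# A classic side-scan trick: estimate an object's height from the length of
# the acoustic shadow it casts. Simple similar-triangles geometry over an
# assumed flat seafloor; real analysts also account for slant-range
# distortion and towfish motion.

def target_height(altitude_m: float,
                  shadow_length_m: float,
                  range_to_shadow_end_m: float) -> float:
    """Estimate target height from its shadow on a flat seafloor.

    altitude_m            : sonar height above the seafloor
    shadow_length_m       : along-range length of the shadow
    range_to_shadow_end_m : horizontal range from the sonar's track (nadir)
                            to the far end of the shadow
    """
    return altitude_m * shadow_length_m / range_to_shadow_end_m

# Example: 20 m altitude, a 6 m shadow ending 80 m out -> ~1.5 m tall object
print(target_height(20.0, 6.0, 80.0))
```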

Single-beam echo sounders (SBES)

Datasets: Ranging, Bathymetry
Beam pattern: A single focused, narrow beam (0.5–1°). (See the right side of the picture below.)

Characteristics: Because the ensonified area (the area hit with sound by a sonar) from a single narrow beam is very small, it's used as a cheap way to determine general under-vehicle clearance. It can also be used for obstacle avoidance.

Scanning sonar

Dataset: Ranging, Imaging
Beam pattern: A single beam that is very narrow horizontally (0.5–2°) but wide vertically (20–60+°), creating a fan-like pattern. It's then rotated around a vertical axis.

Characteristics: We're all familiar with this type of sonar from the stereotypical submarine control-room shots in Navy war films, when a torpedo is inevitably fired at the main character's vessel. It creates a 360° map of hard objects around the axis of the rotating sonar. It can also be used as an imaging sonar, essentially a side-scan-style beam that rotates. The end data depends on the processing and the type of beam (fan or laser-like).
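
Building that familiar 360° "radar screen" view is, at heart, a polar-to-Cartesian conversion of (bearing, range) returns around the sonar head. A minimal sketch, with the bearings and ranges invented for illustration:

```python
# A scanning sonar naturally produces polar data: at each bearing of the
# rotating head you get ranges to whatever reflected the ping. Plotting the
# 360-degree view is a polar-to-Cartesian conversion around the sonar.
import math

def polar_to_xy(bearing_deg: float, range_m: float) -> tuple[float, float]:
    """Convert a (bearing, range) return to x/y metres, sonar at the origin.

    Bearing is measured clockwise from "ahead" (the usual compass convention).
    """
    bearing_rad = math.radians(bearing_deg)
    return (range_m * math.sin(bearing_rad),   # x: starboard
            range_m * math.cos(bearing_rad))   # y: forward

# Example sweep: three detections as the head rotates
for bearing, rng in [(0.0, 12.5), (90.0, 8.0), (225.0, 20.0)]:
    x, y = polar_to_xy(bearing, rng)
    print(f"bearing {bearing:5.1f} deg, range {rng:5.1f} m -> x={x:6.1f}, y={y:6.1f}")
```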

Multibeam echo sounders (MBES)

Datasets: Bathymetry, Backscatter, Imaging (poor at imaging)
Beam pattern: Up to several hundred very narrow beams configured in a fan-like pattern to create a wide ensonified swath. This means the data from each beam can be distinguished from the others, which has several benefits.

Characteristics: Think of this like LIDAR, with sound instead of lasers. This is the premier bathymetry sonar system. You get high-resolution point clouds of the seafloor over a wide swath (up to ~120°). You can also measure the reflectivity of each individual beam to get a backscatter image as well (amazing).
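
For intuition on how each beam becomes a sounding: the beam's known across-track angle plus the echo's travel time give a slant range, and basic trigonometry turns that into a depth and an across-track offset. The sketch below assumes straight rays and a nominal sound speed, ignoring the sound-velocity and vessel-motion corrections real systems apply:

```python
# Simplified multibeam sounding: each narrow beam leaves the array at a
# known across-track angle; timing its echo gives a slant range, and
# trigonometry converts that into depth and across-track position.
import math

SOUND_SPEED_MPS = 1500.0  # nominal

def beam_sounding(beam_angle_deg: float, two_way_time_s: float) -> tuple[float, float]:
    """Return (across_track_m, depth_m) for one beam, straight-ray assumption."""
    slant_range = SOUND_SPEED_MPS * two_way_time_s / 2.0
    angle = math.radians(beam_angle_deg)       # 0 deg = straight down
    return slant_range * math.sin(angle), slant_range * math.cos(angle)

# A few beams across a 120-degree swath over ~75 m of flat bottom
for angle_deg, t in [(-60.0, 0.20), (0.0, 0.10), (60.0, 0.20)]:
    across, depth = beam_sounding(angle_deg, t)
    print(f"beam {angle_deg:6.1f} deg -> {across:7.1f} m across-track, {depth:6.1f} m deep")
```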

Interestingly, there's a new field of post-processing for multibeam systems where the noise within the raw multibeam data is used for water-column profiling: measuring the layers of different water temperature (which change the speed of sound through water) and the biomass between the sensor and the ocean floor. As post-processing continues to get cheaper and trickles into the workflows of teams that use this data, I expect a plethora of new information about the water column to emerge from good-quality data that has already been collected.

Sub-bottom profilers (SBP)

Dataset: Sub-Bottom Profiling (shocker)
Beam pattern: A single, very concentrated narrow beam (much the same as an SBES).
Characteristics: It creates a single profile line of the layering under the seafloor. There is no swath (area) coverage for SBPs without spreading multiple SBPs over a horizontal distance perpendicular to the direction of travel. These transducers are often called "boomers" because the sound is usually very low frequency with a lot of power behind it, almost seismic in nature. The downside: these are very power-hungry sonars. If you want deeper profiles, you need more power behind the sound pulse. So if you want to put an SBP on a small AUV, you have to give up penetration depth, resolution, or sampling rate.

Synthetic Aperture Sonar (SAS)

Datasets: Imaging, Bathymetry, Backscatter
Beam pattern: See the image diagram on the right. Multiple wide sonar beams are emitted in pulses like an SSS, then digitally stitched together to gather a lot of information over each part of an ensonified grid.

Characteristics: This is the state of the art at the moment. Unfortunately, that means it's still exceedingly expensive. The benefits are clear: you get long-range, ultra-high-resolution images (cm-level resolution). Additional post-processing creates a deeper and more detailed understanding of any point from multiple sonar pulses, all from different angles. Just look at the type of images it can create:

😱😱😱 (image: Kraken Robotics MINSAS)

Because each processed point has multiple sources of information behind it, you can also use interferometry to get accurate-ish bathymetric and backscatter data as well. Now you're seriously upping the amount of processing needed, but the early results are pretty amazing, and it's not surprising that this will be the future of the space.

The kicker — like side scan, you still get “shadows” in the bathymetric & imaging data.
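
To see why the resolution claim is such a big deal: a conventional sonar's along-track resolution is set by its beamwidth, so it degrades linearly with range, while a synthetic aperture keeps a roughly constant along-track resolution of about half the physical array length, independent of range and frequency. The array length and frequency below are illustrative picks of mine, not real product specs:

```python
# Rough comparison of along-track resolution: real aperture degrades with
# range, synthetic aperture stays roughly constant at ~half the array length.
SOUND_SPEED_MPS = 1500.0

def real_aperture_resolution_m(range_m: float, freq_hz: float, array_len_m: float) -> float:
    """Approximate along-track resolution of a conventional (real-aperture) sonar."""
    wavelength = SOUND_SPEED_MPS / freq_hz
    return range_m * wavelength / array_len_m

def synthetic_aperture_resolution_m(array_len_m: float) -> float:
    """Theoretical SAS along-track resolution: about half the array length."""
    return array_len_m / 2.0

array_len = 0.3   # a 30 cm physical array (illustrative)
freq = 100_000    # 100 kHz (illustrative)
for rng in (50.0, 200.0, 500.0):
    print(f"range {rng:5.0f} m: real ~{real_aperture_resolution_m(rng, freq, array_len):5.2f} m, "
          f"SAS ~{synthetic_aperture_resolution_m(array_len):4.2f} m")
```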

Price

One of the big challenges with any subsea robotics endeavor is the cost of sonar technology. Traditionally, each unit was only expected to sell in the tens, and the customer was pretty much always a navy.

Even five years ago, depth-rated commercial or government-grade imaging or bathymetry sonars were priced well over $100k each, and that's before you even have a vehicle. Because of this, most sonar systems were designed for military and government-only customers, expecting units to be sold in the hundreds. They used their own electronics, they didn't leverage the processing power newer chips gave them access to, and, most interestingly, they didn't see a need to miniaturize the total package.

However, in 2016 we saw a change: a drastic price and size reduction in a select few sonars. Sub-$30k imaging. Sub-$50k bathymetry. These companies took a bet that the space would begin to miniaturize. It paid off.

In early 2017, we saw the introduction of smaller-than-ever μUUVs (micro unmanned underwater vehicles), which became commercialized and buyable in early/mid-2018. These small UUVs had much smaller overall form factors and therefore much smaller power budgets, because the batteries were much smaller as well. They needed sonar systems that could operate with a very low power draw and fit within an 8-inch-diameter, torpedo-shaped robot.

So in 2018 & 2019 we saw sonar systems that could be integrated directly with these μUUVs. They dropped prices, expecting μUUVs to increase the number of units they could sell (which did happen).

As of late 2019, we can get commercial-grade SSS imaging for $5k, an SBES for $300, an MBES for under $20k, and an SBP for under $40k.

Unfortunately, the lower end of the SAS market is still around $130k per unit, and the systems are not yet small enough for small AUVs. I expect they will see a reduction in both price and size over the next few years.

It's easy to see where this trend is heading, though. We intend to ride this new wave.

That's roughly a 10x reduction in cost and a 4x (sometimes more) reduction in size for the same output specs since 2010.

Some conclusions

We are finally entering an age of affordable sonar sensing, with reasonable quality and specs to boot. This was a critical milestone for opening up the production of large robotic systems for ocean exploration (think Planet Labs for the ocean floor).

While no single sonar gives enough information to "see the whole picture", they're now cheap enough that you could reasonably put several different types on one vehicle and not go out of business if you lost it.

This means we can now begin to shine a much-needed light on the part of our world that’s pitch black and completely unknown.

I don’t know about you — but I’m excited to see what we’re going to find!

--

Anthony DiMare
Bedrock Ocean Exploration

Building Bedrock — CEO & Co-founder. Co-founder of Nautilus Labs.