Here’s how we mapped the ocean. Why so little?

With today’s mapping systems, it will take us 970 ship years. We need other options. (Part 2 of 3)

Anthony DiMare
Bedrock Ocean Exploration
13 min read · Nov 20, 2019

--

If New York City were at the bottom of the ocean, chances are we wouldn’t even know it was there. That’s because only ~15% of the ocean has been mapped at a high enough resolution to detect objects smaller than a few kilometers wide.

Without a clear understanding of what our world is made of, we lack the information we need to predict changing weather patterns, hurricanes, and tsunamis. We’re unable to give scientists the information they need to understand the fundamental forces behind continued human existence on the planet. We’re missing the secrets that lie at the bottom of the ocean.

GEBCO — what 85% of the ocean looks like today (left) versus what it looks like when mapped with modern sensors (right). Manhattan is 3.7km wide (at its widest point) and 21.6km long. The Grand Canyon, for context, averages 16km wide.

How can this be?

First, we need to understand what we use to map (sensors) and how we get them around the ocean (vehicles). These are grouped into missions called subsea surveys, which are programmatic ways to bring specific sensors over specific areas of the ocean.

Simply put, how we map the oceans is just: [sensor(s)] + [vehicle path] = [georeferenced data].
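That equation can be sketched in a few lines of code. This is a hand-rolled illustration, assuming a flat local coordinate frame and a single sonar beam; the function name and the geometry simplifications are mine, not from any survey package:

```python
import math

def georeference_ping(vehicle_x, vehicle_y, vehicle_depth, heading_deg,
                      beam_angle_deg, slant_range):
    """Turn one sonar return into a georeferenced seabed point.

    vehicle_x / vehicle_y: vehicle position in a local metric frame (m)
    heading_deg: vehicle heading, degrees clockwise from +y (north)
    beam_angle_deg: beam angle from vertical, positive to starboard
    slant_range: measured acoustic range to the seabed (m)
    """
    # Split the slant range into across-track and vertical components.
    horizontal = slant_range * math.sin(math.radians(beam_angle_deg))
    vertical = slant_range * math.cos(math.radians(beam_angle_deg))

    # Rotate the across-track offset into the local frame.
    heading = math.radians(heading_deg)
    x = vehicle_x + horizontal * math.cos(heading)
    y = vehicle_y - horizontal * math.sin(heading)

    return x, y, vehicle_depth + vertical  # (easting, northing, depth)
```

Real pipelines add roll/pitch compensation, sound-velocity corrections, and projection into a geodetic datum, but the core idea is the same: vehicle pose plus sensor measurement yields a georeferenced point.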

Of course, trying to get this to work cheaply and scalably has been the problem. Different vehicles and sensors have different advantages and disadvantages for specific types of data and the required quality/resolution of that data.

The sensors that matter underwater

First, let’s look at the sonars we care about:

  • Side Scan Sonar (SSS) — gives you subsea imaging. Generally must be close to the bottom for good data collection.
  • Multibeam Echo Sounders (MBES) — Think of this like LIDAR, but with sound beams instead of lasers.
    The closer to the bottom the sensor is: the better the resolution, the smaller the swath (detectable area), the cheaper the sensor.
    The farther from the bottom, the worse the resolution, the larger the swath, and the more expensive the sensor is.
  • Sub-bottom Profiler (SBP) — Like X-ray for the geological layers below the seafloor.
    The closer to the bottom, the better the resolution, the closer a receiver (microphone) can be behind the seismic boomer or transducer (a very powerful ceramic speaker), and the cheaper the sensor suite is.
    The farther from the bottom, the worse the resolution, the farther a receiver has to be from the sonar transducer.
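The altitude tradeoff in the MBES bullet above is just trigonometry: swath width grows linearly with height off the bottom. A minimal sketch, assuming a flat seafloor and a 120° total beam angle (an illustrative figure, not one from this article):

```python
import math

def mbes_swath_width(altitude_m, total_beam_angle_deg=120.0):
    """Approximate flat-seafloor swath width for a multibeam echo sounder.

    Half-width = altitude * tan(total_angle / 2), so swath scales
    linearly with how far the sensor is above the bottom.
    """
    half_angle = math.radians(total_beam_angle_deg / 2)
    return 2 * altitude_m * math.tan(half_angle)

# Closer to the bottom: smaller swath, better resolution.
print(round(mbes_swath_width(50)))    # AUV flying 50 m off the bottom → 173
print(round(mbes_swath_width(4000)))  # surface ship over deep ocean → 13856
```

The deep-ocean swath is huge, which is exactly why surface vessels look efficient offshore and painfully slow in shallow water.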

I’ve written more about the specifics of these sonars and go into much more detail in this post on how we use sonars to “see” and understand the underwater world.

All the other relevant sensors:

  • Conductivity, Temperature, & Depth (CTD) — Required for the sound velocity profiling needed to correct for sonar waves traveling through a water column with many different temperature gradients. Sound travels at different speeds in different water temperatures, so the farther this sensor is from the seafloor, the less accurate your sonar processing corrections will be, and the less accurate your sonar measurements. This is typically an issue for surface vessels, which often have to probe the whole water column with a CTD to collect any reliable data from their sonars.
  • Magnetometer — Collects magnetic signatures over an area.
    To get good data, it has to be close to the bottom (< 100m).
  • Cameras — For capturing color images and video.
    They require you to be < 30m away, and anything deeper than 30m requires energy-hungry lighting (it’s pitch black once you go more than 30m deep). That’s a significant reason cameras aren’t widely used in seafloor mapping, but they’re great for verifying what’s actually there once you already have a hunch from your other sensors.
  • LIDAR — For capturing point cloud data. It has a limited range of only 20–30m so you have to be extremely close to the bottom, and therefore is rarely used unless you have a specific item you’re trying to get extremely high-resolution point clouds of.
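To make the CTD point concrete: sound speed in seawater varies with temperature, salinity, and depth, so ranges computed with a single assumed speed drift off. A minimal sketch using Medwin’s simplified empirical equation (the example temperatures and depths below are illustrative):

```python
def sound_speed(temp_c, salinity_ppt=35.0, depth_m=0.0):
    """Medwin's simplified equation for sound speed in seawater (m/s).

    Valid roughly for 0-35 C, 0-45 ppt salinity, 0-1000 m depth.
    """
    t = temp_c
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (salinity_ppt - 35.0)
            + 0.016 * depth_m)

# A warm surface layer vs. cold water at depth differs by tens of m/s,
# which is why ranging errors pile up without a measured velocity profile.
warm = sound_speed(20.0, depth_m=5.0)
cold = sound_speed(4.0, depth_m=800.0)
```

Sonar processing integrates a full profile of these speeds over the water column, which is exactly what the CTD cast provides.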

Between all these different types of sensors, we can get a pretty comprehensive amount of information useful for all different types of maps and data products. Part 3 of this series is going to go into what kinds of maps we can build with the data collected from these sensors — it’s quite remarkable.

Surface Vessels vs. Towfish vs. Autonomous Underwater Vehicles (AUVs)

Now for the different ways we get these sensors to different parts of the ocean.

The largest factors to consider for each of these vehicles are:
1. Where each can physically get sensors in relation to the seafloor.
2. The maximum resolution it can achieve given its distance from the bottom.
3. The cost (price $$ and energy draw) of owning and operating a sensor with a specific type of performance in that particular sea geography.

Surface Vessel Surveys

Surface vessels stay primarily on the surface of the ocean or are very near-surface (<5m). Most ocean surveys have been, and are still done, via surface vessels.

The infrastructure required for surface vessels — the sensors, the shipboard systems, the crew — is expensive, but has historically been necessary because we’ve lacked truly autonomous solutions for most of the history of ocean surveying. So let’s first dig into manned vessels and then into some of the autonomous surface solutions.

Manned Surface Vessels (Research Vessels)
Most of the past ocean surveys have been, and are still done, via manned surface vessels.

NOAA Survey Vessel — 1 of 4 in the research fleet.

It takes a large surface vessel serious transit time to get to many locations of interest. You can be the largest most respected survey company in the world, but if you don’t have a ship anywhere close to where it’s needed, you either have to sail it there or hire out the job to someone local.

There was also traditionally a lack of connectivity in many parts of the ocean. Even now, sending terabytes of data over satellite is neither practical nor economical. So people do most of the data processing and QA on the vessel, close to the job at sea instead of back onshore — which makes sense: if they got bad data, they could easily sweep back around and recollect, rather than get back to shore and realize there was an issue with the survey. What this means is that the majority of available solutions are built to operate off-grid at sea, and do not leverage the vast computing power offered by being both cloud- and locally enabled.

However, as edge compute capabilities have exploded, there’s a new opportunity to eliminate the need for a big ship to house a lot of people (40–80) and a lot of specialized, expensive marine-rated equipment.

Generally, surface-only solutions have a few clear advantages and disadvantages. The pros: they have the largest possible swath areas, so you can collect more data in a single pass, and you can get a whole team and its equipment to an area. The cons: they cannot operate in bad weather (too much rocking), and you can’t operate at night.

Not to mention, the cost of operating and maintaining a ship that has to house a crew is immense. This does not scale well.

In the case of a classic research vessel outfitted with an MBES survey sonar (no sensors in tow), we’re also fairly limited in what data we can collect, because sensors that work at full ocean depth are few and ultra-expensive. Primarily, you see only an MBES, and you still have to probe the water column with a CTD every day to get the sound velocity profile for sonar processing corrections later.

If you’re doing anything besides shallow water surveys, the resolution is limited. You can’t get imaging, you can’t get sub-bottom, and you can’t get magnetics.

This existing infrastructure has anchored subsea surveys as quite expensive operations to run.

Autonomous Surface Vessels (ASVs)
To help address the economics, we’ve begun to forgo the need for people on the surface. This drops the cost. However, you’re still stuck with the same physical restriction of being on the surface. Sensors are expensive and power-hungry. You’re also very limited by weather — usually much more so than with larger manned surface ships, because the platforms just aren’t as big and therefore aren’t as stable.

That being said, it’s a much better way to run a surface operation if the seas are cooperating.

Because of the restrictions above, we’re only seeing these near shore, in generally geographically protected areas. However, it’s illegal in many waterways to have an ASV without an operator. So there’s some contention with local authorities on how these will or will not pan out.

Let’s assume most of the legal issues do get solved (as I suspect they will).

In the case of fleets or swarms of ASVs all working together to accomplish a bigger mission — think of Saildrone, or what XOcean is doing, or someone like Liquid Robotics — they dramatically improve the operational costs associated with surface-based surveys.

Saildrone — building survey infrastructure into some vehicles at the expense of other sensors.
Liquid Robotics — Ironically, the instability of the waves is what moves their platform.

Saildrone and Liquid Robotics have solved some of the endurance issues because they use nature to move. However, this makes adhering to discrete survey patterns harder to guarantee. In Saildrone’s case, mapping sonars are more power-hungry than the sensor payloads the vehicles were originally designed for, so they have inherently limited ranges compared to what the vehicles were built to do.

XOcean decided to use fuel, which gives them navigation freedom. But when they hit empty (13 days max) they have to refuel somewhere. So it’s either a larger mothership with a crane or back to shore.

That said, without massively stabilized platforms, all of their MBES data is restricted to good weather days when the seas are relatively tame.

This restricts surveys to certain times of the year when the weather is generally predicted to be good. And in the deep sea, they aren’t able to tow sensors close to the bottom.

They all will miss out on high-quality imaging, sub-bottom profiling, and magnetics. So once again, we aren’t getting a full picture here either.

Surface Vessel Survey — w/ Tow Fish

This is a blend of surface and subsea data collection techniques, and it still requires a large manned surface vessel. Because some sensors need to be closer to the seafloor, a vessel will tow one or more sensor arrays, based on the needs of the specific sensors. These are called towfish.

Kraken Robotics — Katfish 180

None of the ASV platforms can use tow fish due to the power it takes to drag something that deep in the ocean.

Here’s a sense of what it would take for a full sensor suite from the surface, using tow fishes:

NOAA — what a full sensor suite looks like with a tow fish setup.

It solves many of the sensor restrictions of having a vessel only at the surface, without having to invest in AUVs. However, it adds several layers of mechanical complexity to the ship (prone to failure and to lines getting caught in netting). It is also very expensive. And because the control and steering of these towfish are dictated by the speed and stability of the ship at the surface, steady seas are again required for good-quality data collection below.

You end up with a bunch of different towed devices to get the full sensor suite, because they all have different requirements. You can do it, but it’s expensive — it doesn’t scale well and requires a large team at the surface to manage the devices below. The towfish was the evolution of the surface-only vessel, an attempt to get more data from the same pass.

Surface Vessel — w/ AUVs for Survey

This was the logical next step: launching capable AUVs eliminates the need for tethers dragging behind the ship. Essentially the ship becomes mission control — a hub for many AUVs, which still use the manned ship as a place to recharge and dump data.

Ocean Infinity — the launch of a HUGIN AUV. Imagine if there’s 10ft+ seas. It becomes a barrier.

The only company that has managed to operationalize this is Ocean Infinity.

And the only AUVs reliable enough off-the-shelf for this right now are Kongsberg HUGINs. They’re about $6M each, originally built for military purposes in 1997. The platform largely hasn’t changed since then.

This solves all the problems of getting the right sensors close to the bottom, but it doesn’t solve cost and scalability. While more scalable than just a ship with a towfish, it requires a ton of manpower and capital to operate and maintain.

As with the surface surveys above, you can’t launch and recover the AUVs when there’s bad weather and unstable seas.

This is a serious improvement — but absurdly expensive.

Ocean Infinity — AUV bay in the back of a large research vessel.

Subsea Survey — miniaturized UUVs/AUVs

As I said above — the only reliable survey platform in the AUV space seems to be the Kongsberg HUGINs and they require a surface support vessel. Other vehicle manufacturers do make smaller AUVs: Hydroid (owned by Kongsberg), OceanServer (owned by L3), Bluefin Robotics (owned by General Dynamics), Gavia (owned by Teledyne). Notice the defense company trend there.

These AUV platforms were all created in the early 2000s. While cheaper than the HUGINs, they are still very expensive, and their sensors were not of comparable quality to what a HUGIN or towfish could carry.

In 2017/2018, something happened in the AUV space — the effects of which I believe will change the landscape of the subsea world forever.

Companies like Riptide (now owned by BAE Systems) and RTSys began making much smaller, cheaper AUV platforms, riding advances in miniaturized, energy-efficient compute and the battery and sensor technologies that followed those smaller form factors.

For the first time, you could get cheaper sensors closer to the bottom, getting better resolution for under $100k. It became possible to fit onboard all the sensors needed to build the whole subsea picture.

The issue preventing larger adoption is that the software onboard these smaller vehicles is not autonomous-complete. There is no infrastructure (hardware and software) built to launch and recover them autonomously. Because of this, they still require a surface ship to deploy and operate them, and the software used to control them isn’t nearly robust enough to scale into the hundreds.

How long is mapping going to take?

So here’s where things get interesting. If you assume the best way to do this globally is with manned surface vessels and high-powered, expensive MBES’s, the experts at GEBCO and the US Center for Coastal and Ocean Mapping (CCOM) say it will take 970 ship years with current systems (refueling and transit time not accounted for).

GEBCO Seabed 2030 — Percentages of the ocean at certain depths. 0–200m depth = ~7% of the ocean.

Of those 970 years, ~64%, or 620 years, will be required to map the shallow water between 0 and 200m deep. In other words, 64% of the projected survey time is required to cover just 7% of the ocean floor.
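A back-of-envelope model shows why shallow water dominates: MBES swath width scales roughly with water depth, so shallow surveys cover far less area per hour. Every constant below (swath multiplier, survey speed, average depths) is my own illustrative assumption, not GEBCO’s or CCOM’s model:

```python
OCEAN_AREA_KM2 = 361e6          # total ocean surface area
SWATH_MULTIPLIER = 4.0          # swath width ~ 4x water depth (rule of thumb)
SPEED_KM_H = 15.0               # ~8 knots survey speed
HOURS_PER_SHIP_YEAR = 24 * 365  # continuous operation, no transit/refuel

def ship_years(area_km2, depth_m):
    """Years of continuous surveying to cover an area at a given depth."""
    swath_km = SWATH_MULTIPLIER * depth_m / 1000.0
    coverage_rate = swath_km * SPEED_KM_H  # km^2 per hour
    return area_km2 / coverage_rate / HOURS_PER_SHIP_YEAR

# Shallow shelf (7% of the ocean, assume ~100 m average depth) vs.
# the deep ocean (93% of the area, assume ~3700 m average depth).
shallow = ship_years(0.07 * OCEAN_AREA_KM2, 100)
deep = ship_years(0.93 * OCEAN_AREA_KM2, 3700)
```

Even with these rough numbers, the thin shallow shelf eats roughly three-quarters of the total survey time, the same shape as the GEBCO/CCOM conclusion.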

All of this reflects our assumptions about today’s technology, unless… someone can change the scalability of the systems used.

Conclusions

To map the oceans, we can either increase the swath area from the surface — scale the number of surface vessels with expensive sensors — and miss out on troves of additional data we can collect if we’re closer to the bottom…

OR, we can increase the total number of cheap sensors in the ocean by building software to make managing many smaller, cheaper vehicles reliable, robust, and autonomous to collect a full picture of the seafloor along the way. The systems we’ve built to date to collect this information prohibit us from doing this. We need to increase the speed and total amount of data that we’re collecting and we need to make sure we’re doing it in an economically-scalable way.

People currently go to great lengths, and great cost, to collect information before working in the ocean. It’s usually done on specific one-off voyages or missions. It’s not efficient, it’s expensive, and a lot of time and effort is wasted getting information that only ends up siloed off from the world.

We’ve reached a point where software can be built to fully automate cheaper, smaller robots — and we have the smaller, cheaper, high-quality sensors to put on those robots.

I suspect the Saildrones of the world will own ocean-surface data collection. But inherent physical disadvantages will make it hard for them to collect and commercialize high-resolution, comprehensive seafloor data. The only way to do that at scale is with cheap, highly autonomous AUVs.

What happens when you have all of this data? You open up access to unparalleled amounts of information for every commercial, scientific, government, and military organization worldwide. The implications of this are enormous.

For more on what that will look like, stay tuned for Part 3.

To the depths and beyond.

This is Part 2 of 3 in a series written to explain the ocean mapping problem better.

Part 1 of 3 dug into what our current map of the ocean is actually made up of and how little we know about it.

Part 3 of 3 will dig into what a map of the ocean should (and could) include, and what it would mean for science, business, government, and militaries if we could collect it all.

If you found this interesting, please give it a clap or two and please, please please share. If you want this in your inbox, feel free to sign up here.



Anthony DiMare
Bedrock Ocean Exploration

Building Bedrock — CEO & Co-founder. Co-founder of Nautilus Labs.