LiDAR-based Orthorectification

Pramukta Kumar
4 min read · Apr 3, 2018

Okay, so there’s going to be some pretty substantial geonerding going on here. First of all, check out how much this picture of the exact same area moves around between collections. This is what you get by default when you use, for example, GBDX Notebooks.

Austin, TX. Images courtesy DigitalGlobe

DigitalGlobe’s satellites know their position and orientation ridiculously well, and their sensor models, even in approximate form, are eerily accurate. So the only reason for this is that the surface model used to orthorectify the images is just not very good. Let’s see if we can improve this.

LiDAR Adventures: z-Axis Coordinate Systems

Point cloud data can be created in a variety of ways. One of the coolest (and most expensive) is LiDAR, or Light Detection and Ranging. Using airborne LiDAR and some work, people can create detailed 3D maps of an environment (for example, this ancient Mayan city). Since this involves knowing distances in both horizontal and vertical directions, the next natural question is distance from what?

Enter Projections

Anyone working with geospatial data has had to deal with projections. While they sometimes elicit looks of disgust (particularly from neo-geographers), they are actually very practical ways to work with human-relevant distances around the (definitely not flat) earth. And while it may be a bit annoying to hear people quote some obscure EPSG code as if it is supposed to be meaningful to everyone, the remarkable level of standardization in specifying and discussing projections has led to a great set of tools (usually built around PROJ, thank you USGS/OSGeo) that let folks mostly forget the mathematical details and just get on with their day. For example, here is a little recipe that uses pyproj to reproject Shapely geometries.
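The recipe looks roughly like this (my own minimal sketch — the EPSG codes are my choices for illustration: 4326 is lon/lat WGS84, and 32614 is UTM zone 14N, which covers Austin):

```python
from pyproj import Transformer
from shapely.geometry import Point
from shapely.ops import transform

# Build a transformer from lon/lat WGS84 to UTM zone 14N.
# always_xy=True forces (lon, lat) / (easting, northing) axis order,
# regardless of what the EPSG definition says.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32614", always_xy=True)

# Reproject any Shapely geometry by handing the transformer's transform
# function to shapely.ops.transform.
austin = Point(-97.74, 30.27)
austin_utm = transform(transformer.transform, austin)
print(austin_utm.x, austin_utm.y)  # easting/northing in meters
```

The same `transform(transformer.transform, geom)` call works on polygons and lines, not just points, which is what makes it such a handy recipe.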

When working with point clouds we aren’t forcing a 3D surface into a 2D plane, so technically this isn’t necessary… except that in practice, distances of interest are in relation to the Earth’s surface, so boom, projections again. One of the catches with projections is that they are defined with respect to an ellipsoid and a datum. What this means is that they are coordinates along a specifically shaped object AND those coordinates change over time. Common standardized ellipsoids are WGS84 (which names both an ellipsoid and a datum) and GRS80; both are gravitationally derived ellipsoidal models of the earth as a whole. GRS80 is commonly paired with the NAD83 datum (made to be a pretty good set of corrections for North America).
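You can poke at these ellipsoid/datum pairings directly with pyproj (a quick sketch; EPSG:4326 is WGS84 lon/lat and EPSG:4269 is NAD83):

```python
from pyproj import CRS

# WGS84: the ellipsoid and the datum share a name.
wgs84 = CRS.from_epsg(4326)
print(wgs84.ellipsoid.name)  # "WGS 84"

# NAD83 is a datum defined on the GRS80 ellipsoid.
nad83 = CRS.from_epsg(4269)
print(nad83.ellipsoid.name)  # "GRS 1980"
print(nad83.datum.name)      # "North American Datum 1983"
```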

What is Sea Level, Anyway?

SRTM-derived DEMs (~90m resolution global elevation data) are typically (at least the ones I’ve seen) delivered as height measurements from the WGS84 ellipsoid. This being gravitationally derived, it doesn’t lend itself to, say, preventing a cargo ship from running aground, or figuring out if your city will flood. Also, how did they do this before we knew how to send satellites into orbit? (Actually super interesting, and important scientific history. If you are mathematically inclined, I’d recommend reading this book. It will be painful.)

Anyway, the point is, the LiDAR data I’ve seen is measured with respect to a datum called NAVD88 with a particular geoid (2012a), which refers to a shape corresponding to mean sea level in the vicinity, currently derived from sea level measurements averaged over 19 years. So when I go to OpenTopography and grab a dataset for Austin, I need to figure out how to convert it to a reference frame based on WGS84 or maybe NAD83/GRS80 before I can use it in combination with the data and models I’m used to. And the tooling is not as slick (maybe PDAL can do it, I couldn’t tell).

Luckily, the difference between NAVD88 and WGS84 doesn’t change a whole lot over localized areas (where we are likely to have point clouds), so adding a constant is usually a good approximation. NOAA gives us online and offline tools to figure out what this constant is.
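In code, that approximation is just a constant shift of the z column (a sketch; the -26.3 m offset below is a made-up placeholder — look up the real value for your area with NOAA’s tools):

```python
import numpy as np

# Point cloud as an (N, 3) array of x, y, z with z in NAVD88 meters.
points = np.array([
    [621000.0, 3349000.0, 150.2],
    [621010.0, 3349005.0, 151.7],
])

# Approximate geoid offset for the area of interest, in meters.
# PLACEHOLDER value -- get the real number from NOAA's tools.
GEOID_OFFSET = -26.3

# NAVD88 orthometric height + geoid offset ≈ WGS84 ellipsoidal height.
points[:, 2] += GEOID_OFFSET
print(points[:, 2])
```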

All Together Now

Now, using the freely available LiDAR point cloud data, we can construct a DSM in the right horizontal projection as well as the right vertical reference. When we orthorectify DigitalGlobe imagery using this as the elevation dataset, we get the following result. And I haven’t even done a NAD83/GRS80 to WGS84 correction, so this should get better.

Austin, TX again. Images courtesy DigitalGlobe

How about them apples.
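For the curious, the DSM construction step can be sketched as a max-binning operation (my own minimal version, assuming the points are already in the target projection; real tooling like PDAL handles holes and interpolation far better):

```python
import numpy as np

def points_to_dsm(points, cell_size):
    """Rasterize (N, 3) x/y/z points into a grid of max-z per cell.

    Taking the max per cell gives a surface model (DSM): treetops and
    rooftops win over ground returns. Empty cells come back as NaN.
    """
    xy_min = points[:, :2].min(axis=0)
    cols = np.floor((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    rows = np.floor((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), -np.inf)
    # Unbuffered max-accumulate: handles many points landing in one cell.
    np.maximum.at(grid, (rows, cols), points[:, 2])
    grid[np.isinf(grid)] = np.nan
    return grid

# Toy cloud: a "rooftop" return towering over ground returns.
pts = np.array([[0.0, 0.0, 10.0], [0.5, 0.5, 12.0], [1.5, 0.2, 10.1]])
print(points_to_dsm(pts, cell_size=1.0))  # [[12.  10.1]]
```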
