
Computer Vision With OpenStreetMap and SpaceNet — A Comparison

Adam Van Etten
Sep 9, 2019 · 5 min read

Now that the SpaceNet 5 dataset has been released and the challenge is live on Topcoder, we anticipate that this challenge will yield a great many insights into how well computer vision can automatically extract road networks and travel time estimates.

In support of the SpaceNet 5 challenge, this post motivates the utility of this new dataset and explores some of the capabilities that the SpaceNet challenges have helped inspire. In areas such as automated road network extraction, we demonstrate that these capabilities compare favorably to current state-of-the-art methods and may be able to contribute back to OpenStreetMap (OSM) by improving labels in difficult locales. See our arXiv paper for further details, and our previous post for our algorithmic approach.

1. OSM Data

2. SpaceNet Dataset

SpaceNet now incorporates ten cities, with multiple imagery types (e.g. panchromatic, multispectral, RGB) and attendant hand-labeled building footprints and road centerlines. The most recent SpaceNet 5 dataset adds four cities to the SpaceNet corpus, each with road centerlines and metadata tags (see the blog post from our partner Maxar for details on how the SpaceNet team created the dataset). This data is ideal for comparison with OSM.
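
To get a feel for the label format, below is a minimal sketch (assuming geopandas is installed, and using a hypothetical file name) of how one might inspect a SpaceNet road-centerline GeoJSON chip and its metadata tags; it is illustrative only, not part of the official SpaceNet tooling.

```python
# Minimal sketch: inspect a SpaceNet road-centerline label file with geopandas.
# The file name below is a hypothetical placeholder for a SpaceNet 5 label chip.
import geopandas as gpd

label_path = "SN5_roads_train_AOI_7_Moscow_chip0.geojson"  # hypothetical path
roads = gpd.read_file(label_path)

# Each row is one road-centerline segment: the geometry is a LineString and
# the metadata tags (e.g. road type, surface, estimated speed) sit in the
# remaining columns.
print("segments:", len(roads))
print("columns:", roads.columns.tolist())
print(roads.geometry.iloc[0])
```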

3. Comparison of SpaceNet-trained and OSM-trained Models

Figure 1: SpaceNet compared to OSM. Road predictions (yellow) and ground truth SpaceNet labels (blue) for a sample Las Vegas image chip. OSM model predictions (a) are slightly more offset from ground truth labels than SpaceNet model predictions (b).

4. Existing Algorithmic Approaches

5. Algorithmic Comparisons

5.1 Comparison on SpaceNet Data

5.2 Comparison with Google / OSM Data

Figure 2. Road inference over New York City. (a) RoadTracer prediction (ground truth OSM labels in gray, predictions in yellow). (b) Our CRESI predictions (yellow) over the same area.

Figure 2 demonstrates that the SpaceNet-inspired CRESI model does a significantly better job of extracting roads in previously unseen cities than existing methods. Below we display a few more examples of road inference on test cities, followed by a rough sketch of how such networks can be compared quantitatively.

Figure 3. Performance comparison between RoadTracer (left column, OSM labels in gray, predictions in yellow) and CRESI (right column, predictions in yellow) for various cities. From top: Denver, Vancouver, Pittsburgh.
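
As a purely illustrative aside (this is not the graph-based evaluation used for the formal comparison, and the file names below are hypothetical), one quick way to sanity-check such comparisons is to tally coarse statistics of the predicted and reference road networks for the same area:

```python
# Illustrative sketch only: compare coarse statistics of two road-network
# vector files, e.g. a model prediction and the OSM reference for one city.
# File names are hypothetical placeholders; this is not the formal metric.
import geopandas as gpd

def summarize_roads(path):
    roads = gpd.read_file(path)
    # Reproject to a metric CRS so segment lengths come out in meters;
    # EPSG:3857 is a crude but simple choice for an illustration.
    roads_m = roads.to_crs(epsg=3857)
    return {
        "segments": len(roads_m),
        "total_length_km": roads_m.geometry.length.sum() / 1000.0,
    }

for name in ["cresi_prediction_nyc.geojson", "osm_reference_nyc.geojson"]:
    print(name, summarize_roads(name))
```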

6. Conclusion


Thanks to Daniel Hogan and Jake Shermeyer
