Computer Vision With OpenStreetMap and SpaceNet — A Comparison

Adam Van Etten
The DownLinQ
Sep 9, 2019

Now that the SpaceNet 5 dataset has been released and the challenge is live on Topcoder, we anticipate that this challenge will yield a great many insights into how well computer vision can automatically extract road networks and estimate travel times.

In support of the SpaceNet 5 challenge, this post seeks to motivate the utility of this new dataset. We also explore some of the capabilities that the SpaceNet challenges have helped inspire. In areas such as automated road network extraction, we demonstrate that such capabilities compare favorably to current state-of-the-art methodologies and may be able to contribute back to OpenStreetMap (OSM) to improve labels in difficult locales. See our arXiv paper for further details, and our previous post for our algorithmic approach.

1. OSM Data

In many regions of the world, OpenStreetMap (OSM) road networks are remarkably complete. Yet in developing nations, OSM labels are often missing metadata tags (such as speed limit or number of lanes), or are poorly registered with overhead imagery (i.e., labels are offset from the coordinate system of the imagery). See Section 2 of our blog on road network extraction at scale for further details. An active community works hard to keep the road network up to date, but such tasks can be challenging and time-consuming in the face of large-scale disasters. For example, following Hurricane Maria, it took the Humanitarian OpenStreetMap Team (HOT) over two months to fully map Puerto Rico.
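For readers who want to gauge this metadata gap themselves, the minimal sketch below uses the osmnx package to pull the drivable road network for a city and report what fraction of road segments carry routing-relevant tags. The city name and tag list are illustrative assumptions, not part of the analysis in this post.

```python
# A minimal sketch (not from this post's analysis) of checking OSM metadata
# completeness with osmnx; the city and tags below are illustrative choices.
import osmnx as ox

# Download the drivable road network for a city of interest.
G = ox.graph_from_place("San Juan, Puerto Rico", network_type="drive")
edges = ox.graph_to_gdfs(G, nodes=False)

# Fraction of road segments carrying metadata tags useful for routing.
for tag in ["maxspeed", "lanes"]:
    coverage = edges[tag].notnull().mean() if tag in edges.columns else 0.0
    print(f"{tag}: {coverage:.1%} of edges tagged")
```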

2. SpaceNet Dataset

The frequent revisits of satellite imaging constellations may accelerate existing efforts to quickly update road network and routing information. A fully automated approach to road network extraction and travel time estimation from satellite imagery therefore warrants investigation. Such an investigation requires a large and well-labeled dataset, and this is exactly what the SpaceNet dataset aims to accomplish.

SpaceNet now incorporates 10 cities, with multiple imagery types (e.g., panchromatic, multispectral, RGB) and attendant hand-labeled building footprints and road centerlines. The most recent SpaceNet 5 release adds 4 cities to the corpus, with road centerlines and metadata tags for each (see the blog from our partner Maxar for details on how the SpaceNet team created the dataset). These data are ideal for comparison with OSM.
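As a concrete example of what the labels look like, the hedged snippet below inspects a road-centerline GeoJSON with geopandas. The file path, directory layout, and UTM zone are assumptions for illustration rather than the exact SpaceNet release structure.

```python
# A hedged sketch of inspecting road-centerline labels with geopandas;
# the path below is a hypothetical example, not the official file layout.
import geopandas as gpd

gdf = gpd.read_file("SN5_roads_train_AOI_7_Moscow/geojson_roads/sample.geojson")

print(gdf.crs)                           # coordinate reference system of the labels
print(gdf.geometry.geom_type.unique())   # centerlines stored as LineStrings
print(list(gdf.columns))                 # metadata fields attached to each segment

# Total labeled centerline length, reprojected to a metric CRS
# (UTM zone 37N is an assumption appropriate for Moscow).
print(gdf.to_crs(epsg=32637).length.sum(), "meters of labeled road")
```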

3. Comparison of SpaceNet-trained and OSM-trained Models

We showed in our road network extraction at scale post that CRESI models trained and tested on SpaceNet labels are superior (a 64% improvement in APLS score) to models trained and tested on OSM data. This is possibly due in part to the more uniform labeling schema and validation procedures adopted by the SpaceNet labeling team, and the more precise spatial registration of imagery and labels in the SpaceNet data.
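For readers unfamiliar with the APLS (Average Path Length Similarity) metric referenced above, the simplified sketch below conveys the core idea: compare shortest-path lengths between node pairs in the ground-truth and proposal graphs, penalizing missing routes. This is a toy, one-directional version; the official CosmiQ apls package adds node snapping, symmetric evaluation, and many edge cases omitted here.

```python
# A simplified sketch of the APLS idea, not the official implementation.
import itertools
import networkx as nx

def apls_sketch(gt: nx.Graph, prop: nx.Graph, weight: str = "length") -> float:
    costs = []
    for a, b in itertools.combinations(gt.nodes, 2):
        if not nx.has_path(gt, a, b):
            continue
        gt_len = nx.shortest_path_length(gt, a, b, weight=weight)
        if gt_len == 0:
            continue
        # If the proposal is missing either node or the route, assign the max penalty.
        if a in prop and b in prop and nx.has_path(prop, a, b):
            prop_len = nx.shortest_path_length(prop, a, b, weight=weight)
            cost = min(1.0, abs(gt_len - prop_len) / gt_len)
        else:
            cost = 1.0
        costs.append(cost)
    return 1.0 - sum(costs) / len(costs) if costs else 0.0
```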

Figure 1: SpaceNet compared to OSM. Road predictions (yellow) and ground truth SpaceNet labels (blue) for a sample Las Vegas image chip. OSM model predictions (a) are slightly more offset from ground truth labels than SpaceNet model predictions (b).

4. Existing Algorithmic Approaches

A couple of notable papers have recently explored road network extraction from overhead imagery. MIT’s RoadTracer paper utilized an interesting approach that used OSM labels to directly extract road networks from imagery without intermediate steps such as segmentation. This paper used Google overhead imagery at 60 cm resolution (recall that the SpaceNet roads datasets are at 30 cm resolution). While this approach is compelling, according to the authors it “struggled in areas where roads were close together” [1], and it underperforms other techniques such as segmentation + post-processing when applied to higher-resolution SpaceNet data with dense labels. In another approach, Batra et al. (2019) used a connectivity task they termed Orientation Learning, combined with a stacked convolutional module, to exploit the mutual information between the orientation learning and segmentation tasks to extract road networks from satellite imagery, noting improved performance over RoadTracer.
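To make the “segmentation + post-processing” pattern concrete, the rough sketch below skeletonizes a predicted road mask and converts the skeleton into a graph. Real pipelines such as CRESI add considerable refinement (smoothing, gap closing, graph simplification, speed estimation) beyond this illustration, which is only meant to show the general flow.

```python
# A rough, illustrative mask-to-graph conversion; not the CRESI pipeline.
import networkx as nx
import numpy as np
from skimage.morphology import skeletonize

def mask_to_graph(prob_mask: np.ndarray, threshold: float = 0.5) -> nx.Graph:
    # Threshold the segmentation output and thin it to a one-pixel skeleton.
    skeleton = skeletonize(prob_mask > threshold)
    rows, cols = np.nonzero(skeleton)
    pixels = set(zip(rows.tolist(), cols.tolist()))

    # Connect each skeleton pixel to its 8-connected skeleton neighbors.
    graph = nx.Graph()
    for r, c in pixels:
        graph.add_node((r, c))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in pixels:
                    graph.add_edge((r, c), (r + dr, c + dc), length=float(np.hypot(dr, dc)))
    return graph
```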

5. Algorithmic Comparisons

We compare the RoadTracer and Orientation Learning results to our CRESI model, which combines code and lessons learned from SpaceNet with internal research by CosmiQ.

5.1 Comparison on SpaceNet Data

The SpaceNet challenge-inspired models yield APLS = 0.67, a 5% improvement over the Orientation Learning paper when applied to SpaceNet test data (see our arXiv paper for further details).

5.2 Comparison with Google / OSM Data

We also evaluate performance on the satellite imagery corpus used by RoadTracer. This dataset consists of Google satellite imagery at 60 cm/pixel over 40 cities: 25 for training and 15 for testing. Vector labels are scraped from OSM, and we use these labels to train a CRESI model to predict the road network. We note a significant 23% performance improvement over RoadTracer with our method, illustrated below.
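One practical step in training on OSM-derived labels is converting the vector centerlines into raster training masks for a segmentation model. The hedged sketch below shows one way to do this with rasterio and shapely; the buffer width, tile size, and assumption of a metric CRS are illustrative choices, not the exact settings used in our experiments.

```python
# A hedged sketch of rasterizing vector road labels into binary training
# masks; parameters are illustrative assumptions, not our exact settings.
import geopandas as gpd
from rasterio import features, transform

def roads_to_mask(roads: gpd.GeoDataFrame, bounds, size=1300, buffer_m=2.0):
    """Rasterize buffered road centerlines into a (size x size) binary mask.

    `bounds` is (west, south, east, north) in the same (metric) CRS as `roads`.
    """
    tfm = transform.from_bounds(*bounds, size, size)
    # Buffer each centerline so roads have visible width in the mask.
    shapes = [geom.buffer(buffer_m) for geom in roads.geometry]
    mask = features.rasterize(shapes, out_shape=(size, size), transform=tfm,
                              fill=0, default_value=1, dtype="uint8")
    return mask
```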

Figure 2. Road inference over New York City. (a) RoadTracer prediction (ground truth OSM labels in gray, predictions in yellow). (b) Our CRESI predictions (yellow) over the same area.

Figure 2 demonstrates that the SpaceNet-inspired CRESI model does a significantly better job of extracting roads over unknown cities than existing methods. Below we display a few more examples of road inference on test cities.

Figure 3. Performance comparison between RoadTracer (left column, OSM labels in gray, predictions in yellow) and CRESI (right column, predictions in yellow) for various cities. From top: Denver, Vancouver, Pittsburgh.

6. Conclusion

In this post we demonstrated that algorithms derived from the SpaceNet challenges, using SpaceNet data, provide superior performance to previous methods in extracting road topology from satellite imagery. We note a 5% improvement over published efforts on the SpaceNet dataset. We also note a significant 23% improvement over previous efforts with Google satellite imagery + OSM labels using SpaceNet-derived methods, implying that the lessons learned from SpaceNet apply to a diverse problem set. We look forward to the even greater progress in automated road network extraction and optimized routing that will derive from the ongoing SpaceNet 5 challenge. Stay tuned for updates on the challenge and an updated baseline algorithm using the Solaris pipeline.
