Preface: SpaceNet LLC is a nonprofit organization dedicated to accelerating open source artificial intelligence applied research for geospatial applications, specifically foundational mapping (i.e., building footprint and road network detection). SpaceNet is run in collaboration with CosmiQ Works, Maxar Technologies, Intel AI, Amazon Web Services (AWS), Capella Space, Topcoder, and IEEE GRSS.
Despite its application to myriad humanitarian and civil use cases, automated road network extraction from overhead imagery remains quite challenging, as we’ve discussed in previous posts (1, 2, 3). The SpaceNet 5 challenge nevertheless made significant inroads (pun intended) into this task, with top participants able to extract both road networks and speed/travel time estimates for each roadway. In this, our final installment of the SpaceNet 5 blog series, we announce the release of the winning models and discuss the tradeoff between inference speed and model performance.
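To see why per-roadway speed estimates matter for routing (and hence for a time-weighted metric like APLS_time), consider a toy graph where the shortest path by distance differs from the shortest path by travel time. All names and numbers below are hypothetical, not drawn from the SpaceNet data:

```python
# Toy illustration: routing by distance vs. routing by travel time.
# All edge lengths and speeds are hypothetical.
import networkx as nx

G = nx.Graph()
# A direct highway route: longer (3 km) but fast (100 km/h).
G.add_edge("A", "B", length_km=3.0, speed_kph=100.0)
# A back-road route: shorter (2 km total) but slow (30 km/h).
G.add_edge("A", "C", length_km=1.0, speed_kph=30.0)
G.add_edge("C", "B", length_km=1.0, speed_kph=30.0)

# Derive a travel-time weight for each edge from its length and speed.
for u, v, d in G.edges(data=True):
    d["time_h"] = d["length_km"] / d["speed_kph"]  # hours

by_length = nx.shortest_path(G, "A", "B", weight="length_km")
by_time = nx.shortest_path(G, "A", "B", weight="time_h")
print(by_length)  # ['A', 'C', 'B'] — the 2 km back road wins on distance
print(by_time)    # ['A', 'B'] — the highway wins on travel time
```

Without speed estimates, both metrics would pick the back road; this is the distinction APLS_time is designed to reward.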
1. SpaceNet 5 Algorithmic Baseline
SpaceNet 5 asked participants to build routable road networks (with speed estimates) directly from satellite imagery over a diverse set of geographies: Moscow, Mumbai, San Juan, and Dar es Salaam. The provided labels are hand-traced road centerlines with attendant metadata such as number of lanes, road type, surface type, and speed limit estimates. This complex task was first addressed by CRESI, the public baseline released by CosmiQ. The process is briefly summarized in Figure 1, and further details can be found here.
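The core post-processing step in a CRESI-style pipeline converts a segmentation mask into a graph. A minimal sketch of that idea appears below: skeletonize a binary road mask, then link adjacent skeleton pixels into a graph. The toy mask and the 8-connectivity rule are simplifications for illustration; CRESI itself works on multi-channel speed masks and applies far more graph cleaning:

```python
# Minimal sketch: binary road mask -> skeleton -> routable graph.
# The mask here is a toy example; a real mask comes from a segmentation model.
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

mask = np.zeros((20, 20), dtype=np.uint8)
mask[9:12, :] = 1  # a horizontal road, 3 pixels wide
mask[:, 9:12] = 1  # a vertical road crossing it

# Thin the road to 1-pixel-wide centerlines.
skeleton = skeletonize(mask.astype(bool))

# Build a graph: each skeleton pixel is a node; 8-connected neighbors share an edge.
G = nx.Graph()
rows, cols = np.nonzero(skeleton)
pixels = set(zip(rows.tolist(), cols.tolist()))
for r, c in pixels:
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and (r + dr, c + dc) in pixels:
                # Euclidean pixel distance serves as the edge length.
                G.add_edge((r, c), (r + dr, c + dc),
                           length=float(np.hypot(dr, dc)))

print(G.number_of_nodes(), G.number_of_edges())
```

In the real pipeline, edge lengths are converted to geographic units and combined with predicted speeds to yield travel times per edge.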
2. SpaceNet 5 Winning Algorithms
The algorithms submitted to SpaceNet 5 all adopted an approach similar to the baseline's, with the primary variations being the type (and number) of segmentation models used and the post-processing parameters. The top five participants each ensembled anywhere from 4 to 15 segmentation models in a bid to increase performance over the baseline. These more sophisticated models and ensembling practices provided an APLS_time performance boost over the baseline of 0.1% to 6%. See Table 1 for further details.
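The simplest form of segmentation-model ensembling is a per-pixel average of each model's probability map, followed by a threshold. The sketch below uses random arrays as stand-ins for model outputs; the number of models, threshold, and array sizes are all hypothetical, and the winners' actual ensembling schemes varied:

```python
# Hypothetical sketch of mask ensembling: average per-pixel probabilities
# from several segmentation models, then binarize for post-processing.
import numpy as np

rng = np.random.default_rng(0)
h, w, n_models = 64, 64, 4

# Stand-ins for the sigmoid outputs of n_models road-segmentation networks.
predictions = [rng.random((h, w)) for _ in range(n_models)]

ensemble = np.mean(predictions, axis=0)        # per-pixel mean probability
road_mask = (ensemble > 0.5).astype(np.uint8)  # binary mask for graph extraction

print(road_mask.shape, road_mask.dtype)
```

Each extra model in the ensemble adds a full inference pass per image, which is precisely the speed cost discussed in the next section.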
3. Speed / Performance Trade-Off
While the more sophisticated models and ensembling practices of the winning participants yielded a performance boost, they come at a cost in execution speed. In Figure 2 we display the APLS_time score as a function of prediction speed; our hardware is an NVIDIA DevBox with four Titan X GPUs. In many scenarios, the high speed of the baseline model may matter more than the slight performance boost provided by the other models. Yet if predictive performance is paramount, the winning submission from XD_XD is preferable, and it has the added benefit of being faster than the 2nd, 3rd, and 4th place submissions.
4. Model Release
We are pleased to announce that the winning algorithms and model weights have been open-sourced. Model weights for each winning submission, trained on the six cities of the combined SpaceNet 3 and SpaceNet 5 training datasets (Las Vegas, Paris, Shanghai, Khartoum, Moscow, Mumbai), can be found in the SpaceNet S3 bucket:
Code for the winning algorithms can be found at the site below:
We include Dockerfile instructions to easily deploy these algorithms, along with a permissive Apache 2.0 license to encourage their use. While these models remain imperfect, road prediction is quite impressive in many scenes, as illustrated in Figure 3.
Over the last few blog posts, we’ve explored the geographic diversity of SpaceNet 5, delved into the chaotic nature of local predictions, and discovered predictive features for road network extraction performance. Clearly, there are many lessons to be learned from the SpaceNet 5 dataset and challenge. Yet with this post, announcing the release of the winning code and model weights, we reach the end of the road for our post-challenge analysis. Even though SpaceNet 5 is complete, we will continue to explore roads and routing; we hope that the SpaceNet 5 data, code, and model weights will aid the interested reader in similar explorations.