Announcing the Winners of the SpaceNet 5 Challenge

Adam Van Etten
The DownLinQ
6 min read · Nov 26, 2019


Preface: SpaceNet LLC is a nonprofit organization dedicated to accelerating open source, artificial intelligence applied research for geospatial applications, specifically foundational mapping (i.e. building footprint & road network detection). SpaceNet is run in collaboration with CosmiQ Works, Maxar Technologies, Intel AI, Amazon Web Services (AWS), Capella Space, and Topcoder.

The SpaceNet 5 Challenge, which sought to identify road networks and optimal travel times directly from satellite imagery, is complete! Inferring up-to-date road networks and optimal routing paths is essential to many challenges in the humanitarian, military, and commercial domains. Current methods for updating foundational mapping features such as road networks are often manually intensive and slow, even with large numbers of volunteers. The high revisit rates of existing and future satellite constellations have the potential to dramatically improve the response time for foundational mapping updates, provided such features can be extracted with high fidelity from satellite images. Enter SpaceNet, where high-resolution imagery, meticulously hand-labeled datasets, and public prize challenges help illuminate the current state of the art in computer vision and data science as applied to satellite imagery and foundational mapping. In this post we discuss the results of SpaceNet 5, along with details of the dataset and the heretofore unannounced final test city.

1. Training Dataset

Since its inception, a core feature of SpaceNet has been the release of high fidelity imagery in conjunction with high quality hand-curated labels. For SpaceNet 5 we publicly released imagery and road labels for two new training cities. These add to the existing corpus of labeled road datasets within SpaceNet with corresponding 30 cm imagery (bold cities are new):

  • Las Vegas, Nevada
  • Paris, France
  • Shanghai, China
  • Khartoum, Sudan
  • Moscow, Russia
  • Mumbai, India

The new cities of Moscow and Mumbai increase the diversity of road labels within the SpaceNet data corpus, which now covers four continents; the dense urban fabric of Mumbai and the snow present in Moscow are particularly valuable additions (see Figure 1). Labels for each roadway are hand-drawn and include metadata features such as surface type, road type (primary, secondary, highway, etc.), and number of lanes. We use these metadata features to infer the safe travel speed for each roadway. Knowledge of the safe travel speed enables true optimal routing, since one can then minimize the travel time to any desired destination.
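To make the metadata-to-speed step concrete, the conversion can be sketched as a simple lookup. Note that the attribute names, categories, and speed values below are hypothetical placeholders for illustration, not the actual SpaceNet conversion tables:

```python
# Hypothetical mapping from (road type, surface type) metadata to an
# estimated safe travel speed in mph. Values are illustrative only.
SPEED_TABLE = {
    ("highway", "paved"): 65,
    ("primary", "paved"): 45,
    ("secondary", "paved"): 35,
    ("residential", "paved"): 25,
    ("residential", "dirt"): 15,
}

def infer_speed(road_type: str, surface: str, default: int = 20) -> int:
    """Return an estimated safe travel speed for a labeled roadway,
    falling back to a conservative default for unknown combinations."""
    return SPEED_TABLE.get((road_type, surface), default)

print(infer_speed("primary", "paved"))  # 45
print(infer_speed("track", "dirt"))     # unknown combination -> default: 20
```

With a speed assigned to every edge, a router can weight each road segment by (length / speed) and minimize total travel time rather than raw distance.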

Figure 1. SpaceNet 5 image chips in Moscow (left) and Mumbai (right) with attendant road labels colored from 20 mph (yellow) to 65 mph (red)

2. Testing Dataset and Public Leaderboard Standings

One of the critiques of public data science challenges is that solutions can sometimes be hyper-tuned and overtrained for the provided datasets. Thus, the submitted algorithms may work well on the provided data, yet break down when applied to slightly different data. For satellite imagery, performance of algorithms varies markedly across geographies, as noted in previous SpaceNet competitions and CosmiQ’s recent robustness study. Robustness of algorithms to new and potentially unseen geographies is crucial, so for the SpaceNet 5 public leaderboard we elected to score challenge contestants on a composite metric of 20% each for Moscow and Mumbai, and 60% on a new city without any training data: San Juan, Puerto Rico.

We score with the APLS metric, weighted to optimize travel time. We also released code for our CRESI public baseline model (explored in a number of previous blogs) and scored this model on the public testing data (under the username cosmiq_baseline). Reported APLS scores are multiplied by 100, forming a percentage rather than a fraction. After the close of the challenge the public leaderboard stood as follows:

Table 1. Standings on the SpaceNet 5 public leaderboard.

3. Final Testing — Unseen Locales

To further ensure that models cannot be overtrained, we computed final scores on a mystery city. Our mystery city benefits from few overhanging buildings, trees, or overpasses, yet features such as narrow alleyways and a plethora of dirt roads complicate road extraction. We can now announce that this mystery location is the largest city in East Africa: Dar Es Salaam, Tanzania.

Figure 2. Sample Dar Es Salaam SpaceNet chips.

We weighted final scores heavily toward this mystery city, with the final composite metric comprising 10% each for Moscow and Mumbai, 20% for San Juan, and 60% for Dar Es Salaam. We expected that the inclusion of a completely unseen region might shuffle the public leaderboard, as some competitors appeared to have overtrained on the training set. To determine the final rankings, the SpaceNet team retrained each of the 10 submissions (to ensure the validity of the submitted codebases), and then applied these retrained models to the final test set.
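The composite metric described above is a straightforward weighted average of per-city scores. A minimal sketch, using made-up per-city APLS scores rather than real competitor results:

```python
# Final composite weights: 10% Moscow, 10% Mumbai,
# 20% San Juan, 60% Dar Es Salaam (the unseen mystery city).
WEIGHTS = {"moscow": 0.10, "mumbai": 0.10, "san_juan": 0.20, "dar_es_salaam": 0.60}

def composite_score(city_scores: dict) -> float:
    """Weighted average of per-city APLS scores (0-100 scale)."""
    return sum(WEIGHTS[city] * score for city, score in city_scores.items())

# Illustrative per-city scores (not actual challenge results):
example = {"moscow": 55.0, "mumbai": 50.0, "san_juan": 60.0, "dar_es_salaam": 45.0}
print(composite_score(example))  # 0.1*55 + 0.1*50 + 0.2*60 + 0.6*45 = 49.5
```

Because 60% of the weight falls on a city with no training data, a model that memorizes the training geographies gains little; generalization dominates the final standing.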

Table 2. Final competitor standings.

Table 2 shows the final scores for each city, while Figure 3 illustrates these standings, which reflect a significant shuffling of the leaderboard. The top 5 competitors will share a $45,000 prize purse, with another $5,000 reserved for the top student competitor (in this case: ikibardin). Note that the winner, XD_XD, did not have the highest score in any single city, yet still achieved the best aggregate result. Also note that the difference in score among the top competitors is only ~5%. As with previous SpaceNet challenges, we will dive deeply into the winning algorithms in a series of subsequent posts.

Figure 3. Performance by city for the top 5 competitors, plus the CosmiQ baseline model.

Figure 4 displays example predictions from the winning model of XD_XD, illustrating that predictions in the mystery city of Dar Es Salaam are frequently quite good.

Figure 4. Top Left: Sample XD_XD prediction over Dar Es Salaam, with ground truth in blue and predictions in yellow; the time-weighted APLS score for this chip is 0.64. Top Right: Sample XD_XD prediction over Moscow; despite the fact that portions of Moscow are in the training set, the snow and shadows sometimes complicate road extraction, and the time-weighted APLS score for this chip is only 0.37. Bottom Left: Sample XD_XD prediction over San Juan — APLS = 0.78. Bottom Right: Sample XD_XD prediction over Mumbai — APLS = 0.61.

4. Conclusions

Our fifth SpaceNet Challenge is in the books, and our winner is XD_XD. Our mystery city is Dar Es Salaam, and the winning algorithms all performed quite well there, despite a complete lack of training data in this location. Recall that the APLS metric used in this challenge measures optimal travel-time routes through the road network, taking into account road connectivity and speed limits. The performance of the winning algorithms implies that the road network inferred directly from satellite imagery would route you to your desired destination within 50% of the expected trip length. Stay tuned for upcoming posts where we dive deeper into the algorithms and lessons learned in SpaceNet 5. We will also be open-sourcing the top 5 algorithms on the SpaceNet GitHub repository in the coming weeks.

Congratulations to all of the winners and challenge participants! And in addition to our SpaceNet 5 analyses, keep an eye out for information about SpaceNet 6 in the coming months!