SpaceNet: Winning Implementations and New Imagery Release

Todd Stavish
Published in The DownLinQ
Feb 24, 2017

Recently, we announced the winners of the Rio de Janeiro building footprint extraction competition. In the announcement, we promised to open source the winning implementations and to release satellite imagery over additional cities. As of today, the source code for the winning implementations is available in the SpaceNetChallenge GitHub repository, and satellite imagery for four new cities is available via SpaceNet on AWS. Read on for a summary of the implementation approaches and details on the new imagery.

Winning Implementations

The winning implementation was developed by Brazilian TopCoder wleite, with a final footprint evaluation metric score of 0.255292. His implementation was custom-built and did not rely on deep learning frameworks; broadly, it combines random forests with a brute-force polygon search. He followed a three-step process:

  1. Classify pixels in the image into 3 categories: border, inside a building, and outside a building.
  2. Based on individual pixel classification, generate candidate polygons that may contain buildings.
  3. Evaluate polygon candidates in order to select those with a confidence above a given threshold; discard remaining polygons.
Figure 1: wleite implementation showing candidate polygons (in red)
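The actual winning code lives in the SpaceNetChallenge repository; as a rough illustration of steps 1 and 3, here is a minimal scikit-learn sketch. The function names, class ids, and forest size are illustrative assumptions, not wleite's implementation (which was custom and did not use these libraries):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Class ids are illustrative assumptions: 0 = border, 1 = inside, 2 = outside.
def classify_pixels(train_pixels, train_labels, image):
    """Step 1: per-pixel classification with a random forest.

    train_pixels : (N, bands) array of per-pixel feature vectors
    train_labels : (N,) array of class ids
    image        : (H, W, bands) array to classify
    """
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(train_pixels, train_labels)
    h, w, bands = image.shape
    return clf.predict(image.reshape(-1, bands)).reshape(h, w)

def filter_candidates(candidates, scores, threshold=0.5):
    """Step 3: keep only candidate polygons whose confidence clears the threshold."""
    return [c for c, s in zip(candidates, scores) if s >= threshold]
```

Step 2, the brute-force generation of candidate polygons from the per-pixel labels, is the most implementation-specific part and is best read directly from the released source.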

The 2nd place entry was submitted by Polish TopCoder marek.cygan. His implementation produced a final score of 0.245420 using the following workflow:

  1. Classify pixels into 3 categories: inside, outside, and border.
  2. Use convolutional layers in a neural network to estimate a heat map (similar to the CosmiQNet approach).
  3. Convert the heat map into building footprint rectangles.
Figure 2: marek.cygan implementation.
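The final heat-map-to-rectangles step can be sketched with SciPy's connected-component labeling. This is an illustrative simplification (the function name, threshold, and axis-aligned output rectangles are assumptions), not marek.cygan's submitted code:

```python
import numpy as np
from scipy import ndimage

def heatmap_to_rectangles(heatmap, threshold=0.5):
    """Threshold the heat map and emit one axis-aligned rectangle
    (row_min, col_min, row_max, col_max) per connected blob.
    """
    mask = heatmap >= threshold
    labeled, num_blobs = ndimage.label(mask)
    rects = []
    # find_objects returns one bounding slice pair per labeled blob
    for row_sl, col_sl in ndimage.find_objects(labeled):
        rects.append((row_sl.start, col_sl.start, row_sl.stop, col_sl.stop))
    return rects
```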

The 3rd place finalist was TopCoder qinhaifan from China with a final score of 0.227852. His entry is based on MNC (Multi-task Network Cascades), an instance segmentation approach presented at CVPR 2016. A TopCoder from Japan, Fugusuki, placed fourth with a score of 0.216199. His approach used a convolutional neural network built with the Keras framework and cluster code from the Object Detection on SpaceNet post. The fifth place finalist was TopCoder bic-user from Ukraine with a score of 0.168605. He also used the Keras framework and cites a StackExchange post as inspiration. His implementation applies a novel post-processing technique, using scikit-image, scikit-learn, and shapely to fit rotated polygons to the segmented images.
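The idea behind the rotated-polygon post-processing can be sketched as fitting an oriented bounding box to a blob of building pixels. The actual entry used shapely for this; the numpy-only, PCA-based version below is a stand-in technique (function name and approach are illustrative assumptions), shown only to convey the concept:

```python
import numpy as np

def rotated_bounding_box(points):
    """Fit an oriented (rotated) bounding box to 2-D points via PCA.

    Stand-in for the shapely-based fitting used in the actual entry.
    Returns the four box corners as a (4, 2) array in image coordinates.
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    centered = pts - mean
    # Principal axes of the point cloud give the box orientation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T          # coordinates in the rotated frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    corners_rot = np.array([[lo[0], lo[1]],
                            [hi[0], lo[1]],
                            [hi[0], hi[1]],
                            [lo[0], hi[1]]])
    return corners_rot @ vt + mean  # rotate back to image coordinates
```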

New Cities: Las Vegas, Paris, Shanghai, Khartoum

New Competition and Data

With the new satellite imagery in hand, we plan to host another building footprint extraction contest within the next 30 days; please monitor TopCoder for the announcement. SpaceNet on AWS now contains imagery and building footprints for Las Vegas, Paris, Shanghai, and Khartoum. This release adds 3,800 km² of imagery with 181,619 building footprints. We believe the new data is qualitatively better than the Rio release in a few important respects:

  • Higher resolution imagery from DigitalGlobe’s WorldView-3 satellite (30 cm GSD vs. 50 cm GSD in the Rio release)
  • Single-strip images for each city; the Rio imagery was a mosaic of images with varying lighting conditions, elevation angles, etc.
  • Consistent footprint labeling methodology (Rio labeling was multi-source)
  • Additional imagery formats: Panchromatic, 3-Band Pan-sharpened, 8-Band Pan-sharpened
Areas of Interest: Las Vegas, Paris, Shanghai, Khartoum

Conclusions

The contest implementations revealed that automated footprint extraction remains a challenging problem. We learned that pre- and post-processing techniques are as important as the choice of machine learning framework. We look forward to seeing how the introduction of greater geographic diversity with varying building standards and construction materials will impact footprint extraction quality.
