Preface: SpaceNet LLC is a nonprofit organization dedicated to accelerating open source artificial intelligence applied research for geospatial applications, specifically foundational mapping (i.e., building footprint & road network detection). SpaceNet is run in collaboration by co-founder and managing partner CosmiQ Works; co-founder and co-chair Maxar Technologies; and our partners, including Intel AI, Amazon Web Services (AWS), Capella Space, Topcoder, IEEE GRSS, the National Geospatial-Intelligence Agency and Planet.
The SpaceNet 6 challenge is officially in the books, and today we are proud to announce our winners! In the challenge, participants were asked to automatically extract building footprints with computer vision and artificial intelligence (AI) algorithms using two unique modalities of very-high resolution remote sensing data: synthetic aperture radar (SAR) from Capella Space and electro-optical satellite imagery from Maxar. Our area of interest for this challenge was centered over the largest port in Europe: Rotterdam, the Netherlands.
The SpaceNet 6 dataset is the first permissively licensed dataset to contain over 100 km² of very high-resolution (half-meter) SAR imagery. SAR sensors are unique in that they can penetrate clouds and image in any illumination setting (day or night). As such, these sensors can be particularly valuable in disaster-response scenarios, where cloud cover and adverse conditions often limit the value of traditional optical imagery. The SpaceNet 6 SAR imagery was collected during a test run of the Capella Space SAR sensor mounted on an aerial platform. The same area of Rotterdam was imaged in 204 passes over 3 days, resulting in a dense stack of SAR data. Just 8 days after these collects, the Maxar WorldView 2 sensor passed overhead and captured a pristine image of Rotterdam. Together with our annotations of building footprints, these two remote sensing components form the basis of the SpaceNet 6 dataset and challenge.
Remember that the training dataset contains both optical and SAR data, while the testing sets contain only SAR data. We structured the dataset this way to mimic real-world scenarios in which historical optical data may be available, but concurrent optical collection alongside SAR is often impossible due to inconsistent sensor orbits or cloud cover that renders the optical data unusable. If you’re unfamiliar with any other aspect of the SpaceNet 6 challenge, or if you want to read further, check out the other blogs in this series below, or read our paper, set to be released as part of the CVPR EarthVision proceedings in mid-June 2020:
- Announcing SpaceNet 6: Multi-Sensor All Weather Mapping
- SpaceNet 6: Dataset Release
- The SpaceNet 6 Baseline
- Deploying the SpaceNet 6 Baseline on AWS
- SpaceNet 6: Multi-Sensor All Weather Mapping Dataset
The dataset remains openly available for download via the AWS Open Data program.
Overall, the SpaceNet 6 challenge featured over 1,600 submissions and 411 registrants, making it our most competitive challenge to date. This post will mark the first in the series of posts as we begin to investigate what worked, what didn’t and what’s coming next. Without further ado: let’s jump right in.
The SpaceNet 6 challenge crowns a new champion: zbigniewwojna. A summary of the winners’ results alongside the baseline model is shown in the table below:
Our runner-up is the __EVER__ team and third place goes to ikibardin. The margin of victory (2.4 points) for zbigniewwojna is actually the second largest in SpaceNet history, trailing only XD_XD’s win (5.0 points) in SpaceNet 2. XD_XD was the first in the challenge series to employ an ensemble of deep-learning models, an approach that has since become the standard technique used by our winners from SpaceNet 2 through 6. The winners made significant improvements over the baseline and created their own unique solutions.
Although the winning score is in the low 40s, this is actually comparable to the results of the SpaceNet 4 challenge at very off-nadir look angles. The SAR data for SpaceNet 6 was also collected from an off-nadir perspective, averaging* ~35°. These oblique perspectives remain among the more difficult aspects of Earth observation, and algorithms still have a long way to go to contend with them. Additionally, for this challenge, precision scores are much higher than recall scores: overall, one out of every two proposed building footprints was a false positive, and two out of every three buildings were missed by the winning algorithms. These numbers highlight the challenges of entering a new modality and show that further research is required to fully exploit the SAR domain.
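To see how those precision and recall figures square with a score in the low 40s, here is a quick sanity check using the standard F1 formulation (the counts below are illustrative, chosen to match the stated ratios, not the actual leaderboard tallies):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts.
    In SpaceNet scoring, a proposed footprint counts as a true positive
    when it sufficiently overlaps a ground-truth building (IoU > 0.5)."""
    precision = tp / (tp + fp)   # half of proposals are false positives -> 0.5
    recall = tp / (tp + fn)      # two of three buildings missed -> ~0.33
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts matching the ratios described above:
tp, fp, fn = 100, 100, 200
print(round(f1_score(tp, fp, fn), 2))  # → 0.4
```

With precision at 0.5 and recall at one third, F1 lands at 0.4 — right in line with the winning scores.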
Model Architectures and Approaches
We summarize the winners’ algorithmic approaches in the following table:
- EfficientNet Dominates: All of the top five used ensembles of neural networks, with four of the five relying on slight variants of the newly introduced EfficientNet. EfficientNet achieves state-of-the-art performance on ImageNet while being markedly smaller and faster than other state-of-the-art convolutional neural networks.
- Lower training and inference times: Overall, both training and inference times were greatly reduced versus past challenges. Some of this is due to improving network efficiency; another factor is simply that this dataset is smaller than some previous SpaceNet datasets. Zbigniewwojna’s model was both the best performing and the fastest at inference (~5.4 s/km² on an AWS p3.8xlarge).
- Optical pre-training not always required: Only two of the five winners leveraged the SpaceNet 6 optical data in any way; the others found that ImageNet pre-trained weights provided a comparable performance boost. SatShipAI trained all of their models on PS-RGB data before training on SAR, and Motokimura trained one third of his ensemble on PS-RGBNIR data before switching to SAR.
- Multi-Channel Masks: The trend of using multi-channel masks that denote building interiors, edges, and contacts between buildings continued from prior SpaceNets; these masks were used by four of the five winners. Only SatShipAI skipped the multi-channel approach, instead focusing on refining their models’ semantic segmentation outputs into binary predictions.
- Other Encodings: Each participant quickly learned that the direction of collection of the SAR data (north- or south-facing) was critical for improving model performance. Several winners also encoded the unique ID of each of the 204 SAR image strips and fed this information into the network as well. Notably, networks still could not automatically learn this information directly from the images. These approaches show that sometimes even minor pre-processing steps can be particularly valuable for improving performance and minimizing some of the inherent complexities of overhead observation.
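The winners' actual ensembling pipelines are more elaborate (test-time augmentation, weighted folds, etc.), but the core idea behind ensembling segmentation models is simple: average the per-model probability masks, then binarize. A minimal NumPy sketch, with all arrays hypothetical:

```python
import numpy as np

def ensemble_predict(prob_maps, threshold=0.5):
    """Average per-model building-probability masks, then binarize.
    prob_maps: list of (H, W) float arrays in [0, 1], one per model."""
    mean_prob = np.mean(np.stack(prob_maps), axis=0)
    return (mean_prob > threshold).astype(np.uint8)

# Three hypothetical model outputs for a tiny 2x2 tile:
preds = [np.array([[0.9, 0.2], [0.4, 0.8]]),
         np.array([[0.7, 0.1], [0.6, 0.9]]),
         np.array([[0.8, 0.3], [0.2, 0.7]])]
print(ensemble_predict(preds))  # 1 where the mean probability exceeds 0.5
```

Averaging smooths out the idiosyncratic errors of any single network, which is a big part of why ensembles have won every SpaceNet challenge since SpaceNet 2.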
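The multi-channel mask scheme described above can be sketched with basic morphology. This is not any winner's actual code — just one plausible way, using SciPy, to turn an instance-label raster into interior, edge, and contact channels:

```python
import numpy as np
from scipy import ndimage

def building_mask_channels(labels):
    """Build a 3-channel training mask from an instance-label raster
    (0 = background, 1..N = per-building IDs)."""
    footprint = labels > 0
    # Interior: footprint shrunk by one pixel of erosion.
    interior = ndimage.binary_erosion(footprint)
    # Edge: footprint pixels that eroded away.
    edge = footprint & ~interior
    # Contact: pixels where dilated masks of two different buildings overlap.
    count = np.zeros(labels.shape, dtype=np.int32)
    for lab in np.unique(labels[labels > 0]):
        count += ndimage.binary_dilation(labels == lab)
    contact = count > 1
    return np.stack([interior, edge, contact]).astype(np.uint8)

# Two hypothetical adjacent buildings on a 5x6 grid:
labels = np.zeros((5, 6), dtype=np.int32)
labels[1:4, 1:3] = 1
labels[1:4, 3:5] = 2
print(building_mask_channels(labels).shape)  # (3, 5, 6)
```

Separating edges and contacts lets a post-processing step split touching buildings into distinct instances, which binary masks alone cannot do.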
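Feeding collection-direction metadata to a segmentation network is often done by appending a constant-valued image channel. A minimal sketch of that idea, assuming a channels-first SAR tile (function name and shapes are hypothetical):

```python
import numpy as np

def append_direction_channel(sar_tile, north_facing):
    """Append a constant channel flagging SAR collection direction
    (1.0 = north-facing pass, 0.0 = south-facing), so the network
    receives this metadata alongside the image bands.
    sar_tile: (C, H, W) float array of SAR intensity bands."""
    flag = np.full((1,) + sar_tile.shape[1:], float(north_facing),
                   dtype=sar_tile.dtype)
    return np.concatenate([sar_tile, flag], axis=0)

# A hypothetical 4-band SAR tile:
tile = np.random.rand(4, 256, 256).astype(np.float32)
print(append_direction_channel(tile, north_facing=True).shape)  # (5, 256, 256)
```

The same trick extends to the strip IDs mentioned above, e.g. as additional normalized constant channels.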
What’s coming next?
The SpaceNet team will be virtually attending CVPR EarthVision and will have two keynote slots at 11:40 AM and PM PDT on June 14th, 2020. In our keynote(s) we will be breaking down the challenge, the dataset, and the algorithms. Watch our social media (@Spacenet_AI), the EarthVision website, or the SpaceNet.AI website for live updates. Also coming up, we will have a series of blogs focused on this challenge as we begin to break down where these algorithms succeeded and what caused them to struggle. Finally, we will be open sourcing the top 5 algorithms on the SpaceNet Github repository in the coming weeks. Congratulations again to the winners, and look out for an upcoming announcement on SpaceNet 7!
*Note: This blog was updated from the initial version as SAR look angles were improperly cited as 55° rather than 35°.