Deploying the SpaceNet 6 Baseline on AWS

Adam Van Etten and Nick Weir
The DownLinQ

Preface: SpaceNet LLC is a nonprofit organization dedicated to accelerating open source, artificial intelligence applied research for geospatial applications, specifically foundational mapping (i.e. building footprint & road network detection). SpaceNet is run in collaboration with CosmiQ Works, Maxar Technologies, Intel AI, Amazon Web Services (AWS), Capella Space, Topcoder, IEEE GRSS, and the National Geospatial-Intelligence Agency (NGA).

The SpaceNet 6 Challenge asks participants to extract building footprints from a multimodal remote sensing dataset comprising both synthetic aperture radar (SAR) and electro-optical (EO) imagery. The challenge is ongoing, with over a month remaining until the May 1 deadline. Deep learning models have shown great promise in past SpaceNet challenges, but they require dedicated GPU hardware that not all parties have access to. In this post we detail the simple steps required to train and test the SpaceNet 6 deep learning baseline model on an AWS GPU instance for less than the cost of a tank of gas. The SpaceNet partners have also extended $500 in compute credits to competitors who achieve a performance threshold equivalent to the baseline model, thereby enabling extensive experimentation for this challenge.

I. Introduction

The rapid advance in computer vision has been driven by a number of factors, but graphical processing unit (GPU) computing has been key. The downside is that not everyone has access to powerful GPUs. The SpaceNet 6 baseline (as with previous SpaceNet Challenges) relies upon a deep-learning architecture that requires GPUs to run in a timely manner. If one does not have a personal GPU computing platform, one option is to use cloud GPU instances. Here we explore the Amazon Elastic Compute Cloud (EC2) offerings for GPU computing.

To reduce the activation energy for beginning exploration of the SpaceNet 6 data and baseline, the CosmiQ team built an Amazon Machine Image (AMI) pre-loaded with the Solaris software suite, SpaceNet 6 baseline algorithm, and SpaceNet 6 dataset. We will use this AMI to train and test the baseline model. AWS, while powerful, can appear rather byzantine to the uninitiated. In the following sections we provide a detailed step-by-step procedure for firing up your own GPU-enabled instance.

II. Loading the AMI

In order to load and prepare the AMI for model training and testing, simply execute the following steps:

1. In a web browser, navigate to the AWS Console

If you don’t have an AWS account, create one as a “Root user” and log in.

2. In the top-right of the page, ensure you are in the “N. Virginia” region:

3. Select “Launch Instance” (orange button)

4. Search for the pre-built AMI

Type “CosmiQ_SpaceNet6_Baseline_v2” in the search bar and hit “Select”

5. Select the appropriate instance

We recommend the p3.2xlarge instance, which includes one NVIDIA V100 GPU and costs $3.06 per hour on demand. Hit “Review and Launch.”
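As a rough budget check (assumed figures: the $3.06/hr rate above and the ~10 hour full training run described in Section III), the whole exercise stays around $30:

```shell
# Back-of-envelope cost estimate; figures are the ones quoted in this post
# ($3.06/hr p3.2xlarge, ~10 hours for a full training run).
awk 'BEGIN { printf "%.2f\n", 10 * 3.06 }'
# prints: 30.60
```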

6. Initiate launch

7. Create a new Key Pair

Create a new key pair (e.g. cosmiq_sn6_baseline), and save to your local machine. Hit “Launch Instances.”

Note: It is possible that AWS may throw an error stating that you have requested more vCPU capacity than allowed (this is an intentional feature put in place to keep users from unwittingly accumulating large bills). In this case, open a ticket for a service limit increase at:

8. Record the address

The instance is now running! Back at the Instance Dashboard, write down the Public DNS (IPv4) for this instance (of the form ec2-xx-xx-xx-xx.compute-1.amazonaws.com); you will need it later to access the instance.
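One optional convenience, assuming a standard OpenSSH client: add an entry to ~/.ssh/config so later ssh and scp commands can use a short alias instead of the full address. The HostName below is a placeholder for the Public DNS you just recorded, and the alias name sn6 is arbitrary:

```
Host sn6
    HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    User ubuntu
    IdentityFile /path_to_keys/cosmiq_sn6_baseline.pem
```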

9. Access the instance via SSH

On your local machine, ssh into the instance from a terminal, replacing the placeholder below with the Public DNS (IPv4) recorded above. You may need to change permissions of the key (.pem) file first.

chmod 400 /path_to_keys/cosmiq_sn6_baseline.pem
ssh -i "/path_to_keys/cosmiq_sn6_baseline.pem" ubuntu@<Public DNS (IPv4)>

III. Train and Test the Baseline Model

The GPU instance is now up and running, so we can train and test the SpaceNet 6 baseline algorithm, which is pre-installed in a Docker container (named cqw_sn6_bl) in the AMI. The commands below should be run in the ssh terminal of your instance.

1. Attach the docker container

docker start cqw_sn6_bl
docker attach cqw_sn6_bl

This puts us in the /root directory within the docker container, where we can execute the training and testing scripts.

2. Train the model

By default the training script runs for a generous 200 epochs (~10 hours), though in most experiments it converges in less than half that time. To shorten training, simply edit the number of epochs in the script. The training script first pre-processes the data, then trains the VGG-11 + U-Net deep learning model. Training is launched by simply invoking the script:

time ./ train/AOI_11_Rotterdam/

Training proceeds either until all epochs are complete, or until terminated by the user (ctrl+c). The best model is saved as training progresses, so early termination still retains the best model trained to date.
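The “save the best model so far” behavior can be illustrated with a toy example: scan a training log and keep the epoch with the lowest validation loss. The log below is fabricated for illustration, not actual baseline output:

```shell
# Fabricated training log, purely for illustration.
cat > /tmp/sn6_demo_log.csv <<'EOF'
epoch,val_loss
1,0.52
2,0.41
3,0.44
4,0.38
5,0.40
EOF
# Track the epoch with the lowest validation loss seen so far.
awk -F',' 'NR > 1 && (best == "" || $2 < best) { best = $2; bestep = $1 }
           END { print "best epoch:", bestep, "val_loss:", best }' /tmp/sn6_demo_log.csv
# prints: best epoch: 4 val_loss: 0.38
```

Note that epoch 5 does not overwrite the checkpoint, mirroring why ctrl+c part-way through training still leaves the best weights on disk.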

3. Test the Model

Testing can be executed either after training completes (Step 2 above), or using the existing weights in the /root/weights/ directory of the docker container. Either way, pre-processing the test data is the slowest step; GPU inference then proceeds rapidly. Testing is invoked with the following command (sn6_baseline_predictions.csv is the name of the output file):

time ./ test_public/AOI_11_Rotterdam/ sn6_baseline_predictions.csv

4. Copy outputs back locally

Inference is now complete, so we can transfer the outputs from the instance back to our local machine. First, copy the results from the docker container back to the AMI:

docker cp cqw_sn6_bl:/root/weights/ /home/ubuntu/src/cosmiq_sn6_baseline/
docker cp cqw_sn6_bl:/root/inference_binary /home/ubuntu/src/cosmiq_sn6_baseline/
docker cp cqw_sn6_bl:/root/inference_continuous /home/ubuntu/src/cosmiq_sn6_baseline/
docker cp cqw_sn6_bl:/root/inference_vectors /home/ubuntu/src/cosmiq_sn6_baseline/
docker cp cqw_sn6_bl:/root/sn6_baseline_predictions.csv /home/ubuntu/src/cosmiq_sn6_baseline/
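A quick sanity check after copying is to list the destination directory and confirm every expected output is present. The sketch below mimics that check with stand-in paths in /tmp, since the real /home/ubuntu/src/cosmiq_sn6_baseline/ directory only exists on the instance; the output names match the docker cp commands above:

```shell
# Stand-in for the instance's output directory; names match the copied outputs.
out=/tmp/sn6_demo_out
mkdir -p "$out"/weights "$out"/inference_binary "$out"/inference_continuous "$out"/inference_vectors
touch "$out"/sn6_baseline_predictions.csv
ls "$out" | sort
```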

Now copy the results back to your local machine (for example, to /path_to_results/sn6_baseline_aws/), replacing the placeholder with the Public DNS (IPv4) recorded earlier:

scp -i "/path_to_keys/cosmiq_sn6_baseline.pem" -r ubuntu@<Public DNS (IPv4)>:/home/ubuntu/src/cosmiq_sn6_baseline/* /path_to_results/sn6_baseline_aws/

5. Inspect Results

The SpaceNet 6 baseline creates inference masks for the test set, as displayed below.

SAR test images (left) and SpaceNet baseline building footprint predictions (right).

The baseline also creates a csv file with the geometries of each predicted building (see below); this file yields a score of ~0.21 when entered on the challenge website.
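One quick way to inspect such a file is to count predictions per image tile. The miniature csv below is fabricated, and the column names (ImageId, PolygonWKT_Pix, Confidence) are an assumption about the submission schema rather than something taken from this post:

```shell
# Fabricated miniature predictions file; column names are assumptions.
cat > /tmp/sn6_demo_predictions.csv <<'EOF'
ImageId,PolygonWKT_Pix,Confidence
tile_001,"POLYGON ((0 0, 10 0, 10 10, 0 0))",0.9
tile_001,"POLYGON ((20 20, 30 20, 30 30, 20 20))",0.8
tile_002,"POLYGON ((5 5, 15 5, 15 15, 5 5))",0.7
EOF
# The first comma-separated field (ImageId) splits cleanly even though the
# quoted WKT polygons themselves contain commas.
awk -F',' 'NR > 1 { count[$1]++ } END { for (t in count) print t, count[t] }' \
    /tmp/sn6_demo_predictions.csv | sort
# prints: tile_001 2
#         tile_002 1
```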

SpaceNet baseline building footprint coordinates in challenge entry format.

IV. Conclusions

While creating a deep learning model to extract building footprints from highly off-nadir SAR imagery remains quite challenging, the CosmiQ team has striven to lower the barrier to entry for getting started on this task. Accordingly, we built the CosmiQ_SpaceNet6_Baseline_v2 Amazon Machine Image (AMI) pre-loaded with our open source SpaceNet 6 baseline algorithm. This post detailed the straightforward steps to spin up this image and train/test the baseline model for a cost of only ~$30, even for a very generous number of training epochs. Recall that $500 in compute credits is available to challenge participants as well. We encourage experimentation with and enhancement of this baseline model for the ongoing SpaceNet 6 Challenge.