SIMRDWN: Adapting Multiple Object Detection Frameworks for Satellite Imagery Applications

Adam Van Etten
Oct 25, 2018 · 3 min read

If the myriad challenges of finding small objects in overhead imagery make you anxious, we invite you to take a deep breath, relax, and simmer down.

Rapid detection of small objects over large areas remains one of the principal drivers of interest in satellite imagery analytics. A number of previous blogs [1, 2, 3, 4, 5, 6, 7, 8] discussed the YOLT algorithm, which modifies YOLO to rapidly analyze images of arbitrary size, and improves performance on small, densely packed objects. YOLO is just one of many advanced object detection frameworks, however, and algorithms such as SSD, Faster R-CNN, and R-FCN merit investigation as well.

To this end we introduce the Satellite Imagery Multiscale Rapid Detection with Windowed Networks (SIMRDWN) framework. SIMRDWN (phonetically: [SIM-er] [doun]) combines the scalable code base of YOLT with the TensorFlow Object Detection API, allowing users to select from a wide array of architectures for bounding box detection of objects in overhead imagery.
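The "windowed" part of the framework refers to slicing arbitrarily large satellite images into overlapping chips that a detection network can ingest, then mapping detections back to the full-image coordinate frame. The sketch below is a minimal illustration of that idea, not the SIMRDWN implementation; the function names, 416-pixel window size, and 20% overlap are illustrative assumptions.

```python
import numpy as np

def window_offsets(dim, window, stride):
    """Start offsets that tile [0, dim) with fixed-size windows,
    snapping the final window to the image edge so no pixels are missed."""
    offs = list(range(0, max(dim - window, 0) + 1, stride))
    if offs[-1] + window < dim:
        offs.append(dim - window)
    return offs

def slice_image(im, window=416, overlap=0.2):
    """Yield (x0, y0, chip) tuples of overlapping square chips of `im`.

    The (x0, y0) offsets let per-chip detections be shifted back into
    the coordinate frame of the full image, where overlapping boxes
    can then be merged (e.g. with non-max suppression).
    """
    h, w = im.shape[:2]
    stride = max(1, int(window * (1 - overlap)))
    for y0 in window_offsets(h, window, stride):
        for x0 in window_offsets(w, window, stride):
            yield x0, y0, im[y0:y0 + window, x0:x0 + window]
```

Because the chips overlap, an object cut by one window boundary is still fully contained in a neighboring chip, at the cost of some duplicated inference that must be deduplicated afterward.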

Comparison between Faster R-CNN, R-FCN, SSD, and YOLT with a dataset of aircraft, boats, cars, and airports revealed that the YOLT implementation has both the highest mean average precision (mAP = 0.68) and the fastest inference speed (at least 0.44 square kilometers per second).

In this post we illustrate some of the outputs of SIMRDWN for various architectures trained to find vehicles such as aircraft, boats, and cars. Bounding box labels are better suited for vehicles than building footprints; nevertheless, we also explore the performance of SIMRDWN models on building footprint detection on the recently released SpaceNet Off-Nadir Dataset over Atlanta, Georgia.

In future posts we will explore how the SIMRDWN framework helps inform a number of satellite imagery research areas, such as super-resolution. For the time being, we encourage interested parties to inspect the images below, explore the codebase at github.com/cosmiq/simrdwn, or peruse our arXiv paper (to appear in WACV 2019) for further details.

Figure 1. Building inference results of a SIMRDWN model trained on the SpaceNet Atlanta Dataset. [Imagery courtesy of SpaceNet]
Figure 2. Example inference with an SSD model trained on aircraft, boats, and cars. We display true positives as green boxes, ground truth boxes for true positives in blue, false negatives in yellow, and false positives in red. We define a true positive detection as having an IoU greater than or equal to 0.5. [Imagery courtesy of DigitalGlobe]
Figure 3. Faster R-CNN model trained to detect aircraft. A number of false positives are present. [Imagery courtesy of DigitalGlobe]
Figure 4. R-FCN model with the same color scheme as Figure 2. [Imagery courtesy of DigitalGlobe]
Figure 5. YOLT model showing detected aircraft in a Brazilian airport. [Imagery courtesy of DigitalGlobe]
Figure 6. YOLT model showing detected cars over Salt Lake City. [Imagery courtesy of DigitalGlobe]
