The Global Facility for Disaster Reduction and Recovery (GFDRR) is partnering with Azavea and DrivenData to introduce a new dataset and machine learning (ML) competition ($15,000 in total prizes) to improve mapping for resilient urban planning. …
by daveluo (on GitHub and elsewhere)
In this post and the accompanying Google Colab notebook, we’ll walk through all the code and concepts of a complete workflow to automatically detect and delineate building footprints (instance segmentation) from drone imagery using cutting-edge deep learning models.
All you’ll need is a Google account, an internet connection, and a couple of hours to learn how to build the dataset and train a model that produces something like this:
You open Google Maps and enter “coffee” to find shops nearby. The app proceeds to download a map of your entire city at the highest level of detail. You wait several minutes as hundreds of megabytes download to your phone before 4 or 5 location pins drop nearest you.
If the app did this every time you’re in a new area or search for something different, you would probably stop using it. This scenario is extreme, even absurd, yet we often do something similar with geospatial data in deep learning.
We download full-sized satellite or aerial imagery (at 100s of MBs to GBs per image or per band), crop, resize, and tile it to the areas, sizes, and formats we need, then run our model training or inference on the end product while leaving a large portion of the source data unused. …
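To make the crop-and-tile step concrete, here is a minimal sketch of how a large scene gets cut into model-sized chips. The function and the 10,000 × 10,000 px scene size are illustrative, not from the post; it just computes the grid of pixel windows you would read and save as tiles:

```python
from itertools import product

def tile_windows(width, height, tile_size=256):
    """Yield (col_off, row_off, w, h) pixel windows covering a raster.

    Edge tiles are clipped so every window stays inside the image.
    """
    for row_off, col_off in product(range(0, height, tile_size),
                                    range(0, width, tile_size)):
        w = min(tile_size, width - col_off)
        h = min(tile_size, height - row_off)
        yield (col_off, row_off, w, h)

# A hypothetical 10,000 x 10,000 px aerial scene cut into 256 px tiles:
windows = list(tile_windows(10_000, 10_000))
print(len(windows))  # 40 cols x 40 rows = 1600 windows
```

Every one of those windows must be materialized up front in the download-everything workflow, even if the model only ever looks at a handful of them.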