Computer vision helps find parking lots to save wildlife

Google Earth and Earth Engine
Aug 31, 2020

By Michael Evans, Senior Conservation Data Scientist, Center for Conservation Innovation, Defenders of Wildlife

If you’re brainstorming ways to help wildlife co-exist safely with humans in our urban and suburban environments, “parking lots” might not be high on your list of solutions. But parking lots are key for local governments trying to balance wildlife habitat protection with renewable energy development. As it turns out, parking lots are a perfect place for solar panels that might otherwise sit in open spaces that wildlife need to survive. If we place solar panels in parking lots, wildlife keep their homes, and we benefit from clean energy.

But if you want to find spaces that are feasible alternatives for siting solar panels, you need an efficient way to identify parking lots that doesn’t involve driving all over a metropolitan area. Our organization, Defenders of Wildlife, is a U.S.-based nonprofit dedicated to the protection of native wildlife and their habitats, and one of our focus areas is advancing the development of renewable energy sources like wind and solar in a wildlife-friendly manner. This means both identifying low-impact sites — those that require minimal alteration of natural landscapes — for new renewable energy production and thinking about ways that current sites might be used to benefit wildlife.

Recently, Defenders worked with The Nature Conservancy (TNC) on two such projects that used Google Earth Engine and the TensorFlow deep learning library to develop custom maps from satellite imagery.

Finding low-impact sites for solar installations

The Long Island Solar Roadmap, a collaboration between Defenders and TNC in New York, aims to advance the pace of solar energy installations on Long Island by identifying low-impact sites and reducing siting conflict. One of the primary target areas for development is parking lots. Similarly, TNC’s North Carolina chapter is working to promote low-impact solar siting and wanted to develop an up-to-date map of all existing ground-mounted solar arrays in the state. Each of these projects required mapping specific objects on a landscape: parking lots and ground-mounted solar arrays.

Computer vision is a growing field concerned with the automated identification of objects in images. These techniques use deep-learning models to teach computers to recognize and locate things like cats, cars, and faces in photographs. With the integration of Google Earth Engine and TensorFlow, we can apply these same techniques to satellite images and automate the mapping of specific objects on a landscape. Our interest was not only in locating parking lots and solar arrays, but in delineating the boundaries of these features — a computer vision task known as image segmentation. Image segmentation models identify the shape of objects by assigning each pixel of an image to a category.

To train an image segmentation model, we need example images where the objects we want to delineate are labeled. Margaret Fields, GIS Manager of TNC in North Carolina, provided us with 663 polygons delineating the footprints of ground-mounted solar arrays in North Carolina as of 2016. We received 645 hand-digitized parking lot boundaries in the town of Huntington, N.Y. from Karen Leu, GIS Specialist with TNC in New York.

Together, we used these polygons to label the pixels of different satellite images that represent solar arrays or parking lots, giving the models data on which to make predictions (the spectral data in the images) and labels to determine the accuracy of those predictions.
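As a concrete illustration, here is a minimal sketch of how polygons can be rasterized into per-pixel labels with the Earth Engine Python API. The asset path is hypothetical, and the labeling code we actually used may differ in its details.

```python
import ee

ee.Initialize()

# Hypothetical asset path, standing in for the digitized footprints.
polygons = ee.FeatureCollection('users/example/solar_array_footprints')

# Burn the polygons into a 0/1 image: 1 inside a feature, 0 elsewhere.
labels = ee.Image.constant(0).byte().paint(polygons, 1).rename('label')
```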

We used two different but related image segmentation approaches, and different sources of imagery for each task. To map existing solar arrays, we trained a U-Net model using multispectral imagery from the Sentinel-2 satellite system. Our workflow was based largely on the demo provided in the Google Earth Engine GitHub repo.
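For readers who want a sense of the architecture, here is a compact U-Net sketch in Keras. It is illustrative only — the filter counts are assumptions, and the input shape matches the 256 x 256-pixel, six-band Sentinel-2 chips described next rather than the exact model in the demo.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions, the basic U-Net building block."""
    x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(filters, 3, padding='same', activation='relu')(x)

def build_unet(size=256, bands=6):
    inputs = tf.keras.Input(shape=(size, size, bands))
    # Encoder: convolve, save a skip connection, then downsample.
    skips, x = [], inputs
    for filters in (32, 64, 128):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 256)  # bottleneck
    # Decoder: upsample and concatenate the matching skip connection.
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding='same')(x)
        x = layers.concatenate([x, skip])
        x = conv_block(x, filters)
    # One sigmoid output per pixel: the probability of "solar array."
    outputs = layers.Conv2D(1, 1, activation='sigmoid')(x)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
```

The skip connections are what let the decoder recover sharp object boundaries — important when the target features are small relative to the chip.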

We used Earth Engine to create a one-month, cloud-free mosaic of Sentinel-2 images covering North Carolina in January 2016, containing the blue, green, red, near-infrared, and shortwave infrared 1 and 2 bands. We standardized the mosaic so that each band was on a 0–1 scale, using the 99th percentile of each band as the maximum value. This image, along with the solar array labels, was then sampled as 256 x 256-pixel image chips, which were fed into the U-Net model.
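A minimal sketch of those preprocessing steps with the Earth Engine Python API follows. The cloud filter threshold and the reduction scale are assumptions for illustration, not the exact values we used.

```python
import ee

ee.Initialize()

# Blue, green, red, NIR, SWIR1, SWIR2 in Sentinel-2 band names.
BANDS = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']

north_carolina = (ee.FeatureCollection('TIGER/2018/States')
                  .filter(ee.Filter.eq('NAME', 'North Carolina')))

# One-month mosaic for January 2016, favoring cloud-free scenes.
mosaic = (ee.ImageCollection('COPERNICUS/S2')
          .filterDate('2016-01-01', '2016-02-01')
          .filterBounds(north_carolina)
          .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))  # assumed threshold
          .median()
          .select(BANDS))

def band_max(band):
    """99th percentile of one band over the state."""
    stats = mosaic.select([band]).reduceRegion(
        reducer=ee.Reducer.percentile([99]),
        geometry=north_carolina.geometry(),
        scale=100,  # coarse scale keeps the reduction tractable
        maxPixels=1e10,
        bestEffort=True)
    return ee.Number(stats.values().get(0))

# Rescale each band to 0-1 with its 99th percentile as the maximum.
maxima = ee.Image.constant([band_max(b) for b in BANDS])
normalized = mosaic.toFloat().divide(maxima).clamp(0, 1)
```

From here, the normalized bands and the label band can be exported together as 256 x 256-pixel chips (for example, via Earth Engine’s TFRecord export) to feed the U-Net.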

The generally smaller size of parking lots necessitated higher-resolution imagery to train a useful image segmentation model. We used National Agriculture Imagery Program (NAIP) imagery, which has 1-meter pixels, to precisely map parking lots across Long Island. The finer spatial resolution of NAIP data came with a tradeoff in both recency and spectral resolution. NAIP images are collected every two years per state, and the most recent image covering Long Island was from 2016. Additionally, NAIP images record only the blue, green, red, and near-infrared portions of the electromagnetic spectrum. Although these four bands provide less spectral information than Sentinel-2, the data structure allowed us to use a model that had been pre-trained on millions of photographs from the ImageNet collection.

DeepLab v3 is one of the state-of-the-art image segmentation models developed to delineate objects in photographs. The model takes 3-band input (usually RGB) as 512 x 512-pixel image chips. Photographs typically record red, green, and blue values on a 0–255 scale, the same as NAIP, so we did not need to rescale these images.
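To make the chip format concrete, here is one way to parse Earth-Engine-exported TFRecords into (512, 512, 3) image tensors and matching masks with tf.data. The feature names and file pattern are assumptions for illustration.

```python
import tensorflow as tf

BANDS = ['R', 'G', 'B']  # assumed feature names in the exported records
FEATURES = {name: tf.io.FixedLenFeature([512, 512], tf.float32)
            for name in BANDS + ['label']}

def parse(example_proto):
    """Decode one serialized example into an (image, mask) pair."""
    parsed = tf.io.parse_single_example(example_proto, FEATURES)
    image = tf.stack([parsed[b] for b in BANDS], axis=-1)  # (512, 512, 3)
    label = tf.expand_dims(parsed['label'], axis=-1)       # (512, 512, 1)
    return image, label

files = tf.io.gfile.glob('gs://your-bucket/naip_chips*')  # hypothetical path
dataset = (tf.data.TFRecordDataset(files, compression_type='GZIP')
           .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(64)
           .batch(8)
           .prefetch(tf.data.AUTOTUNE))
```

The same pattern applies to the U-Net task, swapping in six band names and 256 x 256 chip dimensions.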

Generating positive examples for image learning

Because solar arrays and parking lots are relatively sparse across their respective landscapes, we took two steps to ensure our models had enough positive examples from which to learn to recognize these features. First, we constrained the spatial extent of sampling to areas within 5 kilometers of the digitized features. Second, we added the centroids of the digitized polygons to the collection of sampling points. These points were then used to create the image chips used to train the U-Net and DeepLab models. A little more than 600 examples apiece makes for modest training data sets at best, so it was crucial to use image augmentation to artificially increase the variability of the images on which the models were trained.
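The augmentation itself can be as simple as random flips and 90-degree rotations applied identically to each chip and its mask; the specific transforms we used may have differed, but a sketch looks like this:

```python
import tensorflow as tf

def augment(image, label):
    """Apply the same random flips/rotations to a chip and its mask."""
    stacked = tf.concat([image, label], axis=-1)
    stacked = tf.image.random_flip_left_right(stacked)
    stacked = tf.image.random_flip_up_down(stacked)
    stacked = tf.image.rot90(
        stacked, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    n_bands = image.shape[-1]
    return stacked[..., :n_bands], stacked[..., n_bands:]

# Plugged into a tf.data pipeline like the one sketched earlier:
# dataset = dataset.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
```

Stacking the image and mask before transforming guarantees the label stays aligned with the pixels it describes.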

Photovoltaic solar array with native vegetation on Long Island. [Photo credit: Jessica Price]

We trained both U-Net and DeepLab models using Colaboratory notebooks. This provided a cloud-based Python computing environment already configured to run TensorFlow. By installing the Earth Engine Python API into the notebook, we were able to process imagery, sample training data, and start training models quickly and easily.
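The setup is brief. In a 2020-era Colab notebook it looked roughly like this (TensorFlow comes preinstalled; the Earth Engine client needs a one-time install and authentication):

```python
# Run once per notebook session.
!pip install earthengine-api

import ee
import tensorflow as tf

ee.Authenticate()  # interactive prompt for Google credentials
ee.Initialize()

print(tf.__version__)  # Colab's preconfigured TensorFlow
```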

We showed each model 50 epochs of training data and used TensorBoard to visualize loss and accuracy metrics at the end of each epoch. Using Keras callbacks, we saved the weights from the model that performed best during training in terms of intersection over union (IoU), which measures the degree of overlap between two sets of shapes. Both models achieved a maximum IoU of 80 percent on validation datasets, at which point we felt confident enough to make predictions and examine the output. The code used for these analyses is available here.
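A sketch of that training loop in Keras follows, assuming a compiled segmentation `model` and the `dataset` pipeline from the earlier sketches (`val_dataset` is an assumed held-out split). The small MeanIoU wrapper is our own illustrative workaround, since Keras’s MeanIoU expects hard class predictions rather than sigmoid probabilities.

```python
import tensorflow as tf

class BinaryMeanIoU(tf.keras.metrics.MeanIoU):
    """MeanIoU computed on thresholded sigmoid outputs."""
    def update_state(self, y_true, y_pred, sample_weight=None):
        return super().update_state(y_true, tf.round(y_pred), sample_weight)

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[BinaryMeanIoU(num_classes=2, name='iou')])

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='logs'),
    # Keep only the weights with the best validation IoU.
    tf.keras.callbacks.ModelCheckpoint(
        'best_weights.h5', monitor='val_iou', mode='max',
        save_best_only=True, save_weights_only=True),
]

model.fit(dataset, validation_data=val_dataset,
          epochs=50, callbacks=callbacks)
```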

Example predictions of ground-mounted solar arrays in North Carolina from the trained U-Net model (left) and of parking lots in Huntington, N.Y. from the trained DeepLab v3 model (right)

The model outputs yielded 55 confirmed new solar arrays in North Carolina and 2,942 parking lots in the town of Huntington. You can check out the results of these models through our Google Earth Engine app.

Our next step is to apply these models elsewhere. The solar roadmap is using a hand-digitized map of parking lots on Long Island for siting, and the trained DeepLab model is now being applied to the entire state of New York to quickly map parking lots for low-impact solar development statewide. We are also beginning a project to understand how restoring native vegetation at solar sites might benefit pollinators in different states — meaning we will need to map the locations of existing solar in these places.

While these are two of Defenders’ early projects using computer vision, we are excited that both could open the door to more opportunities to facilitate the development of low-impact renewable energy.
