Improved models lead to improved maps

Google Earth and Earth Engine
Nov 21, 2022

By Tanya Birch, Senior Program Manager, Google Earth Outreach

California Dept. of Fish and Wildlife studies bobcats and other wildlife and processes images in Wildlife Insights

Today, we’re announcing an AI model upgrade to Wildlife Insights, a cloud platform for biodiversity monitoring. We’ve improved the model’s ability to identify images that aren’t of wildlife, specifically blank images and images of humans. With this addition and a prior model release earlier this year, our blank image identification passed a critical bar, catching 88% of blanks with an error rate of less than 2%, compared to 68% with an error rate of 8% previously*. We also improved species classification by 26% on our evaluation set and grew the number of species our AI model identifies to 1,295 around the world.
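To make those numbers concrete, here is a minimal sketch of how blank recall and the corresponding error rate can be computed from a labeled evaluation set. The table and column names are hypothetical, not the Wildlife Insights schema.

```python
import pandas as pd

# Hypothetical evaluation table: one row per image, with the expert label
# ("blank" or a species name) and the model's top prediction.
eval_df = pd.DataFrame({
    "true_label":      ["blank", "blank", "puma", "blank", "bobcat"],
    "predicted_label": ["blank", "puma",  "puma", "blank", "blank"],
})

true_blank = eval_df["true_label"] == "blank"
pred_blank = eval_df["predicted_label"] == "blank"

# Recall: share of genuinely blank images the model catches.
blank_recall = (true_blank & pred_blank).sum() / true_blank.sum()

# Error rate (1 - precision): share of images flagged as blank that actually
# contain an animal and would therefore be wrongly discarded.
blank_error_rate = (~true_blank & pred_blank).sum() / pred_blank.sum()

print(f"blank recall: {blank_recall:.0%}, blank error rate: {blank_error_rate:.0%}")
```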

Wildlife Insights leverages AI, Google Maps and Google Cloud Platform to quickly store, manage, and analyze camera trap data. This enables researchers to better determine where species live so they can propose measures to better protect them. This past year was momentous for Wildlife Insights, marking the first year the technology has been available to the general public.

Since its launch in July 2021, users of Wildlife Insights have shared millions of camera trap images from all over the globe.

Discarding images without animals

Blank images, which are images without wildlife, can constitute up to 80% of images collected by cameras. By automatically identifying these, Wildlife Insights saves researchers from the tedious task of reviewing unhelpful images. With today’s model update, biologists can find images of interest much faster.
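In practice, this filtering step amounts to thresholding the model’s confidence that an image is blank. Here is a minimal sketch, assuming a hypothetical CSV export with image_id, top_class, and confidence columns (not the actual Wildlife Insights download format):

```python
import pandas as pd

# Hypothetical export of model predictions, one row per image.
# Assumed columns: image_id, top_class, confidence.
detections = pd.read_csv("camera_trap_predictions.csv")

# Assumed threshold; pick it based on how costly a missed animal is.
BLANK_THRESHOLD = 0.9

is_confident_blank = (
    (detections["top_class"] == "blank")
    & (detections["confidence"] >= BLANK_THRESHOLD)
)

auto_discarded = detections[is_confident_blank]   # likely blanks, set aside
to_review = detections[~is_confident_blank]       # images a biologist still checks

print(f"{len(auto_discarded)} of {len(detections)} images set aside as likely blanks")
```

The threshold trades off how many blanks are removed automatically against the risk of discarding an image that actually contains an animal.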

Mule deer pose frequently for the cameras in Wildlife Insights

Finding the most common species (including Homo sapiens!) more reliably

In addition to filtering out blanks, it’s also important that we confidently predict the most common species in images. One of those common species is humans: because images of people are not intended for public access, it’s critical that users can trust the system to find them reliably. These improvements are due in part to growing the training data from 9M to 35M labeled images across 1,295 species, thanks to expert wildlife researchers contributing labeled images that offer better coverage of common species across regions.

California Department of Fish & Wildlife consolidated images from thousands of cameras across the state on Wildlife Insights. Each map cluster represents the number of camera deployments over a defined time period and geographic area and helps CDFW make land-use planning decisions.

California species are now on the map

One of our users, the California Department of Fish & Wildlife (CDFW), uses Wildlife Insights to consolidate all of the state’s historical and current wildlife data from 10,000+ camera traps: nearly 30 million images and growing. One CDFW project, for example, deployed hundreds of camera traps across California over the last two years to study bobcats. Once the images were retrieved from the cameras, they were uploaded to the cloud platform for processing, carried out with the help of close to 100 volunteers. According to CDFW researcher Lindsey Rich, the AI model released in spring 2022 has helped expedite data processing considerably, because it reliably identifies images without animals (‘blanks’). By centralizing its camera trap data on Wildlife Insights, CDFW can now capitalize on the wealth of camera trap data in the state and apply it to the management of California’s diverse wildlife resources.

Wildlife biologists use data from camera traps to inform land management decisions.

Improved models lead to improved maps for land managers

Increasingly, camera trap data are critical to informing decisions made by land managers. Camera traps not only let us observe wildlife directly to understand species ranges and habitat occupancy; they also support the science of mapping wildlife migration. Camera data can be combined with satellite imagery and other environmental data to monitor and project species’ habitats, giving conservation planners the most recent view of current and projected future habitat.

One such research effort, TerrAdapt, models habitat suitability and species range shifts with the help of Google Earth Engine, using camera trap data and data from wildlife collars to validate habitat suitability and connectivity models.
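As a rough illustration of that kind of workflow (not TerrAdapt’s actual pipeline), here is a minimal sketch using the Earth Engine Python API: it samples environmental covariates at camera trap locations and fits a simple habitat-suitability model. The camera-site asset ID, the ‘detected’ property, and the choice of covariates are all assumptions made for the example.

```python
import ee

# Authentication is assumed to have been done already (ee.Authenticate()).
ee.Initialize()

# Environmental covariates: elevation plus a land cover layer.
elevation = ee.Image("USGS/SRTMGL1_003").rename("elevation")
landcover = ee.Image("ESA/WorldCover/v100/2020").rename("landcover")
covariates = elevation.addBands(landcover)

# Hypothetical table of camera trap sites with a 'detected' property
# (1 = species detected, 0 = not detected), uploaded as an Earth Engine asset.
camera_sites = ee.FeatureCollection("users/example/camera_trap_sites")

# Attach covariate values to each camera location.
training = covariates.sampleRegions(collection=camera_sites, scale=30)

# Relate detections to covariates with a random forest, outputting probabilities.
classifier = (
    ee.Classifier.smileRandomForest(50)
    .setOutputMode("PROBABILITY")
    .train(
        features=training,
        classProperty="detected",
        inputProperties=["elevation", "landcover"],
    )
)

# Map relative habitat suitability across the landscape.
suitability = covariates.classify(classifier)
```

In practice, as noted above, camera trap and collar data also serve to validate such models against observed wildlife use.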

Because camera trap data provide not only presence data but also absence data, they are integral to understanding whether a conservation management decision is effective, and whether wildlife are truly living in areas that the habitat models project to be suitable.
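To see why absence data matter, a detection/non-detection summary per camera site is enough to compute naive occupancy, the fraction of surveyed sites where a species was recorded. A small sketch with made-up records:

```python
import pandas as pd

# Hypothetical detection records: one row per camera site per survey week,
# with 1 if bobcats were detected that week and 0 otherwise.
records = pd.DataFrame({
    "site_id":         ["A", "A", "B", "B", "C", "C"],
    "bobcat_detected": [0,   1,   0,   0,   1,   1],
})

# Naive occupancy: the fraction of surveyed sites with at least one detection.
per_site = records.groupby("site_id")["bobcat_detected"].max()
naive_occupancy = per_site.mean()

print(f"bobcat detected at {per_site.sum()} of {len(per_site)} sites "
      f"(naive occupancy = {naive_occupancy:.0%})")
```

Formal occupancy models go further and correct for imperfect detection, but even this simple summary uses the non-detections, which presence-only records cannot.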

TerrAdapt monitors and projects future habitat for species and ecosystems. Shown here: projected habitat range shift for montane dry forest over the period 1990–2100; cool colors show areas of habitat gain, warm colors show areas of habitat loss, and green areas represent stable climatic refugia.

Spatially explicit monitoring of biodiversity and habitat informs decisions on the ground, helping people coexist with wildlife in a warming world, make well-informed land-use decisions, and increase ecological resilience to climate change risks. As countries and regions make plans to protect 30% of the planet by 2030 (known as 30x30), spatial data is essential for consensus-based land-use decision-making across geographies.

Learn more about our AI models, contribute your data through Wildlife Insights to help improve the models, watch a 3-minute Lightning Talk from Geo for Good 2022, and explore the world’s largest collection of publicly available camera trap images.

*We are reporting blank recall and precision on our out-of-domain unseen evaluation set. Users will experience variation on their camera trap projects depending on their region and species of interest, and whether those are represented in our training dataset.
