Introducing LepSnap

Image Recognition for Moths & Butterflies

There are over 175,000 species of moths and butterflies (Lepidoptera, “lep” for short) worldwide. Although some species are easy to recognize in photos, most can be quite challenging to ID. Some leps look nearly identical to sibling species, while others can be identified only by examining specific anatomical features. With roughly 30,000 species in North America alone, no single paper-bound field guide can cover every known species, even one focused on a single geographic region.

So we’ve set out to take on this identification challenge by writing software that reimagines the field guide for a world dominated by smartphones. LepSnap is a “community field guide” dedicated to cataloging all species of Lepidoptera. It’s a smartphone app and web platform that uses the latest image recognition AI (Artificial Intelligence) to give people a jumpstart on their moth, butterfly, and caterpillar identifications… a digital field guide that learns from the community it serves.

LepSnap is designed for three audiences:

The Curious
Ever wonder what kind of big, fuzzy moth just landed on your windowsill? (Us too.) Just snap a photo with your phone and — shazam! — LepSnap analyzes the photo and quickly suggests visually similar species. When you publish a photo on LepSnap, other community members can verify or correct your ID. The limitation of image recognition technology is that it is only as good as the images it’s trained on. As of this post, LepSnap is well trained to recognize commonly encountered North American moths and butterflies. Learning to recognize all 175,000+ species worldwide will take some time... but we are resolute about getting there!

The Naturalist
As naturalists, we can all play a role in advancing scientific knowledge of species ecology simply by recording and sharing our observations. LepSnap lets you submit a “research grade” record: photos accompanied by date, time, and locality information, so the record has value to the scientific community. LepSnap sends all research grade field observations (photos of living moths and butterflies) to the Lepidoptera of North America Network (“LepNet”) data portal, and contributes photographs of collection specimens to iDigBio, a natural history data aggregator.

The Specialist
LepSnap serves lepidopterists in three ways: 1) its image recognition AI eases the burden of identifying common or distinctive species for others, freeing up time to teach identification skills; 2) it makes it easy to digitize collection specimens with a smartphone; 3) it offers a new avenue for showcasing museum collections. We’ve built a deep integration with the specimen data management platform Symbiota (namely its SCAN data portal), so if a moth/butterfly record is published on SCAN, it will automatically appear on LepSnap.

How much progress have we made so far? We’ve launched an early beta version of LepSnap for iPhone on the App Store, and we are almost finished with the beta version for Android. If you don’t have an iPhone, you can publish from your desktop computer (Chrome or Safari recommended) at LepSnap.org.

LepSnap is, and will always be, a free, non-commercial public good.

We’re just getting started 🚀
LepSnap is very young, with all the naiveties of an application that has seen limited user feedback. We are calling on all lep-lovers to help us better train LepSnap’s image recognition AI. We’d love to hear your feedback and ideas on how we can make the app a richer experience, and to partner with you on image training and ID curation. Just say hello: hello@fieldguide.net

Download LepSnap for iPhone

A short tutorial about how to publish on LepSnap with an iPhone
If you don’t have an iPhone, you can still publish on LepSnap.org (Android version coming soon)

A big thank you 🙏
LepSnap could not exist without the tireless work of Taylan Pince and his amazing team (Fergal, Cem, Marco, Omurden, Ömer), the continual guidance and support of Neil Stanley Cobb and Benjamin Brandt of Symbiota/SCAN/LepNet, and the computer vision expertise provided by Grant Van Horn of Caltech’s Computational Vision Lab.

Snap away! 🐛🦋