Steven Ramkumar | Pinterest engineering manager, visual search & discovery engineering
Last month we announced Lens BETA, a new way to discover objects and ideas from the world around you using the camera in your Pinterest app. Since then, we’ve been rolling out the BETA to millions of Pinners and getting positive early feedback. As we expected, people are largely Lensing objects related to fashion, home decor and products, as well as other interests like art and food. Because of this tremendous response, today we’re launching Lens BETA to all Pinners in the U.S. on iPhone and Android, with an updated visual model and new product enhancements.
Lens BETA technology enhancements
The visual features (also referred to as visual embeddings) we use to represent images have to be optimized to map Lens queries–which often have suboptimal lighting and framing–to the billions of high-quality images on Pinterest. The key to solving this domain shift problem is training a joint-embedding model on the right mix of in-the-wild camera images and high-quality stock photography. As part of today’s update, we’re rolling out a new visual model that is better optimized for user-generated camera images. Ultimately, Lens keeps improving as more and more people use it.
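To illustrate the joint-embedding idea (this is a minimal sketch, not Pinterest's actual training code, and the toy vectors and function names are invented for the example): a contrastive, triplet-style objective maps camera-domain queries and catalog images into a shared embedding space, pulling a query toward its matching catalog image and pushing it away from non-matches.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def triplet_loss(query, positive, negative, margin=0.2):
    """Contrastive objective: zero once the matching catalog image
    is closer to the camera query than the non-match by at least
    the margin; positive otherwise, driving the embeddings apart."""
    return max(0.0, margin - cosine(query, positive) + cosine(query, negative))

# Toy embeddings standing in for encoder outputs.
camera_query     = [0.9, 0.1, 0.2]  # phone photo, messy lighting
matching_catalog = [0.8, 0.2, 0.1]  # same object, studio photography
other_catalog    = [0.1, 0.9, 0.3]  # unrelated object

loss = triplet_loss(camera_query, matching_catalog, other_catalog)
```

Minimizing a loss like this over mixed camera/stock training pairs is one standard way to make a single embedding space work across both domains, since retrieval then reduces to nearest-neighbor search in that space.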
Lens BETA product updates
Along with these technical changes, you’ll also notice two new features as you use Lens. First, Pinners can now tag the objects in the photos they take. This lets Pinners be a part of building Lens, and lets Lens learn from real data.
In addition to Lensing photos from your camera and camera roll, we’ve added a way to discover more ideas you may not have known about. Just tap the Lens icon in the Pinterest app and swipe up to find new idea lenses to try, from turntables to travel ideas.
The BETA launch of Lens is really just the start. We’re continuing to improve our visual technologies to better understand images and objects, as we face challenges where the image is the only available signal we have to understand a Pinner’s intent. This is especially difficult in the case of real-world camera images as people take photos in a variety of lighting conditions with inconsistent image quality and various orientations.
We’re excited by the possibilities that objects and visual search together can bring and are continuing to explore new ways of using our massive scale of objects and images to build new discovery products for Pinners around the world.
Be sure to update your Pinterest app for iPhone to V6.20 and your Android phone to V6.10. If you’re interested in tackling these computer vision challenges and building awesome products for Pinners, please join us!
Acknowledgements: Lens is a collaborative effort at Pinterest. We’d like to thank Maesen Churchill, Jeff Donahue, Shirley Du, Jamie Favazza, Michael Feng, Naveen Gavini, Binghui Gong, Andreas Helin, Jack Hsu, Alanna Iverson, Yiming Jen, Jason Jia, Eric Kim, Dmitry Kislyuk, Christina Lin, Byron Parr, Vishwa Patel, Albert Pereta, Steven Ramkumar, Eric Sung, Evany Thomas, Eric Tzeng, Kelei Xu, Mao Ye, Zhefei Yu, Cindy Zhang, and Zhiyuan Zhang; Trevor Darrell for his advisement; and Yushi (Kevin) Jing, Vanja Josifovski, Omar Seyal, and Evan Sharp for their support.