Google Adds Brand-Friendly Features to Visual Search

And more on what brand marketers need to know from the 2022 Google I/O event

Richard Yao
IPG Media Lab
12 min read · May 12, 2022


Image credit: Google on YouTube

The 2022 tech conference season continues as Google kicked off its annual I/O developer event on Wednesday with a hybrid keynote session. Over the two-hour presentation, Google introduced new software products and features, showcased its latest advances in AI research and development, previewed the new Android OS alongside the new budget-friendly Pixel 6a phone, and teased a couple of upcoming hardware products, including the highly anticipated Google Pixel Watch, which will launch this fall along with the Pixel 7 phones.

Amongst the deluge of announcements and debuts, the improvements made to Google Search and Google Assistant stood out as the most relevant for brands and marketers. Here’s everything future-forward brands need to know from this Google developer event.

Supercharging Google Lens with Hyperlocal Search and “Scene Exploration”

What Google Announced

Google is taking its main search product to the next level by adding a “near me” feature to its newly launched Multisearch feature, which seeks to combine visual search with text-based inquiries for a more layered search experience. This new addition supercharges Google Lens with the ability to find products available at local businesses nearby, making it a powerful hyperlocal discovery tool.

Users can take a picture of, for example, a handbag or a dish, search it via Google Lens and apply the “near me” filter to see where they can find that handbag or dish from a local store or restaurant. This feature will officially start to roll out later this year for English-language users.

Google’s search business started with text-based search, a highly profitable product from which Google still derives much of its ad revenue. Over the years, Google has made strides in diversifying search inputs, rolling out voice search via Google Assistant integration and visual search via Google Lens. Google CEO Sundar Pichai shared during the keynote that new internet users tend to rely more on voice search, and that Google Lens is now used 8 billion times per month, triple the figure from a year ago.

In addition to “Multisearch Near Me,” Google also previewed a new “Scene Exploration” feature, which allows users to scan a scene and search for a specific item within it. Described as being “like having a ctrl+F tool for the world around you,” this new Google Lens feature will let users, say, scan a grocery store aisle and pinpoint the exact item they are looking for on the shelf via an additional text input that narrows down the search.

On stage, Google demoed this feature by showing a picture of chocolate bars in a store, before applying Scene Exploration to extract additional info, such as which products contain nuts or dark chocolate, their user ratings, and more. Google shared that in the future this feature will even highlight more product details like whether they are from a minority-owned business or sustainably sourced.

Unlike with text-based search, Google has not quite cracked the code on monetizing its voice and visual search products. These two latest additions, however, point to a viable path for Google to make visual search more brand-friendly and monetize the growing usage of Google Lens.

What Brands Need to Do

For brands, visual search has long presented a new customer acquisition channel that can drive product discovery and direct purchase. With both “Multisearch Near Me” and “Scene Exploration,” Google is giving brands and retailers a viable way to surface their products through Google Lens to aid discovery. Google has yet to mention, or even allude to, paid search results in Google Lens, so for now all results will be surfaced organically. It does, however, have sponsored spots in Google Maps that let brands showcase local stores with unique icons, which could perhaps be extended into sponsored “near me” search via Google Lens down the line.

The move from text-based search to visual search means that brands that wish to show up organically in the search results will need to prepare a visual catalog of their products and index them properly. The “near me” feature functions by matching the user’s visual input with images from retail catalogs, restaurant menus, and user-uploaded photos in Google Maps reviews. So the more images and other visual assets that businesses have in their Google Maps business profiles, the more likely their products are to show up organically in Google Lens.

As for Scene Exploration, brands and retailers should prepare by adding detailed descriptions of each product to their product pages. It may be helpful to think of this feature as an AR overlay of product information gleaned from online stores, one that enhances the at-shelf experience and helps customers make a purchase decision. How to digitally enhance the brick-and-mortar experience has long been a hot topic among retailers, and this new Google Lens feature showcases a simple yet effective way to contextualize product information and combat choice paralysis.
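
For brands wondering what “indexing products properly” might look like in practice, one familiar starting point is schema.org Product markup, which Google already reads for shopping and rich results; whether it feeds Scene Exploration directly is our assumption, not something Google has confirmed. Below is a minimal sketch in Python that generates the JSON-LD a product page could embed; the product details and URLs are hypothetical.

```python
import json

# Hypothetical product record; the field names follow schema.org's Product
# vocabulary, which Google already reads for shopping and rich results.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Dark Chocolate Bar with Hazelnuts",
    "image": ["https://example.com/images/choc-bar-front.jpg"],
    "description": "70% dark chocolate with roasted hazelnuts. Contains tree nuts.",
    "brand": {"@type": "Brand", "name": "Example Cocoa Co."},
    "offers": {
        "@type": "Offer",
        "price": "4.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Emit the JSON-LD snippet that would be embedded in the product page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```

The more complete and accurate these attributes are, covering ingredients, sourcing, and availability, the easier it becomes for any visual search feature to match an item on the shelf back to the right product page.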

Taking Street View in Google Maps to 3D Immersive View

What Google Announced

“Multisearch Near Me” was not the only exciting feature Google is building on top of its Google Maps data. For years, the “Street View” feature has allowed people to vicariously walk down roads in cities they have never visited. Now, Google is adding 3D views and real-time satellite imaging to transform Street View into Immersive View.

This new feature essentially generates a 3D digital twin of the cities Google is mapping, which, in turn, allows users to get a more contextual preview of the locations of their choosing, including real-time busyness and surrounding traffic information. Google is also integrating 3D interior tours of local businesses into Immersive View, thus extending Street View indoors. All of these data points and features were already present in Google Maps; Immersive View brings the threads together and presents them visually and coherently.

Interestingly, much like the 3D interior tours of stores and restaurants on Google Maps, which are created by Google’s AI stitching together a series of 2D images, the imagery behind Immersive View is all computer-generated, combining Google’s satellite captures with its Street View shots. At launch, the feature will only work in a few neighborhoods in San Francisco, New York, Los Angeles, London, and Tokyo, but Google says more cities are coming soon.

What Brands Need to Do

For brands, Immersive View marks Google’s latest step in its quest to make Maps a more immersive, live experience, which, in turn, should help further highlight local businesses and provide more contextual information for consumers. To achieve that, Google has been opening up its Maps ecosystem to third-party developers, which means brands can now tap into tools such as Live View AR to build experiences on top of Google’s mapping infrastructure, whether that means helping visitors navigate malls and stadiums or simply find a parking space near their store locations.
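
To make the parking example concrete, here is a minimal sketch using the googlemaps Python client and the Places API; the API key and store coordinates are placeholders, and we are assuming the brand already has a Google Maps Platform account.

```python
import googlemaps  # pip install googlemaps

# Placeholder credentials and coordinates; "YOUR_MAPS_PLATFORM_KEY" and the
# store location below are hypothetical stand-ins for a brand's own values.
gmaps = googlemaps.Client(key="YOUR_MAPS_PLATFORM_KEY")
flagship_store = (40.7411, -73.9897)  # latitude, longitude

# Ask the Places API for parking options within roughly 300 meters of the store.
response = gmaps.places_nearby(location=flagship_store, radius=300, type="parking")

for place in response.get("results", []):
    print(place["name"], "-", place.get("vicinity", "address unavailable"))
```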

During the entire keynote, Google mostly steered clear of mentioning AR, save for a quick demo of a real-time translation feature on a pair of nondescript AR glasses toward the end. But Immersive View points to Google’s larger ambition of making Maps a more immersive experience that could easily be ported to an AR interface down the road. Overlaying real-time satellite data onto the AI-generated 3D neighborhood map undoubtedly lays the groundwork for more contextual computing and for extending Google’s AR features into the real world. In response, brands may need to consider how their business’s appearance in Google Maps will factor into their AR strategy.

Making Conversations with Google Assistant Easier & More Natural

What Google Announced

Besides improvements to visual search, Google also announced a few notable new features for Google Assistant that aim to make conversations flow more naturally. The new “Look and Talk” feature will allow users to activate Google Assistant by simply looking directly at a Nest Hub Max and starting to talk, as if directing a request or question by making eye contact. Google said it uses face and voice matching to activate this feature, and all video footage captured for this purpose is processed on-device to ensure user privacy.

Image credit: Google on YouTube

A similar feature lets users set customizable common requests (called “Quick Phrases”) so that those voice commands can be recognized without saying the “Hey Google” wake word first. Both features leverage ambient computing to eliminate the need to repeat “Hey Google” over and over, as Google aims to facilitate more natural, spontaneous conversations between users and its voice assistant.

In addition to these two new features, Google also demoed two of its natural language processing (NLP) models, LaMDA and PaLM, on stage to showcase the progress of its machine learning R&D. LaMDA 2 was billed as the more advanced conversational model and the “most conversational AI yet” from Google, while PaLM is a larger, general-purpose language model designed to handle many different tasks and languages within a single model. NLP capability forms the basis for many of Google’s AI-driven software features, such as the new automated summary feature added to Google Docs, or the live transcription feature coming to YouTube’s iOS and Android apps. It also, of course, forms the foundation of Google Assistant and Google’s voice search product.

What Brands Need to Do

As voice assistant usage becomes increasingly commonplace, having a voice strategy is becoming table stakes for brands. For some, a branded voice experience can add utility to the brand experience; for others, it can be a great engagement tool for building long-term relationships with customers. In addition, voice commands are poised to become a key input method for a post-mobile future dominated by keyboard-less wearable devices, so as voice-based conversational interfaces continue to mature, brands should get ahead of the adoption curve and start exploring relevant use cases today.

Relaunching Google Wallet along with Virtual Cards

What Google Announced

Google’s mobile payment strategy has suffered from a bit of back-and-forth. The original Google Wallet was sidelined in 2015, when Android Pay became the company’s one-stop payment solution, and the two were later merged into Google Pay. At this I/O event, Google decided to relaunch Google Wallet to handle things beyond digital payments, most notably identity authentication.

Less than a year after Apple announced support for driver’s licenses and state IDs in the iOS Wallet app, the search giant announced it is working with states and governments to add digital driver’s licenses to the relaunched Google Wallet app, which will roll out in the next few weeks. Support for digital Covid-19 vaccination cards will be added too. Users will be able to share these credentials via either NFC or QR code, so there’s no need to hand over your phone. In addition, Google Wallet will also support various digital passes, like student campus IDs or Disney World passes, as well as digital car keys, all things that Apple Wallet already supports.

Besides resurrecting the Wallet app, Google announced a virtual card feature that will be available on Android devices and via the Chrome browser. As part of its moves to strengthen data security, Google will allow users to easily mask their debit or credit card numbers with virtual ones to reduce fraud risk in online shopping. Google said it is working directly with the card networks (Visa, Mastercard, and American Express) to implement this capability. Apple offers a similar feature via its Apple Card, but not for other cards added to Apple Wallet.

What Brands Need to Do

Wallet apps are increasingly extending beyond simple payment and exploring new use cases such as authentication and investment. For brands that offer ticketed experiences, the emerging methods of identity authentication should be a top priority for integration to ensure a seamless admission experience. In addition, these digital authentication features could one day be merged with Google’s ambient computing efforts to allow for automated identity verification, which could be a game changer for brands in retail, travel, and hospitality, allowing them to provide a more personalized experience without explicitly asking for authentication of customer profiles or memberships.

Customizing Data Used for Ad Targeting via New Hub

What Google Announced

Data privacy continued to be a major talking point for Google, as it announced a series of new security features at the I/O event. Most notably, Google unveiled a new data control hub called “My Ad Center,” which will allow anyone with a Google account to easily manage their ads privacy settings, and even choose to see ads from categories or brands they like. Set to roll out later this year, the hub will let users opt in or out of categories such as the energy industry, food and grocery, or hybrid and alternative vehicles (e.g., EVs).

Similarly, Google also announced a new privacy feature that will make it easier for people to request the removal of their personal information (like addresses and phone numbers) from Google search results. It is also spearheading a host of “Protected Computing” features to strengthen user privacy by adding statistical noise and blurring to de-identify data, as well as by placing tighter restrictions on data access.

What Brands Need to Do

As consumer awareness around data security continues to rise, Google is sparing no effort to convince users that their data is safe and secure. The company has always been upfront about leveraging user data to provide better services, and it is quick to emphasize that it does not, in fact, directly sell any personal data. Still, as Apple continues to hammer home its privacy-first messaging, Google, whose existing business is built around ad revenue primarily from search and YouTube, knows it will need to do more to earn consumer trust.

In this context, the launch of “My Ad Center” is quite an interesting move, as it gives users more control over what types of ads they see, beyond the usual “I’m not interested in this ad” options that Google currently offers. Instead of only letting users report the ads they don’t want to see, Google is now directly asking users what types of ads they do want to see, letting them customize their Google ads experience intentionally, around what they personally value.

Brands can learn a thing or two from this “pull, not push” approach with regard to personal data and customer preferences. We’re over a decade into the mobile era, and most consumers today do understand that offering personal data in exchange for a more personalized experience can be a worthy trade-off, provided their data remains secure and protected. More brands could benefit from setting up a similar data control hub that asks customers directly what types of brand messaging they prefer to receive, at what frequency, and through which media channels. This way, brands can add a lot of nuance to their first-party data strategy without needing to aggressively collect data from other sources.
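
As a thought starter, the customer-facing preference record behind such a hub could be as simple as the sketch below; the field names, topics, and channels are our own hypothetical choices, not any platform’s specification.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record for a first-party preference center; every field is
# volunteered by the customer rather than inferred from tracking.
@dataclass
class MessagingPreferences:
    customer_id: str
    topics: List[str] = field(default_factory=list)    # e.g. "new arrivals", "sustainability"
    channels: List[str] = field(default_factory=list)  # e.g. "email", "sms", "push"
    max_messages_per_week: int = 1                     # customer-chosen frequency cap

# Example: a customer who only wants sustainability news by email, at most once a week.
prefs = MessagingPreferences(
    customer_id="cust-0001",
    topics=["sustainability"],
    channels=["email"],
    max_messages_per_week=1,
)
print(prefs)
```

However the record is modeled, the key design choice is that every field is volunteered by the customer rather than inferred, which keeps the resulting first-party data both useful and defensible.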

A Note on Pixel Watch & Pixel Buds Pro

Following a preview of the upcoming Android 13, Google concluded the I/O keynote with a slew of quick announcements about its upcoming hardware products, most notably the long-rumored Google Pixel Watch, the first smartwatch in the Pixel product family, as well as the new Pixel Buds Pro.

The Google Pixel Watch, which will launch this fall along with the Pixel 7 phones, features a distinct circular dome design, a tactile crown, and a stainless steel finish. Google says it will provide users with an industry-leading fitness and activity tracking experience, thanks to a deep integration with Fitbit, which Google acquired for $2.1 billion in a deal announced in 2019. No pricing was announced, but if the watch is priced competitively, it may manage to chip away at the Apple Watch’s dominance of the smartwatch market.

Similarly, the Pixel Buds Pro come with active noise cancellation and will retail for $199. In comparison, pricing for Apple’s AirPods Pro, which also feature active noise cancellation, starts at $249. In addition, the new Buds will support multipoint connectivity with “compatible phones, tablets, laptops, and TVs,” and will get a spatial audio update later this year.

Historically, Google’s hardware strategy has been a bit inconsistent and haphazard. Following Apple into the wearables market is not a surprising move by any means, and it may not be a bad one either. Some may argue that Google should just focus on what it does best, which is AI-driven software development, and leave the hardware products to third-party manufacturers and partners. But as Apple products, and the Pixel phones to a lesser extent, have proved time and time again, the best user experience comes from a deep integration of hardware and software. Therefore, it makes total sense that Google would make a Pixel smartwatch, following the ongoing computing paradigm shift as consumer attention starts to migrate away from mobile to other connected devices.

Want to Learn More?

If you are keen to learn more about Google’s latest announcements and what they all mean for your brand, or just want to chat about how to adapt to the changing user behaviors, the Lab is here to help! You can start a conversation by reaching out to our VP of Client Services Josh Mallalieu (josh@ipglab.com).
