Google Lens. Welcome to the future of search.
In this long-form article, I look at the announcement at Google I/O 2017 introducing Google Lens, a new way for consumers to search. I’ll investigate three hypotheses that explore why the announcement is significant for search marketers and look at four ways that marketers should think about what this means for their business in the mid to long-term.
At Google I/O’s keynote event yesterday, Sundar Pichai made a set of announcements around the latest AI technology the company has been working on, effectively reinventing Alphabet as an AI company. The product I’m most excited by is Google Lens, a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. In simple terms, it’s an evolutionary shift in how we will search and interact with objects in the future.
Initially, Lens will begin rolling out to Google Photos and Google Assistant, but it’s likely to move into other Google products as the platform matures.
Here’s the announcement video, to give you a sense of what was unveiled:
What always amazes me is how carefully Google crafts and tests the foundational elements of a new product inside earlier products, often as features many observers dismissed as gimmicky. We saw some of this in the now-shuttered Google Goggles (which let us get information from landmarks, paintings and so on) and in Google Translate, which has offered image-based translation for a few months (point your phone at a sign and it dynamically translates it into your preferred language).
Working in a business rooted in search, we’ve seen analysts debate whether Google can continue to grow a “legacy” business like search, and whether Google’s move into Cloud would be the way to future-proof the company. But as I look at Google Lens I see a genuinely exciting future for search, just as I did last year when we started thinking about the implications of voice search (which is still growing and very significant!).
Why is Google Lens such a large evolution in search?
I’m going to present three hypotheses on why the announcement is so important, and then consider the marketing opportunities this will present in the future (no marketing features were mentioned at I/O, which is an event for developers).
1. By offering a utility feature, we’ll see strong consumer adoption of Google Lens technology
Anyone who’s been involved in mobile app development or promotion will be aware that it is extremely difficult to generate ongoing, regular user engagement with an app. If your app doesn’t provide a utility feature (e.g. communicating, sharing, learning, searching) that promotes daily or weekly use, it’s likely to end up in the app graveyard.
comScore (via Marketing Land) reported in 2016 that 85% of all mobile app time is concentrated in just five apps, all from the FANG companies (Facebook, Amazon, Netflix, Google). This means Google already have a key advantage in building and retaining loyalty, and adding a valuable feature to apps like Google Photos and Google Assistant reinforces that opportunity.
A utility focus inside technology and apps is often considered dull, but it commands the strongest consumer loyalty because the value exchange is simple. The key problem with any new technology is getting consumers to adopt it (i.e. to be trained in it), and the simplest solution is getting them to use a feature so frequently that it becomes second nature.
In the case of Google Lens, this utility feature is embedded right inside Google Photos: the ability to “ask questions” of a photo. The use case Google described was extracting a telephone number from a photo. While the engineering behind it is no doubt extremely complex, the feature itself is very simple to use, a great example of utility.
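To make the phone-number use case concrete, once an image has been converted to text the “ask a question” step reduces to structured extraction. The sketch below is purely illustrative and is not Google’s implementation; the sample shopfront text and the pattern are invented for the example:

```python
import re

def extract_phone_number(ocr_text):
    """Find the first phone-number-like token in OCR'd text.

    Illustrative only: a Lens-style pipeline would pair extraction
    like this with an action ("call this number").
    """
    # Loose pattern: optional country code, then 8+ digits that may
    # be separated by spaces or dashes.
    match = re.search(r"(?:\+\d{1,3}[ -]?)?(?:\d[ -]?){7,12}\d", ocr_text)
    return match.group(0).strip() if match else None

# Hypothetical OCR output from a photo of a shopfront sign.
print(extract_phone_number("Ray's Pizza - open daily - call +65 6123 4567"))
# -> +65 6123 4567
```

The hard part, of course, is the vision model that produces the text in the first place; the point here is only that the consumer-facing step is simple.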
For more thoughts on utility features for search marketers, please take a look at my previous article — Google Pixel. 3 ways this launch is about search, not hardware.
2. Boring features will be more valuable for Google growth than flashy features
OK, maybe not boring. But when we compare the example use case of pointing your phone at a restaurant and getting a review back against Snapchat and Instagram filters, it’s quite clear the features Google are offering are not as exciting. They are, however, things that will soon become so ubiquitous that we forget they were ever a feature. And that’s a really good thing: do you think about windscreen wipers very often? Probably not, but they’re a brilliant yet dull feature.
The ability to search real-world objects and get back information on them has long been the aspiration of companies such as Blippar, who have been developing machine learning solutions to achieve this goal. Google is certainly not the first to market, but they do have the consumer reach to succeed.
The same stands for search marketing: the original digital media channel has lost much of its glamour over the past few years when compared with the opportunities in programmatic, native advertising or social. But as we’ve seen from Google’s investment in its search product over the past couple of years, from Enhanced Campaigns, which accelerated the shift to mobile, through to the recent Customer Match and Similar Audiences updates that improved audience addressability, search is now more relevant than it’s ever been.
And look, this is not to say that Snapchat filters are not valuable, they are and will continue to be for brands — but I genuinely believe more in “boring” utility product features for tech players as a consumer, marketer, and investor.
3. Google Tango and its Visual Positioning Service (VPS) will integrate AR functionality
Google Tango is a quiet side project at Google, primarily focused on the education sector. At the moment Tango is limited to a handful of devices and pilot projects in the AR space, but for me, the application with the most potential is a smaller project called VPS (Visual Positioning Service).
As Google describes it, VPS “helps devices quickly and accurately understand their location indoors. While GPS is great for getting you to the storefront, with VPS your device can direct you right to the item you’re looking for once inside.”
At the moment VPS is being positioned as an indoor mapping service, with use cases such as supporting visually impaired people, but I see some mid- to long-term opportunities that could be really interesting for brand advertisers. The technology (which, having read through the developer documentation, I can confirm is extremely complex) at its core allows developers to map any surface and overlay content on it.
Over time I see VPS capabilities converging with the AI of Google Lens to create a holistic, data-powered overlay on everything we see. That’s definitely interesting for both consumers and marketers.
While researching this article I discovered an amazing project by the WWF in Singapore (where I’m based) that utilises Tango AR technology:
What’s the impact for marketers?
At this stage Google haven’t announced any marketing opportunities within the new platforms, so my thoughts here are pure speculation about what could happen over the next few months or years. But given Google’s past trajectory, and knowing that these projects will need revenue streams, I’m confident that an iteration of one or more of these suggestions will happen.
In my previous article I suggested that the future of search would be driven by (1) Google Assistant driving voice search adoption, (2) cognitive search and (3) API-powered search. All of these are now starting to happen, and the Google Lens announcements add further options to this future.
1. AR paid local listings ads will launch
I previously posited that Google’s advancements in AI might leave its search business dead, because Google would push information directly to users. What I did not foresee was that the investment in AI would lead to a push toward camera-based search. I’m very confident this ad format will happen, as the announcement suggested that one use case would be searching for restaurants by pointing your phone at them.
As has happened with standard search over the years, this creates an immediate need for marketers to step up their local search activities, integrating rich snippets (for reviews and so on) so that natural listings are well populated. In future we could well see sponsored results appear, probably in the form of “similar pizza restaurants nearby”.
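Rich snippets are driven by structured data markup on the business’s pages. As a minimal sketch of the kind of schema.org review markup involved, here is a JSON-LD payload built in Python; the restaurant name, address and rating are invented for illustration:

```python
import json

# Minimal schema.org markup for a local restaurant listing with
# review data. All business details here are invented.
listing = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Pizza Kitchen",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Road",
        "addressLocality": "Singapore",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "212",
    },
}

# Serialized JSON-LD, ready to embed in a page's
# <script type="application/ld+json"> tag.
json_ld = json.dumps(listing, indent=2)
print(json_ld)
```

Markup like this is what already feeds review stars in standard results; a camera-driven local result would plausibly draw on the same data.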
2. AR eCommerce will become next-gen Google Shopping
Outside of China, eCommerce is still a small component of the retail experience (less than 12% of all retail is online today), but with the likes of Amazon and Lazada continually pushing to change consumer behaviour, it’s only natural to assume that a future iteration of Google Shopping will be image-based and linked to the camera.
A simple use case I can imagine in my own house: my son plays with a toy at a friend’s apartment and demands the same toy for his birthday. A quick snap of the toy with my phone and I’m connected to Lazada to buy the product via a paid shopping ad.
3. Brands will create AR utility services via new APIs
In my last article, I mentioned that I’m a big fan of the app from my power company in Singapore, SP Services. It has a feature that would lend itself very well to more automation: meter reading. At the moment the power company only reads meters every two months, and in between I’m given the option to enter my readings manually, so a feature that automatically scans my reading from a photo would save me time and make me a happier customer. Of course, more cynical readers will ask why the power company doesn’t simply move to IoT meters that eliminate all forms of manual reading, which would be even better!
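A sketch of what such a feature might do after the vision layer has already turned the meter photo into text. The helper, pattern and sample OCR output below are all hypothetical, not anything SP Services or Google actually ships:

```python
import re
from typing import Optional

def extract_meter_reading(ocr_text: str) -> Optional[float]:
    """Pull the first plausible kWh reading out of OCR output.

    Assumes a Lens-style vision service has already converted the
    meter photo to text; this is only the post-processing step.
    """
    # Domestic meter displays are typically 4-6 digits,
    # sometimes with one decimal place.
    match = re.search(r"\b(\d{4,6}(?:\.\d)?)\b", ocr_text)
    return float(match.group(1)) if match else None

# Hypothetical OCR output from a photo of a domestic power meter.
print(extract_meter_reading("SP METER kWh 04731.5 SN AA101"))  # -> 4731.5
```

The interesting product work is in the camera and submission flow around this; the extraction itself is trivial once the image is readable.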
Many other industries could benefit from this kind of automation for both consumers and businesses.
4. Product-level data services will open up retail search
Over the past 12 months, SEO teams around the world have been working with CPG clients to improve and extend product metadata on eCommerce marketplaces. This is extremely valuable to consumers because it improves their marketplace experience, but imagine this level of data being available during a supermarket shop: Knorr offering recipe details and directions to the products you need to buy, or parents of kids with allergies checking ingredients before buying. As product-level metadata improves, this is a natural transition once the technology can complete the search loop in-store.
This scenario is interesting because it’s likely less about a paid search offering and more about API-level partnerships between brands and Google, with Google acting as a facilitator of enhanced search.
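To make the allergy use case concrete, here is a minimal sketch of the product-level metadata lookup this would enable. The product records, identifiers and helper are invented for illustration; in practice the metadata would come from a brand or marketplace API:

```python
# Invented product metadata of the kind brands already maintain
# for eCommerce marketplaces, keyed by a scanned product identifier.
PRODUCT_METADATA = {
    "8888001": {"name": "Chicken Stock Cubes", "allergens": ["celery", "gluten"]},
    "8888002": {"name": "Coconut Milk", "allergens": []},
}

def check_allergens(product_id, avoid):
    """Return the allergens in a scanned product that a shopper wants to avoid."""
    product = PRODUCT_METADATA.get(product_id)
    if product is None:
        return []  # unknown product: no metadata available
    return sorted(set(product["allergens"]) & set(avoid))

# A parent scans a stock cube pack while avoiding gluten and peanuts.
print(check_allergens("8888001", {"gluten", "peanuts"}))  # -> ['gluten']
```

The value here comes entirely from the quality of the underlying metadata, which is exactly the work SEO teams are already doing with CPG clients.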
Once again we’ve seen how progressive innovation across successive products (Google Glass, Google Goggles, Google Translate, Word Lens) rolls up into a comprehensive feature set that developers (and, in future, marketers) will enjoy building on and consumers will enjoy using in their everyday lives.
Search has, for the second time in the last 12 months, become sexy again. And that’s a good thing for consumers. It may be some time before some of my predictions become actionable for marketers, but for now I think it’s important to keep one eye on today’s opportunities and one eye on the future; things move faster than we think.
Let’s also not forget that Google isn’t the only player in this space — both Blippar and Pinterest are very active in visual search — Blippar, in particular, has a significant lead in terms of AR capabilities. Pinterest Lens did seem to have a little fun trolling Google after the announcements!