What Brands Need To Know About Google's 2019 I/O Keynote
Google doubles down on AR and voice, unveils affordable Pixel phones and new smart display, and emphasizes privacy and accessibility
Editor’s Note: This is an abridged edition of our Fast Forward newsletter on the latest announcements from Google’s annual developer conference and their brand marketing implications. For the full version, please contact our VP of Client Services, Josh Mallalieu (email@example.com), to request it.
Tech conference season continues as Google took the stage to share its latest updates with the world in a 2-hour keynote address that kicked off its annual developer event. Partly thanks to heavy leaks prior to the event, this keynote was light on show-stopping surprises and innovations, focusing instead on improving what Google already does well: search is now supercharged with AR features; Google Assistant gets a speedy upgrade and becomes more capable as the conversational user interface for Pixel phones; and podcasts received some attention, though less than the extensive talks on privacy features and accessibility programs. There was no mention of VR or Google’s new game-streaming service Stadia, likely because Google is saving gaming-related announcements for the E3 event next month.
All things considered, this was a solid, if a little unexciting, outing for Google as it doubles down on what works well for its products and continues to steer users towards AR and voice to capture new sources of growth for its search business. CEO Sundar Pichai proposed that the company mission is to “create a more helpful Google for everyone,” and this statement was echoed throughout the keynote. Emphasizing “helpfulness,” aka providing value to users via convenient access to information and Google services, is a smart move for Google at a time when public opinion is souring on big tech. Of course, a helpful Google is also a Google that most users will happily trust their data with, and that’s exactly what Google is counting on.
AR Integrations with Search and Google Lens
Kicking off the keynote with its core search product, Google announced it is integrating AR features directly into the search results, allowing users to either view 3D objects in 360 degrees or place them in their surrounding environment for further inspection. This addition, which will start to roll out later this year, will no doubt continue to broaden the reach of mobile AR and allow brands to engage more people with their 3D assets. For example, Google demoed an AR experience featuring a virtual pair of New Balance sneakers, which launched directly from the search results, but stopped short of making the sneakers directly purchasable.
Nevertheless, this presents a new search channel for brands such as IKEA and Target that are already exploring AR as an intuitive way for consumers to examine products in detail. According to CNet, developers can add support for their own 3D objects by adding “just a few lines of code,” and Google says it is already working with NASA, New Balance, Samsung, Target, Volvo, and other groups to add support for their 3D models. Given how other AR experiences from the likes of Snapchat and Facebook have added support for direct purchases, it seems inevitable that Google will soon make the AR objects in its search results shoppable too.
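The “few lines of code” CNet describes is consistent with Google’s open-source `<model-viewer>` web component, which Google has promoted as the way to publish AR-ready 3D models on the web. As a rough sketch of what publishing a 3D asset looks like (the `sneaker.glb` file name is a placeholder, not an asset from Google’s demo):

```html
<!-- Load Google's open-source model-viewer web component -->
<script type="module"
        src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>

<!-- Display a glTF/GLB model with 360-degree camera controls and an
     AR button on supported devices; "sneaker.glb" is a placeholder -->
<model-viewer src="sneaker.glb"
              alt="A 3D model of a sneaker"
              ar
              camera-controls
              auto-rotate>
</model-viewer>
```

For brands, the practical takeaway is that the heavy lift is producing the glTF/GLB asset itself; wiring it into a webpage is minimal.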
In addition to AR integrations in search, Google is also bringing its visual search product Google Lens into search. The company touted that Google Lens has been used more than 1 billion times, and just as its search engine has indexed the web, Lens is indexing the real world. Some interesting new features added to Google Lens include the ability to see popular items on a restaurant menu highlighted, with links to their photos and user reviews if available in Google Maps; the ability to recognize a bill and calculate the tip and, if you are splitting the bill, everyone’s share; and the ability to activate digital content in AR form from physical media like magazines and newspapers. These are all very helpful utility applications of AR, and the last feature, in particular, presents an opportunity for brands to work with publishers to create interactive multimedia content that is accessible to all Android users via Google Lens. Users of Pixel phones will also be getting a preview of AR-powered navigation in Google Maps soon, another example of Google’s growing set of utility use cases for mobile AR.
Google Assistant Gets Faster and Smarter
Google Assistant has been a highlight of every Google I/O event in recent years, as the search giant continues to double down on speech recognition and machine learning to make its digital assistant smarter and faster. Following up on last year’s jaw-dropping on-stage demo of Google Duplex, which has now rolled out to users in 43 states in limited testing, Google opted to move away from mimicking human speech and instead focused on applying Duplex to handle web-based tasks for users, such as booking a rental car or a movie ticket. Dubbed “Duplex on the Web” and coming later this year, presumably to Pixel devices first, this new feature will leverage machine learning and Google’s user data to guide users through the process and save time. Although Google says Duplex on the Web does not require any action on the business’ side to support it, relevant brands should still structure their webpages so that Google’s algorithm can easily find what it is looking for.
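Google hasn’t published technical requirements for Duplex on the Web, but the standard way to make a booking page machine-readable for Google today is schema.org structured data. A hedged illustration of what that looks like for a movie-ticket page (the event name, theater, URL, and prices here are all made up):

```html
<!-- schema.org JSON-LD markup describing a movie screening and its
     ticket offer, so crawlers can parse the page's key details -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ScreeningEvent",
  "name": "Example Movie",
  "startDate": "2019-06-01T19:30",
  "location": {
    "@type": "MovieTheater",
    "name": "Example Cinema"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://example.com/tickets",
    "price": "12.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

Markup like this is already a search best practice, so it is a low-risk investment even before Duplex on the Web arrives.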
In addition to these new features courtesy of Duplex, Google Assistant will also get a significant speed upgrade. Thanks to a recent breakthrough Google had in restructuring its speech-recognition AI models and shrinking them from 10 GB to just 0.5 GB, this updated version of Google Assistant will process requests on device, as opposed to sending them to cloud-based servers. This not only makes Google Assistant more secure, it also makes it react “ten times faster.” In some cases, using Google Assistant to jump between apps and summon a piece of information (likely from Google services like Gmail and Google Maps) will be quicker than tapping on the phone. This new version of Google Assistant is set to roll out to Pixel phones later this year.
In addition, Google is also rolling Assistant out to cars. It announced Google Assistant will be coming to the popular navigation app Waze, which further expands its reach. Plus, Assistant will gain a new driving mode, coming this summer, to help Android users turn their older vehicles into connected cars via their phones. Besides the usual features one would normally find in a plugged-in dashboard experience, Google will also suggest music and podcasts for in-car listening based on user data and preferences.
To make Google Assistant even more useful, Google is now gathering “Personal References” for Google Assistant so that it understands you better when you ask for information about your family and friends, dates of personal anniversaries, and so on. It will also roll out a “pick for you” feature where Google Assistant will be able to suggest recipes and events based on past user interactions and, likely, data from other Google services.
Available on 1 billion devices worldwide, Google Assistant remains a powerful tool for Google to bet on as the next big thing. Google has been making a concerted effort to integrate search, Google Lens, and its Express shopping service into Google Assistant to make it more helpful to users, and those efforts are starting to pay off. By speeding it up and making it more personalized, Google is making strides toward Google Assistant replacing touch as an alternative interface for mobile users today, and prepping it as a primary UI for its wearable devices of the future.
New Nest-Branded Smart Home Display
Speaking of voice interfaces, Google also announced a new Nest-branded smart home display called Nest Hub Max. Like the Google Home Hub released last year, which has now been renamed Nest Hub, this new smart display is powered by Google Assistant and can function primarily like a smart speaker a la Google Home. Unlike its smaller predecessor, however, Nest Hub Max comes with a front-facing camera, enabling not only video calls via Google Duo but also remote home surveillance via the Nest Cam app, facial recognition for delivering personalized responses, and simple gesture controls. Users can simply hold up an open hand to stop reminders or media playback. It supports YouTube TV for watching live and on-demand content, although it seems far more suitable for displaying step-by-step recipes and how-to videos.
Google says it wants to help people build a helpful home with devices that are easy to use, fully personalized, and respectful of user privacy. It is clear that Google learned from Facebook’s mistakes with Portal and placed emphasis on the value a camera adds to the product. Google also made sure to include a physical switch on the back of the device that disables the camera, to assuage privacy concerns. That caution is also reflected in Google’s decision to move its smart display products under the Nest branding, which has a stronger association with home security, strategically distancing the camera-equipped device from the other Google Home smart speakers as well as the main Google brand.
Aiming to serve multi-user households, Nest Hub Max will also deliver personalized replies based on a voice-matching feature that, once users opt in and set it up, can distinguish between multiple users by sound and pull up their individual profiles to tailor its responses to their requests. This, coupled with the aforementioned facial recognition feature, can help Nest Hub Max deliver a better user experience to multiple users seamlessly, setting a new standard for other premium smart home products.
Retailing for $229, the Nest Hub Max sits firmly at the higher end of the smart home device market. It is also worth noting that Google opted not to update its smart speaker lineup this time, after refreshing it at last year’s I/O event. The camera-less Nest Hub, now discounted to $129, will likely find a second life in hotel rooms as more hospitality brands start to deploy smart home devices to offer guests a central device to control their rooms (lights, curtains, etc.) and request services.
Podcasts Get Another Push
Interestingly, Google added some new features for podcasts as it continues to expand into the rising medium. Google announced it will begin to index podcasts so that the search algorithm can surface relevant episodes of a given program based on the full content, not just the title. Native playback support will also be added to Google search, so users will be able to listen to an episode right in the search results or save it for later listening. In addition, podcasts got a shoutout in the demo of Google Assistant’s new driving mode: users will be able to seamlessly resume the podcasts they were listening to before they started driving, and other podcasts will be part of the personalized media recommendations Google Assistant serves in that mode.
The company didn’t spend much time on these announcements, so it’s unclear for now what this means for podcast creators, who currently track listens through analytics programs to gauge what listeners are interested in and respond to. In the future, creators may need to start factoring in SEO tactics such as keyword optimization, and possibly paid search ads, to grow their audience through Google search.
Nevertheless, these new features will no doubt help podcasts grow their listener bases and scale up. Google has been getting serious about entering the podcast market after launching its native podcast app on Android last June. Although Android has a larger user base worldwide than iOS, the majority of people who listen to podcasts currently use an iPhone, which means there is a massive untapped audience for Google and other Android players to cultivate and monetize in the future. Adding more support for podcasts is no doubt a positive sign for the market, and Google could leverage its mature digital ad operations to offer a central ad platform that helps the fragmented podcasting market scale.
Android Q Brings Incremental Improvements
As with every Google I/O event, a new version of Android was unveiled on stage. Codenamed Android Q, this new mobile OS promises incremental improvements, support for mobile innovations like foldable devices and 5G networks, a new dark mode, some new security & privacy features like over-the-air updates of security modules, and some new features for managing device usage. One key addition to Android Q is the Live Caption feature, which leverages the speech-to-text technology Google developed to caption any media. This means that Android Q users will be able to get subtitles for all videos on their phone, be it from their own camera roll or social media. Although positioned as an accessibility feature for those with hearing impairment, this feature could also be useful for consuming video content in public settings, especially among caption-loving Gen Z, who turn on subtitles to help them focus.
More importantly, Live Caption works well because Google is moving many of the machine learning tasks from the cloud to on-device processing, which not only boosts the speed but also boosts data security since no information would be shared with the cloud servers. In fact, on-device processing is something that Google repeatedly mentioned throughout the keynote as a way to protect user privacy.
Along with Android Q, Google also launched a new set of Pixel smartphones. Keeping with the two-tone design that Pixel phones are known for, the Pixel 3a and Pixel 3a XL differ from previous Pixel products in that they are positioned as decidedly mid-range smartphones, priced at $399 and $479 respectively. This is an interesting strategy for Google, since the U.S. smartphone market has largely polarized into the premium, $700+ segment and the low-end, budget segment.
The new Pixel devices attempt to leverage Google’s AI-driven software to compensate for non-premium hardware components. For instance, while they may lack the fancy dual or triple back cameras that most flagship handsets now feature, Google promises to improve photo quality by applying its expertise in computational photography. Not to mention all the new Google Assistant features that are set to roll out to Pixel users first, such as the aforementioned “Duplex on the Web,” AR navigation in Google Maps, and using Google Assistant to filter out spam calls.
All things considered, it seems unlikely that these two Pixel models will be enough to convince premium smartphone users (many of whom are loyal iPhone users) to trade down, but perhaps Google will be able to upsell some Android users in the below-$300 market. Although Google is rolling the new models out to more U.S. carriers, on a global level Pixel still lacks the kind of massive investment in marketing and distribution needed to make it a major contender in the already saturated smartphone market.
Extensive Talks on Privacy and Accessibility
Responding to the ongoing “techlash” and intensifying regulatory scrutiny, Google, like every other major tech company, made user privacy a key talking point throughout the event. New data privacy features were introduced to protect data security and help users better manage their personal data. Besides the aforementioned switch to on-device processing for machine learning tasks, Google also consolidated data management features into the user profile and made them more readily accessible. There is now an auto-delete option, rolling out with Android Q, that allows users to wipe their personal data, including search history and location data, every 3 or 18 months. In addition, incognito mode, already popular in web browsers, will be added to Google Maps & YouTube to allow users to temporarily opt out of being tracked. Google also shared some backend structural changes, such as a new federated learning approach that lets it collect less data while still improving the machine learning models behind features like Google Keyboard’s predictions.
While it is smart for Google to address privacy concerns with a slew of new features, it also strategically sidestepped talking about its core business model and its sprawling ad operations, which very much still rely on large-scale data collection. What Google demonstrated time and again throughout the keynote, however, is that Google services will work much better for you if they have your data. All the nice personalized features of Google Assistant rely on knowing individual user preferences and identities. As such, Google is essentially asking users to evaluate the trade-off between personal data and better services and make a decision for themselves.
To a similar end, Google also devoted a significant portion of the stage time to the AI-driven accessibility features it has developed on top of Google Assistant. Whether it’s the Live Transcribe feature that turns nearby speech into text for the hearing-impaired, a real-time feature that relays phone calls into text, or a text-to-speech feature in Google Lens that reads signs aloud for people who cannot read, Google clearly wanted to drive home the message that it is devoted to making its technology work for everyone. While this is certainly a commendable cause that will earn Google some consumer goodwill, Google is not just doing this for charity. Working on edge cases in speech recognition is a good way for Google to train its AI models and improve its algorithms, while empowering illiterate populations is a key strategy for capturing the next billion digital users in developing countries.
What Brands Need To Do
In light of all these new announcements from Google, brand marketers need to take note of the importance Google is placing on AR, visual search, and voice assistants and adjust their innovation strategies to reflect the changing user behaviors. Specifically, there are several things that brands can start exploring today:
- Create 3D assets of products or brand IP.
- Pay more attention to your online reputation.
- Explore utility-driven voice experiences and make websites compatible with Google Duplex.
- Consider smart display use cases for your brand.
- Invest in podcasts and consider podcast ads, especially for direct-response campaigns.
Want To Know More?
If you are keen to learn more about Google’s latest announcements and what they all mean for your brand, or just want to chat about how to adapt to the changing user behaviors, the Lab is here to help. You can start a conversation by reaching out to our VP of Client Services, Josh Mallalieu (firstname.lastname@example.org).