Fast Forward: Google Shows Off AI Prowess at Annual Developer Conference
And Everything Else That Brands Need To Know from Google I/O 2018
Editor’s note: This is an abridged edition of our Fast Forward newsletter featuring major trends we spotted this week at Google I/O and industry-specific brand suggestions. For the full version, please contact our VP of Client Services, Josh Mallalieu (josh@ipglab.com), to request it.
Google kicked off its annual I/O developer event in Mountain View, CA this Tuesday and announced a string of updates ranging from the next version of Android to new Google Assistant capabilities. Underlying all the announcements made during the opening keynote was a single throughline: Google’s AI superiority. CEO Sundar Pichai began by recognizing “the deep responsibility” to use its technological prowess for the greater good, before quickly touting how advances in Google’s AI research are being used in healthcare to help doctors make diagnoses and predict patient outcomes.
Throughout the keynote address, Google demonstrated how eagerly it is applying its AI and machine learning tools to all aspects of its products and services so as to provide users with a more intuitive and personalized experience. From improving the accessibility of its services for users with disabilities to adding new AI-powered smart features to Gmail and Google Photos, AI is deeply embedded in everything that Google does nowadays. Let’s take a look, one by one, at the announcements most relevant to marketers and what brands can do in response.
What Google Announced
Google Assistant Gets Naturally Conversational & Visually Assistive
Google shared that its voice assistant is now available on 500 million devices worldwide and works with five thousand different kinds of connected home devices. While it still trails Amazon Alexa in terms of smart speaker market share, what Google lacks in domestic market penetration, it more than makes up for in global reach (now available in 80 countries, supporting 30 languages), as well as in variety. Six new voices — including one based on singer John Legend’s, all powered by WaveNet tech — will become available for users to choose from later this year. Google is also updating its voice assistant with smaller features like Continued Conversation and Multiple Actions, all serving to make conversations flow more naturally.
One of the highlights of the keynote came when Google showcased its Google Duplex feature, which allows Google Assistant to call businesses and conduct incredibly natural-sounding conversations to book reservations and find out about opening hours on behalf of users. Listen to some of the audio clips here, and you’ll likely be astonished by how good Google’s AI assistant has gotten at mimicking human speech, with all the nuances of cadence and pauses. Putting aside the potential ethical problems of fooling customer service reps with a human-passing AI, this is a major step for Google Assistant toward becoming truly conversational and carrying out real-world tasks for users.
Moreover, Google Assistant is finally getting some visual assists as well. On stage, Google demoed how smart displays — stationary, touchscreen-equipped smart speakers made by third-party manufacturers like JBL and Lenovo that were first unveiled at CES in January and are scheduled for release in July — will be able to provide visual components to Google Assistant’s replies as needed. Using a Lenovo smart display, Google had its Assistant play a specific show via YouTube TV and pull up recipes complete with instructional videos.
The visual results can also be seamlessly synced to users’ phones for a closer look and adjustments, as seen in a demo in which Google Assistant was asked to change the room temperature via a connected Nest thermostat. With those supporting visual elements, Google will be able to deliver a rich multi-sensory experience similar to that of the Echo Show or Echo Spot.
Google Lens Learns New Tricks And Gets Into Cameras
After releasing Google Lens to all Android phones via Google Photos and Google Assistant earlier this year, Google is now ready to push the visual search feature to a wider audience. Starting next week, Lens will be integrated into the native camera app on the Google Pixel phones and the LG G7, and Google says it will come to other major Android handsets soon. While this will dramatically increase the feature’s exposure and help more users discover it, it also risks turning off a wider range of Android users if it can’t deliver the results it seemingly promises, similar to the way Siri’s poor performance at launch turned away a lot of users, who simply refused to give Siri another chance despite its improvements in recent years. Given the general disappointment with Google Lens since its initial wide release back in February, Google clearly still has some work to do in terms of managing the feature roll-out and setting user expectations.
Besides the expansion in access, this update also comes with a trio of new features. First up is Smart Text Selection, which lets you copy and paste words from the real world around you, say from a restaurant menu or a road sign, into your phone. Then there is Style Match, which will finally allow Google Lens to do what Pinterest Lens has been doing for a while: surfacing similar-looking items in a shoppable format. Users can simply point Lens at an outfit, and Google will pick out the individual items, such as shoes and coats, and find similarly styled items available for purchase via Google Shopping.
Last but certainly not least, there are “real-time results,” which will enable Lens to start surfacing information proactively as soon as you open your camera. By leveraging Google’s Cloud TPUs and machine learning for quick object recognition and processing, this new feature will allow Google to digitally overlay relevant content and information on real-world posters and street signs. For example, you can point your phone at a poster for a concert, and Lens will automatically overlay a music video by that artist (from YouTube, naturally) onto that poster.
Speaking of AR, Google also unveiled an update to its ARCore SDK that comes with three major new features — Sceneform, Augmented Images, and Cloud Anchors. The first two make it easier for developers to create virtual scenes and objects for AR experiences, while Cloud Anchors will allow developers to create group AR experiences in which multiple ARCore users within a shared space can collaboratively interact with AR objects and see the live results from their own perspectives. And unlike the first two features, Cloud Anchors will be offered on both iOS and Android devices to encourage cross-device use.
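For developers curious what that shared experience involves under the hood, here is a minimal Kotlin sketch of the Cloud Anchors host-and-resolve workflow using ARCore’s Session API. It assumes you already have a configured Session and an Anchor from a hit test, and it leaves out the rendering loop and the out-of-band channel (e.g. your own backend) your app would use to pass the anchor ID between devices:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Opt the AR session into Cloud Anchors before hosting or resolving.
fun enableCloudAnchors(session: Session) {
    val config = Config(session)
    config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
    session.configure(config)
}

// Device A: upload a local anchor's visual data to Google's cloud.
// The returned anchor will eventually carry a shareable cloudAnchorId.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Device B: turn a shared ID back into an anchor in its own coordinate
// space, so both users see AR objects pinned to the same real-world spot.
fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)

// Hosting and resolving are asynchronous; poll the state each frame.
fun shareableId(anchor: Anchor): String? =
    if (anchor.cloudAnchorState == Anchor.CloudAnchorState.SUCCESS)
        anchor.cloudAnchorId
    else null // still TASK_IN_PROGRESS, or an error state
```

The notable design choice here is that the heavy lifting of matching visual features across devices happens on Google’s servers, which is what lets an iOS device resolve an anchor hosted from Android.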
Google Maps Gets AR Navigation And AI Recommendations
Google Lens wasn’t the only Google product to get a major AR upgrade. With the help of VPS (visual positioning system) and AR overlays, Google Maps users will soon be able to hold up their cameras and see clearly marked visual cues overlaid on the environment, helping them figure out exactly which street they are on and which way to turn to reach their destination, complete with a cute cartoon fox to lead the way. By introducing AR into Maps for pedestrian navigation, Google is finally tackling one of the most obvious and sought-after real-world use cases of augmented reality, one that stands a good chance of introducing mobile AR to an even wider audience.
Besides AR navigation, Google is also updating Maps with a couple of cool AI-powered discovery features. The Explore tab has been revamped, and a new “For You” tab will surface more personalized recommendations about what’s new in your area, as well as trending restaurants as determined by Google’s algorithms and your personal preferences. Google will also be adding a “match score” to indicate how likely you are to enjoy a place based on factors such as your food and drink preferences (which you’ve previously selected in Maps), the places you’ve been to, and whether you’ve rated a restaurant or added it to a list. Both features aim to aid local discovery and will make reviews on Google Maps increasingly important for businesses.
Android P Gets Major AI Boost And A Digital Wellbeing Focus
It wouldn’t be Google I/O without the unveiling of the next generation of Android. This year, Google highlighted three key aspects of its next-gen mobile OS — Simplicity (mainly through new gesture controls that streamline interactions), Intelligence (through a bevy of AI-powered features), and a new concept called “Digital Wellbeing” that aims to help users kick their phone addiction.
In terms of AI-powered intelligence, Google highlighted Adaptive Battery and Adaptive Brightness as two examples of how Android P uses AI to learn from user habits and predict usage patterns, preemptively adjusting battery usage and screen brightness to deliver a longer-lasting, better-optimized experience. It also introduced App Actions and Slices, two features that intelligently serve up prompts and key pieces of content and functionality from various apps right in the search results, making it easier for users to get things done. For example, when a user searches for the latest Avengers movie from the Android search bar, App Actions would suggest booking a ticket via the Fandango app or watching the trailer on YouTube, either of which the user can initiate with one simple tap.
Google is also hoping to make it easier for developers to add AI-powered features to their apps with the launch of ML Kit. This new SDK offers pre-built machine learning models for tasks such as image labeling, face detection, and text recognition, capabilities that are normally difficult and expensive for independent developers to build. Notably, the toolkit also includes on-device models that developers can use for free, and offline, to make their apps smarter. It is based on Google’s TensorFlow Lite and will be compatible with both Android and iOS.
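To give a sense of how little code ML Kit asks of developers, here is a minimal Kotlin sketch of on-device text recognition using the ML Kit for Firebase APIs. It assumes the firebase-ml-vision dependency is already added and a Bitmap is available from the camera; exact class and method names shifted across ML Kit’s early releases, so treat this as illustrative rather than definitive:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Run ML Kit's free, on-device text recognizer over a camera frame.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Full recognized text; blocks and lines are also on the result.
            println(result.text)
        }
        .addOnFailureListener { e ->
            println("Text recognition failed: $e")
        }
}
```

Because this particular model runs on-device via TensorFlow Lite, it works without a network connection; swapping in ML Kit’s cloud-based recognizer trades that offline capability for higher accuracy.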
Many recent think pieces and medical studies have been devoted to dissecting the widespread smartphone addiction of our time, and Google is set to address it with a set of new features. These include a Dashboard that shows how many times you check your phone and how much time you spend in each app throughout the day, a Shush feature that automatically puts the phone in Do Not Disturb mode when it is placed face-down on a surface, and a Wind Down mode that turns the screen grayscale once your scheduled bedtime arrives to discourage usage. All of these features are set to roll out with Android P this fall, starting with Google’s Pixel phones.
Google News Revamped As Standalone App To Capture Attention
Another area where Google is now leveraging AI to break in is news consumption. The company debuted a revamped Google News app that will be available across platforms, including Android, iOS, and the web. Clearly positioned as a competitor to Apple News and Facebook’s News Feed, the new Google News uses AI to analyze a constant flow of news reports in real time and organize them into narratives by topic. Users will be able to dive deep into a story and get different perspectives on it through the relevant reports curated and compiled by Google’s algorithms. The app will also integrate multimedia content, including tweets, videos, and interactive timelines, to offer a rich reading experience.
It makes sense for Google to launch a standalone, cross-platform news app to better compete with Facebook for consumer attention as well as advertising opportunities. With all the disinformation and consumer distrust aimed at Facebook, Google clearly saw an opening to provide an alternative way for people to discover news relevant to their interests off social media sites. Google says the service will draw content from trusted news sources, which sounds promising. However, a Google spokesperson says the service will not use human editors, nor will it partner with specific news organizations, which means AI will be solely responsible for the authenticity and accuracy of the news it curates.
Waymo To Launch Autonomous Ride Service In Phoenix
To close out Google’s AI-led keynote address, Waymo CEO John Krafcik took the stage and touted the superiority of its self-driving tech. According to Krafcik, Waymo is currently the only self-driving car company with a fleet of fully autonomous vehicles on public roads — 6 million miles driven so far — feeding its systems a huge amount of real-world data to learn from. The company, a subsidiary of Google’s parent company Alphabet, started on-road testing in Phoenix, AZ last year with members of its “early rider program,” and says it is looking to officially launch an on-demand ride-hailing service in that city “later this year.”
Beyond that, nothing particularly new about Waymo was announced. Nevertheless, it nicely rounded out Google’s theme of AI dominance in this year’s keynote. By touting its lead in the increasingly promising self-driving vehicle domain, Google is building a solid narrative of a company leading AI research and applying it for users in all manner of everyday tasks: suggesting a restaurant that Google thinks you will like, calling that restaurant for a reservation, and, one day soon, driving you there.
What Google Didn’t Say
Much as what Google announced reveals its strategy, so does what it chose to leave out of the keynote address. There was zero mention of VR or the Daydream VR headset, which took up significant stage time in the past two years, signifying Google’s dwindling confidence in consumer VR at the moment. Also missing from the keynote was Wear OS, or anything resembling a wearable strategy from Google, leaving that market wide open for Apple to take.
Unlike the two previous years, this year’s keynote featured no dedicated section for YouTube or Google Home, only a few mentions in passing. Most notably, barring a brief demo of a Lenovo smart display for Google Assistant, there were no hardware demos or announcements throughout the entire event. Overall, Google’s hardware strategy remains unclear, and it is getting harder and harder to disassociate all the wonderful AI-powered services from the devices they are supposed to run on. Save for a few truly cross-platform Google services like Maps and Gmail, it seems doubtful that Google will be able to realize the full potential of its AI-powered services if it cannot deliver a winning hardware/service combo.
What Brands Need To Do
Overall, this Google I/O event was a strong showcase of how Google is able to leverage its free, data-capturing services to improve its AI capabilities. AI and machine learning are becoming instrumental in delivering an intuitive and personalized user experience, and the key takeaway for brands is to start utilizing the developer tools available, such as Google’s ML Kit and Amazon’s SageMaker platform, to incorporate AI-powered automation and prediction into their digital experiences.
For example, a CPG brand should look into using visual search to drive product discovery and facilitate purchases. Some early-adopting retailers have been applying machine learning to their logistics operations and CRM systems, and now is the time to deliver a more personalized shopping experience for customers.
The stunning Google Assistant demos, while still far from an official roll-out, offered a glimpse into just how sophisticated Google’s AI has become at carrying out natural-sounding conversations. As conversational technology continues to mature and starts to incorporate visual components, it is time for brands to create richer voice experiences, or to upgrade existing ones with more conversational capabilities such as multiple actions and visual elements as needed. For fashion and beauty brands especially, visual elements will be a great way to showcase products and tutorials alongside the voice experience.
The continued rollout of Google Lens and the addition of AR features to both Google Lens and Google Maps present new brand discovery opportunities in the local, offline context. As AR continues to blend digital information with our physical surroundings, there is a real opportunity for brands to surface additional content and ecommerce channels in a contextually relevant manner.
For entertainment brands in particular, these updates present a myriad of marketing opportunities. Google Lens will soon be able to virtually overlay additional content, such as movie trailers and music videos, onto posters and billboards, offering entertainment brands a chance to further engage interested consumers. AR-powered navigation, meanwhile, could work wonders for theme parks and outdoor festivals.
Want To Know More?
If you’d like a customized deep-dive into how your brand can best leverage Google’s tools and services to effectively reach customers, please reach out to our VP of Client Services, Josh Mallalieu (josh@ipglab.com) to start a conversation.