Google I/O 2018: AI for everyone

I attended Google I/O 2018 in Mountain View, CA this year. It was a massive event with about 7,000 participants from all over the world. Being a dev myself, I was interested in basically everything Google I/O had to offer. In particular, I wanted to learn how all of this can be applied to support a better cruise experience for our guests.

Opening Session of Google I/O 2018 at the Shoreline Amphitheater, Mountain View, CA

The opening keynote on Tuesday morning by Google CEO Sundar Pichai set the scene for the upcoming days. Not surprisingly, it was all about AI, as the whole industry is rapidly moving towards Voice-Enabled Interfaces, Machine Learning (ML), Data-Driven Adaptive User Interfaces, and Predictive Systems, to name a few. Surprisingly (at least for me), the keynote had nothing about Fuchsia (the somewhat mysterious new mobile operating system and potential Android alternative/successor) and little on Flutter, a new cross-platform mobile development framework based on Google's programming language Dart. However, there were at least a couple of (quite impressive) sessions on Flutter, and a new Flutter beta was released during the conference.

AI for good

Back to the keynote: Pichai presented a couple of showcases to demonstrate Google's recent advances in the AI area. Knowing there is a lot of debate about how data is being used by global Internet companies, he came up with examples of how Google is using data and AI for good. To me, he made this case very convincingly with various examples from the health sector. It was impressive to see how AI can help speech-impaired people communicate using a Morse code keyboard in combination with Google's new ML-powered text prediction. Google has made the same text prediction technology available in Gmail as an experimental feature called "Smart Compose" (go to the General tab in your Gmail settings, scroll down, and enable "experimental access").

Google Duplex: Let the AI make your appointments

Another demo was scary and impressive at the same time (as with many advanced AI demos today). Pichai showed how a phone can arrange an appointment with a hair salon on behalf of its user, using a natural language dialog over the phone. The whole conversation was dispatched through Google Assistant and driven by some AI in the cloud. The software arranged the appointment without any human intervention. The hairdresser on the phone did not even realize that she was talking to a machine.

Demo of Google Duplex

With this example, the Google AI has perfectly passed the Turing test (a method suggested by Alan Turing in 1950 to tell whether a computer program is 'intelligent'). Two obvious questions arise: What does it mean for us if you can no longer tell whether it is a human or a machine you are talking to? What happens if on the other end there is not a human but another AI answering? These questions have been around for quite a while (I remember all the discussions we had about this in the early 90s when I studied AI), but they are more relevant today than ever.

Google has published some background information and more examples on their blog.

What’s new in Android (Hint: it’s AI!)

Of course, AI also has a major impact on future Android releases. The upcoming Android P release will include intelligent battery and brightness management based on usage patterns learned by the AI included in Android. If it lives up to its promises, it will give us an extra 30–40% of battery life. However, the real game changer might be Android P's capability to predict user behavior and to recommend so-called App Actions. Given an app has been designed to use this new service, Android might display certain app actions depending on relevance or user patterns without launching the app. This could be on the home screen, during a search, or even through Google Assistant starting an interactive dialog ("Your excursion starts in an hour, do you need some more information?"). You might think of App Actions as a more intelligent and more interactive kind of today's push notifications.

Machine Learning in Apps

Simple ML Kit demo app: phone detects a volcano using a local model

To create custom AI-based interactions in an app, Google announced the availability of a machine learning SDK called ML Kit for iOS and Android. The idea behind ML Kit is that developers can relatively easily use an ML model (previously trained in the cloud) in their app. Once integrated, the app can use the model offline, e.g. to detect objects in photos or to recognize text in images. I tried this myself in a codelab session later during the conference and was impressed by how far you can get with some simple extensions to your app.
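As a rough illustration of how little code this takes, here is a minimal sketch of on-device image labeling, based on the Firebase ML Kit APIs as they looked around I/O 2018 (the function name and the bitmap source are my own; class and method names may differ in later SDK versions):

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Label the content of a photo using the on-device model, no network required.
fun labelImage(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val detector = FirebaseVision.getInstance().visionLabelDetector

    detector.detectInImage(image)
        .addOnSuccessListener { labels ->
            // Each label comes with a confidence score between 0 and 1.
            for (label in labels) {
                Log.d("MLKitDemo", "${label.label} (${label.confidence})")
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKitDemo", "Labeling failed", e)
        }
}
```

The same task-based pattern applies to the other detectors (text recognition, face detection, barcode scanning), and cloud-backed variants are available for higher accuracy when the device is online.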

AI and IoT

Other announcements in that area included a Google Assistant with a visual interface (similar to Amazon's Echo Show). It was previously showcased at CES in January this year and will be in stores from various vendors this summer. The devices look promising, but I was even more impressed by how Google started the prototype development in a very agile fashion using some electronics and Lego™ bricks. In fact, Google makes IoT prototyping fun with Android Things. It allows you to prototype novel kinds of hardware devices that can either operate completely offline or be connected to existing cloud services. As the parts are certified by Google, the transition from a prototype to mass production will be much easier. And best of all: I/O participants received an Android Things starter kit to experiment with, after taking part in a typically Google-like scavenger hunt.

IoT prototyping with Lego™ and Android Things
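For a taste of how familiar the programming model feels to an Android developer, here is a minimal sketch that blinks an LED via the Peripheral I/O API. The activity name, the 500 ms interval, and the GPIO pin name "BCM6" (a Raspberry Pi 3 pin) are assumptions for illustration; the peripheral manager class was renamed between the developer previews and the 1.0 release shipped at I/O, so the exact entry point depends on your SDK version.

```kotlin
import android.app.Activity
import android.os.Bundle
import android.os.Handler
import com.google.android.things.pio.Gpio
import com.google.android.things.pio.PeripheralManager

class BlinkActivity : Activity() {

    private lateinit var led: Gpio
    private val handler = Handler()

    // Toggle the LED every 500 ms.
    private val blinkRunnable = object : Runnable {
        override fun run() {
            led.value = !led.value
            handler.postDelayed(this, 500)
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // "BCM6" is a pin name on the Raspberry Pi 3; other boards use other names.
        led = PeripheralManager.getInstance().openGpio("BCM6")
        led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW)
        handler.post(blinkRunnable)
    }

    override fun onDestroy() {
        super.onDestroy()
        handler.removeCallbacks(blinkRunnable)
        led.close()
    }
}
```

Apart from the pio package, this is a plain Android activity lifecycle, which is exactly what makes prototyping with Android Things feel so approachable.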

AI powered Google Maps

Google also spent a fair amount of time in the keynote presenting how its core products like Google Maps and Google Photos will leverage advances in AI technology. Google Maps will turn from simply showing places and directions into an intelligent assistant app built on location-based recommendations. Google demoed an amazing integration of Street View, Maps, and Augmented Reality in the Maps application. It can, for example, automatically detect text in street imagery to provide context-sensitive information or real-time translations to the user. Although not completely new, this time Google ran the demo on a standard Android device rather than in the context of Project Tango. All this will make Google apps very useful for travelers, cruisers of course included.

Digital Wellbeing

And if you are by now totally confused by all the tech and things going on on your device, Google also has something new for you: they call it Digital Wellbeing. For instance, the screen can switch into a black-and-white Wind Down mode in the evening, turning off everything that might distract you. Or you might want to track your phone and app usage in the new Android Dashboard. And finally, the new Shush gesture quickly switches your phone into "Do Not Disturb" mode.

Join us!

All in all, Google I/O 2018 was an absolutely amazing and inspiring conference for me. Lots of the ideas and technology presented, including Machine Learning, Conversational Interfaces, IoT, and Augmented Reality, may well influence the digital experience of the smart cruise ship of the future. And yes, if you want to join us on that journey, we are always looking for talent!