Actions on Google: I/O recap

Google I/O took place two weeks ago. Shoreline Amphitheatre was packed with developers, sessions, announcements, and good weather. One week before I/O, I was celebrating my first day as an intern at Google.

Joining Google Developer Relations one week before I/O was like hopping onto a supersonic train. Everything was moving fast, and I could feel the excitement among the Dev Rel folks who were enjoying the ride. I first attended I/O last year as a GDG organizer. Being behind the scenes will definitely be one of the highlights of my internship; I got to see the effort and commitment to excellence that go into putting on an event of this magnitude.

At I/O 2017, one of my favorite announcements was centered around the Google Assistant.

We announced that Actions on Google now extends to mobile devices (iOS and Android). This means you can reach a much larger user base and engage it in new ways. For more information, please check actions.google.com.

We’re also excited to have released:

  • Rich responses
    Engage users with cards, lists, carousels, and suggestions on screen-enabled devices
  • Support for transactions
    Assistant apps now allow starting and completing purchases
  • Actions on Google console
    A centralized location where you can monitor and test all of your Assistant apps
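To make the rich-responses bullet above concrete, here is a sketch of the kind of JSON payload a fulfillment webhook could return to render a simple spoken response plus suggestion chips. The field names follow the 2017-era API.AI v1 webhook format with the Actions on Google extensions under `data.google`; treat them as assumptions and verify against the current reference before use.

```python
# Sketch: building a rich-response payload for an Assistant app.
# Field names assume the 2017-era API.AI (v1) webhook format with
# Actions-on-Google extensions under data.google -- verify against the docs.

def build_rich_response(speech_text, suggestions):
    """Return a webhook response dict with one simple response and chips."""
    return {
        "speech": speech_text,       # fallback for plain API.AI agents
        "displayText": speech_text,  # shown on screen-enabled devices
        "data": {
            "google": {
                "expectUserResponse": True,
                "richResponse": {
                    "items": [
                        {"simpleResponse": {"textToSpeech": speech_text}}
                    ],
                    # Suggestion chips the user can tap instead of speaking
                    "suggestions": [{"title": s} for s in suggestions],
                },
            }
        },
    }

payload = build_rich_response("Want to hear another joke?", ["Yes", "No"])
```

The same dict could carry a basic card or carousel as additional entries in `items`, which is how one payload serves both voice-only and screen-enabled surfaces.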

It’s going to be interesting to see what new applications you will build for this emerging platform. If you want to learn by writing code, check out the two codelabs that Ido Green wrote:

  1. Animal jokes — a 101 codelab that shows you how to work with api.ai and create a nice bot that tells you funny animal jokes. The full source code is in the Animal joker repo.
  2. Your First App for Assistant With Webhook — Bitcoin Info — this codelab takes you to the next stage, with the ability to call your own server (and logic) using a webhook. You can also copy the agent and the webhook code from the bitcoin-info repo.
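To give a feel for what a fulfillment webhook like the one in the Bitcoin Info codelab does, here is a minimal, self-contained server sketch using only the Python standard library. The port, the `currency` parameter name, and the hard-coded reply are illustrative, and the request/response fields assume the API.AI v1 webhook format; this is not the codelab's actual code.

```python
# Minimal webhook sketch using only the standard library.
# Assumes the API.AI v1 webhook format (result.parameters in the request;
# speech/displayText in the response). Parameter names are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fulfill(request_body):
    """Build a webhook response dict from a parsed API.AI request body."""
    params = request_body.get("result", {}).get("parameters", {})
    currency = params.get("currency", "USD")  # hypothetical parameter name
    text = "Bitcoin price lookup for %s goes here." % currency
    return {"speech": text, "displayText": text, "source": "bitcoin-info-demo"}

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body API.AI POSTs to the webhook
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = json.dumps(fulfill(body)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

In a real deployment the webhook must be served over HTTPS on a publicly reachable URL, which is what you register in the API.AI fulfillment settings.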

Additionally, you can watch our step-by-step video tutorial about how to build your app for the Assistant.

Also, watch the recordings of all the Actions on Google sessions from I/O ’17:

VUI:

Imagine trying to read a website through a straw… mentally processing an audio signal feels a lot like that. In our new era of computing, where advancements in conversational UIs and artificial intelligence are enabling users in new ways, it’s easy to be tempted to take an existing visual-based app or GUI and simply “convert it to voice”. But while voice brings with it the potential for speed and simplicity, hands-free experiences can easily become overly complicated when based on another mode of interaction. Get the scoop on what types of use cases transfer well to voice interactions and why.

When building a Conversation Action for the Google Assistant, consider that so-called “error events” don’t have to be treated as edge cases. Instead, these can become opportunities to forge meaningful exchanges that leverage users’ mental models of how everyday conversations unfold. This talk will frame a new way of approaching fallback and repair, in which so-called errors become organic turns in the dialog. We will provide design tips for your action logic to allow the conversation to move forward naturally.

While the medium may change, the story remains the key element in engaging audiences. Come learn from the storytelling masters at PullString how they apply the learnings from over a decade working at Pixar to the new medium of conversational interfaces.

As you gear up to build great Conversation Actions for the Google Assistant, find out how to leverage one of the principles that practically defines what it means to be “conversational” — our ability to take mental leaps, to draw inferences, to be informative, to feel like we’re making progress. Conversation is systematic, but to the surprise of many technologists, this conversational principle actually defies the rules of formal logic. So come find out the non-literal truth of everyday back-and-forths; take advantage of this principled, built-in “hack” of spoken language; and delight your users with the intuitive ease of everyday conversation.

API.AI:

API.AI helps developers build unique conversational experiences for their products, services and devices. It provides a toolset for designing interactions with users and a powerful natural language understanding engine to process user requests. By using API.AI, developers can build actions for Google Home, develop and launch chat bots or add voice to their robots. In this session we will explore how to use API.AI to design, build and analyze advanced conversational UX that may work across different platforms.
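The request API.AI forwards to your fulfillment already contains the output of its natural language understanding engine: the matched intent, the resolved query, and the extracted parameters. Here is a sketch of pulling those out; the field names (`resolvedQuery`, `metadata.intentName`, and so on) come from the API.AI v1 request format and should be verified before relying on them, and the sample intent and parameter names are made up.

```python
# Sketch: reading the NLU engine's output from an API.AI v1 webhook request.
# Field names assume the v1 request format -- check the current reference.

def parse_nlu_result(request_body):
    """Extract the query, matched intent, action, and parameters."""
    result = request_body.get("result", {})
    return {
        "query": result.get("resolvedQuery", ""),
        "intent": result.get("metadata", {}).get("intentName", ""),
        "action": result.get("action", ""),
        "parameters": result.get("parameters", {}),
    }

# Hypothetical request body for an animal-jokes agent
sample = {
    "result": {
        "resolvedQuery": "tell me a cat joke",
        "action": "tell_joke",
        "parameters": {"animal": "cat"},
        "metadata": {"intentName": "animal.joke"},
    }
}
```

Because the NLU work is done before the request reaches you, the same fulfillment logic can sit behind Google Home, a chat bot, or any other platform API.AI supports.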

Development:

The Google Assistant’s mission is to help users get things done in their world. This session will explain how to plug into the Google Assistant services ecosystem. We’ll cover everything from understanding the business use case and high-level user interface design to implementation and growing usage. By the end of this session, you should have a better understanding of the Assistant service ecosystem and how to get started. In particular, the presenters will show how to use the Transactions API and how to add rich experiences on mobile using suggestion chips and cards.

Multimodal interactions are coming to life on a wide range of surfaces and operate on a set of rules defined in your interaction model. But the tenets of a multimodal interaction vary wildly depending on whether you’re designing for a mobile device, a TV, a car, etc. We’ll delve into some of the things you need to consider when building a model for various surfaces.

Come learn how to use the Actions on Google platform for home automation integrations. You will walk away from this session with the skills necessary for the Google Assistant to control your smart home devices.

If an action speaks in the forest and no one hears it, did it make a sound? The focus for this talk is discovery — we want users to be able to find the awesome Actions on Google experiences you’ve built. We’ll talk about triggering, directory listings, submissions, and overall best practices for getting your experience discovered by Google Assistant users.

Users are turning to the Google Assistant to help with more real world tasks like scheduling appointments, booking services, and shopping. Enable your users to make purchases and set appointments with the Actions on Google platform. This talk will follow concrete examples, detailing elements like payments, user authentication, and order lifecycle.

There were many other interesting announcements at I/O; I would encourage you to read more about them here.