Build Your First Assistant App For Google Home
In the past few months, I heard someone smart say that “the future is artificial intelligence first”.
Artificial intelligence is about making computers “smart” so they can think on their own and be even more helpful to us. It’s clear that Google has been investing heavily in the areas of:
- Machine learning — Teaching computers how to see patterns in data and act on it.
- Speech recognition and language understanding — being able to understand you when you talk, with all the little differences and nuances.
These days we can see it all come together in the Google Assistant. It lets you have a conversation with Google and be more productive. In this post, we will see how it all works by building a new Action for Google Home. At the same time, we will end up with a nice bot that we can later integrate with other services (e.g. Slack).
Google Home is a voice-activated speaker that users keep in their home. The Google Assistant is the conversation layer between users and Google: they get things done by talking with the Assistant. There are many things users can do just by using the Assistant directly. To learn more about the Assistant, check out the short video below.
Actions on Google allows developers to extend the assistant. That is what we are going to focus on today in our animal joke example. This post will walk you through creating your own Action on Google with API.AI.
Btw, you can see the video version of this post below.
We are going to use API.AI, a conversational user experience platform; in other words, it helps us ‘talk’ to machines in a way they can understand us better.
Let’s start from the end. Please click on the image below and play with our bot to see what is going on. You can start with something like: “please tell me a joke about a dog”
How does a conversation action work?
The user needs to invoke your action by saying a phrase like “Ok Google, talk to Animal Joker”. This tells Google the name of the action to talk to.
From this point onwards, the user is talking to your conversation action. Your action generates dialog output, which is then spoken to the user. The user then makes requests, your action processes them, and replies back again. The user has a two-way dialog with your action until the conversation is finished.
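To make this turn-by-turn exchange concrete, here is a small Python sketch of a single turn. The field names loosely follow API.AI’s v1 request/response format, but treat them (and the joke) as illustrative, not as the exact wire format:

```python
# A simplified sketch of one conversation turn. The dict shape loosely
# mirrors API.AI's v1 format; field names here are illustrative.
user_request = {
    "result": {
        "resolvedQuery": "please tell me a joke about a dog",
        "parameters": {"Animal": "dog"},          # entity value API.AI extracted
        "metadata": {"intentName": "Tell_Joke"},
    }
}

def handle_turn(request):
    """Process one user request and produce the reply to be spoken."""
    animal = request["result"]["parameters"]["Animal"]
    return {"speech": f"Here is my best {animal} joke..."}

reply = handle_turn(user_request)
print(reply["speech"])  # one turn done; the dialog continues until the user quits
```

The important idea is the loop: every user utterance arrives as structured data, and every reply goes back as structured data that the Assistant speaks aloud.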
See below, if you like diagrams to ‘see’ what we explained above.
What is API.AI?
API.AI lets the machine understand what the user is trying to say and provide a response. You type in example sentences of things a user might say.
You can specify what values you need to get from the user. It then uses machine learning to understand the sentences and manage the conversation.
Click the following link to login to API.AI.
After the login you can create your first agent. You will need to:
- Give your agent a name.
In our case, it will be “AnimalJoker”. Please note that the agent name cannot contain spaces between the words.
- Give a short description so other users will know what this action is going to do.
In our case, type: “An action that tells animal jokes. But only the good ones”.
- Click on ‘Save’.
It’s the button in the top-right corner of the screen.
What are entities?
Entities are the values we are trying to capture from the user’s phrases, a bit like filling out a form by requesting details from the user. API.AI extracts them and keeps asking follow-up prompts until it has them all. This is how an entity looks in API.AI:
We will create an Animal entity.
The first step is to click on the ‘Create Entity’ button (it’s at the top-right corner).
Next you should start typing animals’ names.
The final results should look similar to the image below.
Things to remember:
- You should ‘help’ API.AI’s machine learning algorithm train itself by providing synonyms. For example, dog could also be puppy. In our case, you can give it only 2–3 animals; that will be fine for now.
- In the real world, try to give many examples so it will cover more cases.
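To make the synonym idea concrete, here is a minimal Python sketch of our Animal entity as a value-to-synonyms mapping. This is a hypothetical in-memory model of what API.AI stores for you; the real export format has more fields:

```python
# A hypothetical in-memory version of the Animal entity: each canonical
# value maps to the synonyms API.AI should accept for it.
animal_entity = {
    "dog": ["dog", "puppy", "doggy"],
    "cat": ["cat", "kitten", "kitty"],
}

def resolve_animal(word):
    """Return the canonical animal for a user's word, or None if unknown."""
    for value, synonyms in animal_entity.items():
        if word.lower() in synonyms:
            return value
    return None

print(resolve_animal("puppy"))  # prints: dog
```

This is why synonyms matter: the user can say “puppy”, and the entity still resolves to the canonical value “dog” that our jokes are keyed on.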
What is an intent?
An Intent is triggered by a series of “user says” phrases. This could be something like “please tell me an animal joke” or “give me a recipe for a burger”.
You need to specify enough sentences to train API.AI’s machine learning algorithm. Then even if the user doesn’t say exactly the words you typed here, API.AI can still understand them!
You should create separate intents for different types of actions though.
Don’t try to combine all of them together.
In our example, we will create only two intents:
- Tell_Joke intent — This intent will handle the jokes.
- Quit intent — This intent will handle the part when the user wishes to finish the action.
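As a mental model, each intent is just a named bundle of example phrases plus a flag or two. Here is a hypothetical Python summary of the two intents we are about to build (the field names are ours, not API.AI’s export format):

```python
# A hypothetical summary of the two intents we are about to build.
# API.AI trains on the example phrases, so wording the user never typed
# exactly can still match the right intent.
intents = [
    {
        "name": "Tell_Joke",
        "user_says": [
            "please tell me a joke about a dog",
            "give me a cat joke",
        ],
        "end_conversation": False,
    },
    {
        "name": "Quit",
        "user_says": ["bye bye", "bye animal joker"],
        "end_conversation": True,
    },
]

for intent in intents:
    print(intent["name"], "examples:", len(intent["user_says"]))
```

Keeping the intents separate like this is exactly the design advice above: one intent per type of action, each with its own training phrases.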
Build the “Tell_Joke” intent
Now we have our new $Animal entity. Notice the $ before the word: it’s not a mistake. This is how we will refer to our new entity from now on. Think of it as a special sign showing that we are referring to our entity and not just another animal.
It’s time to create the intent that will tell us the jokes.
First, click on the ‘Create Intent’ button.
Second, start typing a few sentences that you would use to ask for a joke. For example, “please tell me a joke on dogs”. Type a few sentences so API.AI can start training its algorithms. You can see that while you type, API.AI automatically recognizes when a phrase includes one of the entities and highlights it.
See below how it should look.
Next, we skip the ‘events’ part. In the ‘Action’ section, make sure our @Animal entity is marked as required, and in the prompt line type “Please tell me which animal you like”, so that when the user doesn’t name an animal, it is clear to her that we need this entity.
Finally, in the ‘Text Response’ section we fill in our most amazing jokes. You can take a few ideas from the image below.
Please note that we use the $Animal value in our response in order to create a joke based on the animal the user asked about.
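Conceptually, the $Animal reference works like simple string templating: API.AI substitutes the captured parameter into the response before speaking it. Here is a rough Python sketch of that idea (API.AI does this substitution for you; the joke text is just an example):

```python
# Roughly how a $Animal placeholder gets filled in (illustrative only;
# API.AI performs this substitution internally).
def render_response(template, parameters):
    """Replace each $Name placeholder with its captured parameter value."""
    for name, value in parameters.items():
        template = template.replace(f"${name}", value)
    return template

joke = render_response(
    "What do you call a cold $Animal? A pup-sicle!",
    {"Animal": "dog"},
)
print(joke)  # prints: What do you call a cold dog? A pup-sicle!
```

So one templated joke can serve every animal in the entity, which is why we capture $Animal as a required parameter in the first place.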
After you fill all your amazing jokes, don’t forget to click on the ‘save’ button on the top-right corner of the screen.
Build the “Quit” intent
A good design principle is to allow our user to end the conversation.
Click the ‘Create Intent’ button again. Then, start typing a few sentences that will end the conversation. For example, “bye bye” or “bye animal joker”.
Below is how this intent should look.
Last, but not least, you need to check the ‘end conversation’ checkbox so API.AI knows to really end the conversation at this point.
We are almost done!
Btw, if you wish to load all these definitions without following this tutorial step by step, you can do it in 3 simple steps.
See the image below.
Once you import everything from the zip file, you can start to add or edit more intents and entities.
Click on ‘Integrations’ in the side menu. This will open the Agent page with all the options to integrate it with other services (e.g. chat apps, Twitter, etc.).
You have two easy and quick ways to test your creation. One is to follow the link you see under ‘Agent Page’. But don’t forget to flip the ‘Publish’ switch before you click the link to your new bot.
Another way is to click on the “Actions on Google” box under “One-click integration”. This will let you test your work as it will run on Google Home.
Once you click on ‘Actions on Google’ you will see this dialog:
Fill the invocation name and click on ‘Preview’ button.
You will get a screen that lets you talk with the simulator. See the image below.
The cool aspect of the web simulator is that it gives you every answer both in English, as text and speech, and on the right side as the full JSON object.
- There are some policies about what an Assistant app can be named and what it can support.
- Only Assistant apps with a well-defined invocation are supported; the trigger sentence should be clear and short.
- The guidelines explain all the rules around them.
In the next post, we will go a bit deeper into webhooks and the ability to use your own engines to provide answers with useful information.
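As a small taste of what a webhook involves: it is just an HTTP endpoint that receives API.AI’s JSON for each matched intent and returns the reply. Here is a minimal, framework-free Python sketch of the handler logic. The field names follow API.AI’s v1 webhook format, but the reply text and the function itself are hypothetical; a real endpoint would sit behind a web server:

```python
import json

def webhook(request_body: str) -> str:
    """Handle an API.AI v1-style webhook call: read the matched intent
    and its parameters, build a reply, and return it as JSON."""
    request = json.loads(request_body)
    result = request.get("result", {})
    intent = result.get("metadata", {}).get("intentName")
    animal = result.get("parameters", {}).get("Animal", "animal")

    if intent == "Tell_Joke":
        speech = f"Here is a fresh {animal} joke from my own joke engine!"
    else:
        speech = "Sorry, I only know jokes."

    # "speech" is spoken aloud; "displayText" is shown on screens.
    return json.dumps({"speech": speech, "displayText": speech})

body = json.dumps({"result": {
    "metadata": {"intentName": "Tell_Joke"},
    "parameters": {"Animal": "dog"},
}})
print(webhook(body))
```

Instead of the static ‘Text Response’ jokes we configured above, a webhook like this could pull answers from any backend you like, which is exactly where we are headed next.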
Be strong and build amazing actions!
Originally published on Ido Green