Personalize for each user & device: the magic Conversation object

Eliza Camber
Google Developer Experts
6 min read · Mar 8, 2019

What sets a good agent apart is personalizing the experience for each user: adapting to their needs and offering the best experience for whichever device they are currently using. With Actions on Google (AoG), customizing responses, storing information, and adapting the medium of the displayed response can all be achieved by learning how to leverage the Conversation object.

If you’re already familiar with the DialogflowConversation object and want to see the code in some real-world examples, jump here.

If you are just starting with Actions on Google and Dialogflow, you’re going to come across an object named conv (DialogflowConversation) a lot. You’re also going to see that the starting code in your project’s webhook is quite different from the documentation. Both platforms are very new and constantly improving, so that’s natural.

Snippet from the documentation:

function simpleResponse(conv) {
  conv.data.count = 1;
  conv.ask('Hi there, what can I help you with today?');
}

Starting code

// ...
const agent = new WebhookClient({ request, response });

function welcome(agent) {
  agent.add(`Welcome to my agent!`);
}

let intentMap = new Map();
intentMap.set('Default Welcome Intent', welcome);
// ...
agent.handleRequest(intentMap);
});

So where do I find that conv object? It’s actually hidden inside your agent. All you need to do is add this line after you initialize the agent:

let conv = agent.conv();

Note that the conversation object is not a constant. Moreover, the conversation object is available only for the Google Assistant. If you test this in the Dialogflow simulator, it will always return null. If you try it on the Actions on Google simulator though, it works like a charm. The reason is that there are two different libraries involved: dialogflow-fulfillment and actions-on-google.
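Because conv can be null, it’s worth guarding against that before using it. A minimal sketch (the fallback message is just an illustration):

function welcome(agent) {
  let conv = agent.conv(); // null when the request isn't from the Google Assistant
  if (conv === null) {
    // Plain Dialogflow response for non-Assistant integrations
    agent.add(`Welcome to my agent!`);
    return;
  }
  conv.ask(`Welcome to my agent!`);
  agent.add(conv);
}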

Right: Testing on the Dialogflow platform. Left: Testing on the Actions on Google platform

Storage

There is a great article by Jeremy Wilken on storing session data and storing data between sessions, so I won’t go into depth here.

In short, there are 2 types of storage within Actions on Google (see the sketch after this list):

  1. session data; this data is only available within the current session → conv.data
  2. data between sessions; this data is available for a particular user across multiple sessions → conv.user.storage
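A minimal sketch of the two (the property names score and favoriteColor are just examples):

// Session-scoped: gone once the conversation ends
conv.data.score = (conv.data.score || 0) + 1;

// User-scoped: persists for this user across conversations
conv.user.storage.favoriteColor = 'teal';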

Moreover, if you wish to access the user’s data from another platform, or store it somewhere else, you can use either Firebase(1)(2) or your own backend.

⚠️🛑

In some countries, if you want to access or save information in userStorage, Firebase, or your own backend, you must use the Confirmation helper to ask the user for consent, and obtain it, before you can start storing information.
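A minimal sketch of asking for that consent with the Confirmation helper (the wording of the question and the storageAllowed flag are just examples); the answer comes back on a separate intent with the actions_intent_CONFIRMATION event:

const { Confirmation } = require('actions-on-google');

function askConsent(agent) {
  conv.ask(new Confirmation(`Can I store your answers to personalize future conversations?`));
  agent.add(conv);
}

function handleConsent(agent) {
  // true if the user said yes, false otherwise
  const granted = conv.arguments.get('CONFIRMATION');
  conv.data.storageAllowed = granted;
  conv.ask(granted ? `Great, thanks!` : `No problem, I won't store anything.`);
  agent.add(conv);
}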

Surface Capabilities

Google Assistant is available on over a billion devices. From speakers and smart displays to cars and fridges, not all of them have the same outputs. It’s important, when you build an action, to take this into consideration and provide a fitting experience for each of them. Once more, the DialogflowConversation object has everything we need. To check the capabilities of the device on which this specific conversation is taking place, we add the following to our fulfillment:

const hasScreen =
  conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT');
const hasAudio =
  conv.surface.capabilities.has('actions.capability.AUDIO_OUTPUT');
const hasMediaPlayback =
  conv.surface.capabilities.has('actions.capability.MEDIA_RESPONSE_AUDIO');
const hasWebBrowser =
  conv.surface.capabilities.has('actions.capability.WEB_BROWSER');
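These flags let you branch your responses. A minimal sketch, assuming a hypothetical showMenu handler and image URL, that only attaches a card when the device has a screen:

const { BasicCard, Image } = require('actions-on-google');

function showMenu(agent) {
  conv.ask(`Here's today's special.`);
  if (hasScreen) {
    // Visual content only makes sense on devices with a display
    conv.ask(new BasicCard({
      title: `Today's special`,
      image: new Image({ url: 'https://example.com/special.jpg', alt: 'The dish' }),
    }));
  }
  agent.add(conv);
}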

Helpers

There are some things you cannot ask your users without taking a gamble. For instance, if you want to send a taxi to pick up your user, asking them to say their current address out loud is not ideal. Helpers are specialized intents that help in situations like this.

available helper intents: https://developers.google.com/actions/assistant/helpers#tab2

Real world examples

It’s great knowing that all of these exist and how they work, but what are some real use cases?

Session data (conv.data)

Session data can be very helpful in a lot of situations. If you want to make your action less annoying than repeating the same thing over and over again, you can use it to count the number of times your action failed to understand the user and provide more useful prompts. It can help you keep track of the score in a game, or give some hints if it’s a quiz game!

const LIST_FALLBACK = [
  `Sorry, what was that?`,
  `I didn't catch that. Could you tell me which one you prefer?`,
  `I'm having trouble understanding. Which one of those do you prefer?`];

const FINAL_FALLBACK = `I'm sorry I'm having trouble here. Let's talk again later.`;

function fallback(agent) {
  // The counter is undefined on the first fallback, so default it to 0
  const count = conv.data.fallbackCount || 0;
  if (count >= LIST_FALLBACK.length) {
    conv.close(FINAL_FALLBACK);
  } else {
    conv.ask(LIST_FALLBACK[count]);
    conv.data.fallbackCount = count + 1;
  }
  agent.add(conv);
}
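For this to run, remember to register the handler for Dialogflow’s built-in fallback intent in the intent map, e.g. intentMap.set('Default Fallback Intent', fallback); (that intent name is the default one Dialogflow creates for new agents).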

Storage data (conv.user.storage)

This is what makes the action personalized for each user, from greeting your user by their name to remembering their pet’s name.

Greet back your users:

function welcome(agent) {
  if (conv.user.storage.name === undefined) {
    conv.ask(`Welcome to my agent! What's your name?`);
  } else {
    conv.ask(`Welcome back ${conv.user.storage.name}!`);
  }
  agent.add(conv);
}

function setName(agent) {
  let user_name = agent.parameters.name;
  conv.user.storage.name = user_name;
  conv.ask(`Nice to meet you ${user_name}`);
  agent.add(conv);
}
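Note that agent.parameters.name assumes the Dialogflow intent defines a parameter called name (typically backed by the @sys.given-name system entity); adjust it to match your own intent’s parameters.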

Surface capabilities

Asking my action to show photos of hotels: if a screen isn’t available, it will offer to show the results on my phone. When we open the phone, though, nothing happens yet; we still need to handle the new surface, as shown below.
const { NewSurface } = require('actions-on-google');

function showHotelPhotos(agent) {
  const context = 'Sure, here are some images';
  const notification = 'Hotel photos';
  const capabilities = ['actions.capability.SCREEN_OUTPUT'];
  // Check whether the user has another device with a screen available
  const screenAvailable =
    conv.available.surfaces.capabilities.has('actions.capability.SCREEN_OUTPUT');
  if (screenAvailable) {
    conv.ask(new NewSurface({ context, notification, capabilities }));
  } else {
    conv.close(`You can ask another time for the photos. Anything else I can help with?`);
  }
  agent.add(conv);
}

In order for our photos to show up we need to create a new intent with an event:

The event should be:

actions_intent_NEW_SURFACE

Then we can just handle that intent from our fulfillment like any other intent. We add our images along with any info we want to provide to the user, and voilà.
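A minimal sketch of such a handler, assuming the new intent is named 'Show Hotel Photos' and using a hypothetical image URL:

const { BasicCard, Image } = require('actions-on-google');

function showPhotosOnNewSurface(agent) {
  // Runs once the conversation has moved to the screen device
  conv.ask('Here are the photos you asked for!');
  conv.ask(new BasicCard({
    title: 'Hotel photos',
    image: new Image({ url: 'https://example.com/hotel.jpg', alt: 'Hotel photo' }),
  }));
  agent.add(conv);
}

intentMap.set('Show Hotel Photos', showPhotosOnNewSurface);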

Helpers

To use the helpers, there are also two steps: requesting permission from the user, and getting the result back from the helper. An example request looks like this:

const { Permission } = require('actions-on-google');

function getZipCode(agent) {
  // ... check if we know it already
  const options = {
    context: `That's not a problem! To get the zip code from your location`,
    permissions: ['DEVICE_PRECISE_LOCATION'],
  };
  conv.ask(new Permission(options));
  agent.add(conv);
}

To get the result back from the permission request, we again need to create a separate intent, this time with the event actions_intent_PERMISSION.
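Before reading the location, it’s worth checking whether the user actually granted the permission. A minimal sketch (the refusal prompt is just an example) that then hands off to the findStore handler below:

function handlePermission(agent) {
  // PERMISSION is true when the user granted access, false otherwise
  const granted = conv.arguments.get('PERMISSION');
  if (!granted) {
    conv.ask(`No problem. Could you tell me your zip code instead?`);
    agent.add(conv);
    return;
  }
  findStore(agent);
}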

function findStore(agent) {
  const { latitude, longitude } = conv.device.location.coordinates;
  let zipcode = ... // Get from Google's geocoding API
  let nearestStore = ... // Get from your database
  let distance = ... // Travel time in minutes
  let means = ... // e.g. 'walking' or 'by car'
  conv.ask(`The nearest store is at ${nearestStore}. It's ${distance} minutes ${means}`);
  agent.add(conv);
}

Some more real-life examples can be found here.

Conclusion

The DialogflowConversation object is the ace you need to turn a good experience into a great one. At first it can be a bit intimidating, since it sometimes works quite differently from the simple intent ←→ webhook relationship you were using so far. Once you’ve done it a few times though, you’ll see there are just a few tricky parts you need to take care of, and you’ll be all set.

Interested in finding out more ways to take your agents to the next level? Check out this talk. You’ll find some best practices and some hard-learnt lessons from some of the actions I’ve built at Pixplicity.
