Magnificent Escape Action Update

Leon Nicholls
Published in Google Developers
Jan 8, 2019 · 11 min read

In a previous post, I discussed the design and publication of my escape-the-room Action called “Magnificent Escape”. Since the game launched on the Google Assistant, I’ve made various improvements, which I discuss in this post.

Regular releases

Since the Action launched on October 26, I’ve published 7 updates. These have included improved gameplay, many additional training phrases for intents, and some bug fixes.

Dialogflow automatically versions agents that are used for Actions on Google. However, fulfillment isn’t versioned. So, I keep 2 Dialogflow agents: a development agent that points to the development fulfillment URL and a production agent that points to the production fulfillment URL. To prepare for each release:

  • I update the production fulfillment first
  • I export the development agent as a zip file
  • I import the zip into the production agent (this step can also be scripted, as sketched below)
  • I test the production agent together with its fulfillment
  • I release the production agent for review to Actions on Google
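
These export and import steps can also be scripted. Here’s a minimal sketch using the Dialogflow Node.js client, assuming the development and production agents live in separate projects (the project IDs below are placeholders):

// Sketch: copy the development Dialogflow agent into the production agent
// using the Dialogflow v2 API. The project IDs are placeholders.
const { AgentsClient } = require('@google-cloud/dialogflow');
const client = new AgentsClient();

async function promoteAgent() {
  // Export the development agent; without an agentUri, the zipped agent
  // content is returned inline in the operation result.
  const [exportOp] = await client.exportAgent({
    parent: 'projects/magnificent-escape-dev'
  });
  const [exported] = await exportOp.promise();

  // Restore the exported zip into the production agent, replacing its
  // intents and entities.
  const [restoreOp] = await client.restoreAgent({
    parent: 'projects/magnificent-escape-prod',
    agentContent: exported.agentContent
  });
  await restoreOp.promise();
}

promoteAgent().catch(console.error);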

The Actions on Google reviews for these releases were each completed in about a day.

Screens

For devices with screens, suggestion chips are displayed to suggest things the user can do in the game.

As the user explores the room, these chips adapt to the context. However, the gameplay is open-ended: the user can explore the room in many different ways and can change her mind at any point. Since a maximum of 8 suggestion chips can be displayed at a time, the suggestions can’t cover everything the user can do at any point in the game. Some users thought that they could only pick from the suggestions and felt “stuck” in the game.

To improve the user experience for screens, the fulfillment logic for the suggestion chips was modified to only show specific chip values when a specific question is asked, like which direction to look. During other stages of the game, the suggestion chips default to generic values, like “hint”, “help”, “lobby”, and “exit”. This encourages the user to learn how to play the game.
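
Here’s a rough sketch of that chip logic using the actions-on-google client library (the intent names and prompts are just illustrative):

const { dialogflow, Suggestions } = require('actions-on-google');
const app = dialogflow();

// Generic chips shown during open-ended exploration to teach the game verbs.
const GENERIC_CHIPS = ['hint', 'help', 'lobby', 'exit'];

app.intent('look_item', (conv, { item }) => {
  conv.ask(`The ${item} is closed. What will you try now?`);
  // Open-ended stage: default to the generic chips.
  conv.ask(new Suggestions(GENERIC_CHIPS));
});

app.intent('which_direction', (conv) => {
  conv.ask(`You're facing south. Which direction do you want to look?`);
  // A specific question was asked, so show chips for its specific answers.
  conv.ask(new Suggestions(['north', 'east', 'west']));
});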

Improved gameplay

Since the launch, I’ve been using Dialogflow’s Training feature to find unmatched intents and its History feature to look at how users are playing the game. Although users were escaping the rooms, it became clear that the gameplay needed to be improved.

Users who already know how to play escape-the-room games appear to be very determined to explore everything in the room until they eventually escape. For other users, however, it wasn’t clear what the goal was. To help with that, the prompt for the first time a user enters the first room was improved to make it clear what item the user needs to find to escape the room:

“You teleport into the office. You can see a locked door. You need to find a key to escape. You’re facing south. You can also look north, east, and west. Which direction do you want to look?”

Originally, the Action would drop hints during the game about things to try. However, these were generic, like suggesting the directions to look. The gameplay was improved by rewarding the user with more specific hints when they perform certain actions in the room. The more the user explores the room, the more hints are earned.
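
A minimal sketch of that reward logic, assuming the game counts explored items in conversation data (the property names and thresholds here are hypothetical):

// Sketch: unlock the next, more specific hint once enough items have been
// explored. `room.hints` is assumed to be an ordered list of hints, from
// general to specific; `conv.data` is the per-conversation storage in
// actions-on-google.
const HINT_THRESHOLDS = [3, 6, 9]; // items explored before each hint unlocks

function maybeAwardHint(conv, room) {
  const explored = conv.data.exploredItems || 0;
  const earned = conv.data.hintsEarned || 0;
  if (earned < HINT_THRESHOLDS.length && explored >= HINT_THRESHOLDS[earned]) {
    conv.data.hintsEarned = earned + 1;
    return room.hints[earned]; // reward the exploration with a better hint
  }
  return null;
}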

Each room also has an easter egg. Originally, the easter eggs would give the user a gold coin as a reward. However, some users got confused about what to do with the coin. The easter egg was changed to give a special hint on how to solve the room’s puzzles.

There were also some red herring items in the rooms, but these proved too distracting, so they were removed. When the game is played by voice, the user has to imagine the items in the room, and anything that distracts from that undermines the gameplay. For voice especially, designing for brevity and eliminating diversions is very important.

Experiments

In the previous post I mentioned an A/B experiment using the Google Analytics Measurement Protocol to determine if giving users a special welcome message when invoking the Action using the ‘Play Game’ built-in intent makes a difference.

Here is a snippet of Node.js code for sending experiment events to Google Analytics:

// This snippet assumes `this.key` holds the Google Analytics tracking ID,
// `uuid` identifies the user, `id` is the experiment variant, `count` is the
// value being reported, and `logger` is the app's logging utility.
const request = require('request');
const urlencode = require('urlencode');

// Google Analytics Measurement Protocol endpoint and parameter names.
const GOOGLE_ANALYTICS_URL =
  'https://www.google-analytics.com/collect';
const VERSION = 'v';
const TRACKING_ID = 'tid';
const CLIENT_ID = 'cid';
const HIT_TYPE = 't';
const EVENT = 'event';
const EVENT_CATEGORY = 'ec';
const EVENT_ACTION = 'ea';
const EVENT_LABEL = 'el';
const EXPERIMENT = 'experiment';

const options = {
  url: GOOGLE_ANALYTICS_URL,
  json: false
};

// Build the urlencoded Measurement Protocol payload for an event hit.
let body = VERSION + '=' + 1;
body += '&' + TRACKING_ID + '=' + this.key;
body += '&' + CLIENT_ID + '=' + urlencode(uuid || 'anonymous');
body += '&' + HIT_TYPE + '=' + EVENT;
body += '&' + EVENT_CATEGORY + '=' + urlencode(id || 'none');
body += '&' + EVENT_ACTION + '=' + EXPERIMENT;
body += '&' + EVENT_LABEL + '=' + urlencode(count || '0');
options.body = body;

// POST the hit. The Measurement Protocol returns a 2xx status even for
// malformed hits, so only transport errors are surfaced here.
request.post(options, function (error, response, body) {
  if (error) {
    logger.error(`post: ${error}`);
  } else {
    logger.debug(`post: ${JSON.stringify(response)}`);
  }
});
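
The snippet above only reports events; the variant itself has to be assigned somewhere. One simple approach (a sketch, not necessarily how the game does it) is to pick a bucket once per user and persist it in user storage so the experience stays consistent across sessions:

// Sketch: assign each user to an experiment variant once and remember it in
// userStorage; the variant names are placeholders.
function getExperimentVariant(conv) {
  if (!conv.user.storage.experimentVariant) {
    conv.user.storage.experimentVariant =
      Math.random() < 0.5 ? 'intro_message' : 'no_intro_message';
  }
  return conv.user.storage.experimentVariant;
}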

The results are in, and the data shows that more users leave the game within the first minute if they don’t get the intro message. Also, more users play the game longer when they get the message. So, I’ll be making the special intro message a permanent feature of the Action. It would be interesting to see if other developers have a similar experience with their Actions.

Also, when those users leave the game, the exit audio prompt tells them explicitly how to invoke the Action again:

“OK. To play again, just say ‘talk to magnificent escape’. Let’s try this again later.”

Now that the technical side of A/B testing is working, I’m considering various other experiments to improve the Action.

Analytics

Analytics has proven to be a valuable source of data for understanding how well the game is performing and where the design could be improved. In a previous post I covered the basics of how to use various analytics tools with Actions.

Event data in Google Analytics tracks how many users escape the rooms. Each room has a difficulty level (“office” is easy, “bedroom” is hard and “garage” is super hard). However, judging by the stats for the number of users who escaped, the bedroom might need to be relabelled as the easiest.
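
Those escapes are reported with the same Measurement Protocol pattern as the experiment events above; here’s a short sketch (the category, action, and label values are assumptions, not the game’s actual event schema):

// Sketch: build a GA event payload for a room escape, reusing the parameter
// names from the earlier snippet. The event values are assumptions.
const urlencode = require('urlencode');

function escapeEventBody(trackingId, clientId, roomId, difficulty) {
  let body = 'v=1';
  body += '&tid=' + trackingId;
  body += '&cid=' + urlencode(clientId);
  body += '&t=event';
  body += '&ec=' + urlencode('escape');    // event category
  body += '&ea=' + urlencode(roomId);      // e.g. 'office', 'bedroom', 'garage'
  body += '&el=' + urlencode(difficulty);  // e.g. 'easy', 'hard', 'super hard'
  return body;
}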

Contextual commands

The logs occasionally show that users say things like “open it” or “move it”. The Dialogflow agent didn’t have intents to handle these user phrases and would respond with one of the fallback prompts.

USER: look at the drawer

ACTION: The drawer is closed. What will you try now?

USER: open it

ACTION: Sorry, I didn’t understand that. You can look around or use items. What do you want to do?

The user’s intention was clearly to do an additional action on an item that was just explored. The game already keeps track of the current item and its state, so it would be relatively simple to assume that the last item is the missing context for the user’s request. Several new intents were added for the common use cases to make these contextual commands work more naturally:

USER: look at the drawer

ACTION: The drawer is closed. What will you try now?

USER: open it

ACTION: Nice work. As you pull the desk drawer open, you find a toothpick inside and you pick it up. So, what’s next?
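
In fulfillment, that boils down to remembering the most recently examined item in session storage; here’s a rough sketch (the intent names and responses are illustrative, and `app` is the actions-on-google app from the earlier sketch):

// Remember the most recently examined item so that follow-up commands like
// "open it" have something to refer to.
app.intent('look_item', (conv, { item }) => {
  conv.data.currentItem = item;
  conv.ask(`The ${item} is closed. What will you try now?`);
});

app.intent('open_it', (conv) => {
  const item = conv.data.currentItem;
  if (!item) {
    // Nothing to refer back to yet; ask the user to be specific.
    conv.ask('What would you like to open?');
    return;
  }
  // In the real game this would run the normal "open <item>" logic;
  // here we just acknowledge the resolved item.
  conv.ask(`You open the ${item}. So, what's next?`);
});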

More sounds

I’ve gotten feedback from various users that the music and sounds in the game are very effective, especially the sound effects when the user interacts with items.

I’ve added more sounds and replaced some of the effects with even better sounding versions.

However, as a user is exploring the room, she might spend some time without hearing any sounds. To provide feedback on the user’s progress, I thought it might be interesting to experiment with a sound at the beginning of each turn in the conversation. I settled on a short sound with several simple notes. To me, it feels uplifting, and it’s also a gentle nudge to maintain the game momentum.
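
That per-turn sound is just a short SSML audio element prepended to the response text; something like this (the sound URL is a placeholder for the game’s own hosted file):

// Sketch: prepend a short "turn" sound to each spoken response using SSML.
const TURN_SOUND_URL = 'https://example.com/sounds/turn.ogg'; // placeholder

function respond(conv, text) {
  conv.ask(`<speak><audio src="${TURN_SOUND_URL}"/>${text}</speak>`);
}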

More intents

The Dialogflow History feature has resulted in a number of improvements to intents for the agent. The Dialogflow agent now has 122 intents and 12 entities.

Most of the changes to the intents have been to expand the user phrases for each intent to improve the machine learning training and handle more variations in the user input.

Small talk

I have noticed in the logs that users occasionally say things that are not directly related to the gameplay, like saying “hello”. Dialogflow has a feature called Small Talk, which provides some convenient intents for handling casual conversational responses. However, simply enabling all of its intents can cause unexpected interference with your agent’s other intents. It might make sense to pick just some of these intents for your own agent.

So, I imported the Small Talk prebuilt agent into its own Dialogflow agent. By hovering over each intent in the Dialogflow GUI, I saved each intent to a JSON file. I then only imported those intents I wanted into my agent by using the Upload Intent menu option next to the Create Intent button. The static responses for each of these were then edited or moved to fulfillment.

Selecting options

In the previous post, I mentioned that the agent has intents to allow the user to select a room in various ways, including the room name or its ordinal position in the list. Each room also has a difficulty level, and some users pick the room using that: “go with easy”. So, I added a new entity for each of the difficulty levels and their synonyms. These entities’ values were then referenced in a new intent for selecting the room by difficulty level.
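
In fulfillment, the matched difficulty value then just maps back to a room; a minimal sketch (the intent name and responses are illustrative, and `app` is the actions-on-google app from the earlier sketch):

// Sketch: map the matched difficulty entity value to the corresponding room.
const ROOMS_BY_DIFFICULTY = {
  'easy': 'office',
  'hard': 'bedroom',
  'super hard': 'garage'
};

app.intent('select_room_by_difficulty', (conv, { difficulty }) => {
  const room = ROOMS_BY_DIFFICULTY[difficulty];
  if (room) {
    // In the real game this would enter the selected room.
    conv.ask(`You teleport into the ${room}.`);
  } else {
    conv.ask('Which room would you like: the office, bedroom, or garage?');
  }
});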

Languages

Currently, the game only supports English, but it’s clear that users from all over the world are playing the game. Users are also asking the game to change its language:

“do you speak hindi”

“talk spanish”

A new intent was added to respond that only English is supported.

User reactions

It’s interesting to see how users react when playing the game — from playful responses, to negative responses, to uninspired responses.

For each of these, new intents were added that respond appropriately and attempt to encourage the user to keep playing:

USER: bla, bla, bla

ACTION: yadda yadda yadda. So, what will you do now?

USER: this is bad

ACTION: Sorry you feel that way. You might want to retrace your steps. You might’ve missed something. What will you try now?

USER: this is boring

ACTION: Sorry to hear that. Did you know there’s also an easter egg hidden somewhere in this room? You can find it! So, what’s next?

I also noticed that some users would explore the items in the room and then say something like “forget the desk” or “forget that”. It seems that users were thinking aloud as they played the game. Unfortunately, the game would respond with a fallback prompt about not understanding what the user was trying to do. To improve the experience, a new intent was added that responds with just a simple confirmation:

“OK. What would you like to try next?”

Orientation

The user is told at the beginning of the game that each room has 4 walls: north, south, east and west. This is the main way to navigate each room. However, users also say things like “turn left” or “look behind me”. Users also might want to just take a look around with “look around the room” or “describe the room”.

Users might also just want to be reminded of what they’ve found or where things are, with requests like “where is the door” or “what items do i have”.

Specific intents were added that support these types of user requests and their variations, even though they are not part of the game’s instructions.
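
Handling relative directions mostly comes down to translating them against the wall the user is currently facing. A small sketch of that mapping (not the game’s actual code):

// Sketch: translate relative commands like "turn left" or "look behind me"
// into an absolute wall, based on the direction the user currently faces.
const DIRECTIONS = ['north', 'east', 'south', 'west']; // clockwise order

function resolveRelative(facing, relative) {
  const offsets = { left: -1, right: 1, behind: 2 };
  const index = DIRECTIONS.indexOf(facing);
  const length = DIRECTIONS.length;
  return DIRECTIONS[(index + offsets[relative] + length) % length];
}

// Example: facing south, "turn left" resolves to east.
// resolveRelative('south', 'left') === 'east'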

Proactive support

The game provides hints and lets users ask for help if they get stuck trying to escape the rooms. However, there are other indications that users are struggling or need help that could be proactively detected in fulfillment.

One signal might be the user asking the same question twice in a row. An intent might match the user input, but the response might not be what they expect, so it’s natural for users to ask the question again. A utility method was added to the fulfillment logic to store the last user request in the session storage and detect when the same raw user input is repeated in sequence. The utility then short-circuits the normal intent-handling logic to provide a specific response that helps the user get back on track:

“Looks like we are talking past each other. You can look in different directions or look at the items you’ve found. What do you want to try now?”
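
A minimal sketch of that utility, using conv.data as the session storage and the middleware hook in the actions-on-google library (the property names are illustrative):

// Sketch: detect when the user repeats the exact same raw input twice in a
// row, so intent handlers can short-circuit into a recovery prompt.
function isRepeatedInput(conv) {
  const current = (conv.query || '').trim().toLowerCase();
  const repeated = conv.data.lastRawInput === current;
  conv.data.lastRawInput = current;
  return repeated;
}

// Runs before every intent handler; handlers check conv.isRepeat and respond
// with the "talking past each other" prompt above instead of their usual logic.
app.middleware((conv) => {
  conv.isRepeat = isRepeatedInput(conv);
});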

For some of the puzzles, like opening the safe, it might be clear that the user is guessing or trying the wrong moves for too long. The fulfillment can detect this and interrupt the interaction with instructions on how to find a hint in the room to solve the puzzle.

Retention

To help increase retention, the exit prompt when users cancel the Action is customized based on how much time the user has spent in the game or how much of the room has been explored.

Note that all cancel prompts are limited to 60 characters.

For first-time users, we make it clear that the game state is persisted so they can continue playing later:

“OK. Pick up right here next time. Come back soon.”

There are also several variations that encourage the user to keep playing the game:

“Sure, but you were so close to finding the easter egg.”

“OK, but you were so close. Come back soon.”

“OK. And you were doing so good. Hope to see you soon.”
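
A sketch of how the cancel handler could pick between those prompts, assuming the game tracks progress in user and session storage (the intent name and property names are hypothetical; each prompt stays within the 60-character limit mentioned above):

// Sketch: choose a cancel prompt based on how far the user has gotten.
// 'cancel' is a Dialogflow intent mapped to the actions_intent_CANCEL event.
app.intent('cancel', (conv) => {
  let prompt;
  if (conv.user.storage.firstTime) {
    prompt = 'OK. Pick up right here next time. Come back soon.';
  } else if (conv.data.nearEasterEgg) {
    prompt = 'Sure, but you were so close to finding the easter egg.';
  } else {
    prompt = 'OK. And you were doing so good. Hope to see you soon.';
  }
  conv.close(prompt);
});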

Mistakes made

Looking at the logs, I saw occasional incorrect intent matching, which was due to the same training phrase being repeated in multiple intents. After auditing all the intents, I found other cases and removed the unnecessary duplicates.

For the intent that handles user requests to change the language, I used the same parameter value in all the training phrases, which caused some user responses not to be matched correctly. It’s important to vary the parameter values used in the training phrases so the agent’s machine learning works better.

Next steps

In this post, I showed how important it is to keep maintaining your Action after it’s launched by studying how your users interact with your Action.

I’ll continue with my daily audit of the Dialogflow history logs to keep improving the intent handling.

I’m also working on a way to add more rooms as digital purchases.

Try playing my game and tell me what improvements you would recommend!

Want more? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.

Edit: Read how Magnificent Escape was open sourced.
