Mining Your Actions History

Dialogflow has a valuable feature called History, which shows a high-level view of the conversations between your users and your Action. Once your Action is deployed, this page is an important tool for analyzing how users interact with your Action and where your conversational design might need improvement.

Lessons learned

We recently launched sample Actions, which let you create working Dialogflow agents with a few clicks. These agents implement our best practices for Actions on Google. To ensure these samples comply with our own review requirements, we created instances of each of the sample Actions and put them through the public review process.

We’ve been studying the history pages for each of these deployed Actions and have found some interesting results. Here is the history page for the sample Action about using session state (a very basic interactive adventure game):

The history page shows that this Action is triggered in different ways:

Some of the rows in the page have warning icons:

  • The orange-colored icons indicate a mismatched intent. These mismatches happen when your agent fails to match the user utterance to a particular intent, usually because no intent training phrases or entity values match the utterance.
  • A red-colored icon indicates a fulfillment error, which is typically a crash in the webhook call (see our post on debugging errors).
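One way to avoid fulfillment errors like these is to wrap intent dispatch in a catch-all handler so that a crash in your code turns into a graceful reply instead of a broken conversation. Here is a minimal Python sketch; the intent names, handlers, and reply texts are hypothetical, and the request and response follow the Dialogflow v2 webhook JSON shape:

```python
def handle_webhook(request_json):
    """Dispatch a Dialogflow v2 webhook request to an intent handler.

    Any handler crash is caught, so the Action returns a graceful
    message instead of surfacing a fulfillment error in History.
    """
    # Hypothetical handlers keyed by intent display name.
    handlers = {
        "choose.door": lambda params: "You open the %s..." % params.get("door", "door"),
        "go.back": lambda params: "You step back into the previous room.",
    }
    intent = request_json["queryResult"]["intent"]["displayName"]
    params = request_json["queryResult"].get("parameters", {})
    try:
        return {"fulfillmentText": handlers[intent](params)}
    except Exception:
        # In real code, log the exception here for later debugging.
        return {"fulfillmentText": "Sorry, something went wrong. Which door would you like?"}
```

With this pattern, an unknown intent or a bug inside a handler still produces a valid webhook response, and the error shows up in your logs rather than as a red icon in History.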

For the mismatched intents, we found some interesting cases. Here is an example of unexpected input from the user:

Since no intent matched the user input “read my mind”, the default fallback intent was invoked. The sample Action follows best practices with an error recovery strategy that gets the conversation back on track, or exits the Action if the interaction stays off track.
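Such a recovery strategy can be sketched as a fallback handler that re-prompts a limited number of times before exiting. The prompts and the limit of two re-prompts below are illustrative assumptions, not the sample Action’s actual wording:

```python
MAX_FALLBACKS = 2  # assumed limit: re-prompt twice, then exit the Action

def handle_fallback(session_state):
    """Return (response_text, should_exit) when the default fallback fires.

    session_state is a dict persisted across turns; a successful intent
    match elsewhere in the agent should reset 'fallback_count' to 0.
    """
    count = session_state.get("fallback_count", 0) + 1
    session_state["fallback_count"] = count
    if count > MAX_FALLBACKS:
        return ("Sorry, I still didn't get that. Let's play again later!", True)
    return ("Sorry, I didn't understand. You can say 'green door' or 'blue door'.", False)
```

The key design point is that the counter lives in session state, so the Action only gives up after repeated consecutive misunderstandings rather than on the first one.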

This unmatched user input might be a signal to improve the design by adding more training phrases, adding more entity values, or even adding new intents to handle this kind of input.

Other interesting examples are about the different ways the user can select options. In this game, the user is prompted to select one of two doors in each room. We’ve noticed that users might respond with the full door name, like “green door”, with just part of it, like “green”, or with additional words, like “of course blue door”.

The entity for these options has been designed to handle partial matches:

We also have many intent training phrases for the various ways the user could possibly select an option:

Other interesting ways users select options are by position, e.g. “the first one”, “second door”, or “door number 1”. Since these responses don’t contain the entity for the door name, the intent for matching options isn’t triggered. The fix is to add new intents trained with phrases that match the various ways users can state positions.
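In Dialogflow, positional selection would be a separate intent with its own training phrases; the resolution logic behind it can be sketched as follows (the ordinal vocabulary and the two-door options list are assumptions for illustration):

```python
# Hypothetical mapping from position words to a zero-based option index.
ORDINALS = {
    "first": 0, "1": 0, "one": 0,
    "second": 1, "2": 1, "two": 1,
}

def select_by_position(utterance, options):
    """Resolve phrases like 'the first one' or 'door number 2' to an option."""
    # Drop filler like 'number' so 'door number 1' tokenizes as 'door', '1'.
    tokens = utterance.lower().replace("number", " ").split()
    for token in tokens:
        if token in ORDINALS and ORDINALS[token] < len(options):
            return options[ORDINALS[token]]
    return None
```

This handler complements the door-name entity: between the two intents, both “blue door” and “the second one” end up selecting the same option.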

We found the same issue when testing our early template design for trivia games. For that game, we added several intents, with training phrases like these:

Here is another interesting conversation:

The sample Action has an intent to handle the user request to go back in the game:

However, it seems we might need to add another training phrase to this intent to handle the user input “go return”. In the meantime, the default fallback response prompted the user to clarify the request. Based on the history, we can see the user restated the request as “I want to go back”, which our intent handled correctly. In this case, the error recovery strategy worked quite well.

Study your Action history

We’ve touched on several interesting results from our own Action and discussed how to improve its design. We highly recommend that you regularly mine your history page for valuable feedback, and adjust the design to reduce the number of mismatched intents. Take a look at the training feature, which provides a GUI for easily adding unmatched user inputs to existing intents.

A good starting point for your Action is one of the sample Actions, which you can then extend and customize for your particular use case. This will ensure that you start with a well-designed Action that implements many of our best practices.

Want more? Head over to the Actions on Google community to discuss Actions with other developers. Join the Actions on Google developer community program and you could earn a $200 monthly Google Cloud credit and an Assistant t-shirt when you publish your first app.