Alkesh Srivastava
Dec 31, 2018

Google recently announced support for Hindi, Marathi and several other regional languages for the Google Assistant. This allows users to converse with the Assistant in the language of their choice. A majority of the population speaks more than one language, and having a bilingual assistant helps them accomplish tasks much more easily.

Now that you know how to build great Actions, you need to start thinking about how to grow your user base beyond conventional English speakers. You might be missing out on a large number of potential users because of the vernacular barrier.

Localisation

This is where localisation comes into play. Localisation means creating customized voice experiences for people speaking different languages. By localising your Action, you get the chance to provide a voice experience that feels more natural and connects more deeply with non-English speakers.

Some important concepts we need to be familiar with are —

  • Language: An agreed-upon spoken and written mode of communication.
  • Region: A defined physical area that’s usually associated with a country.
  • Locale: The combination of a language and region.

Note that throughout the blog, bilingual has been used instead of multilingual, but you can add any number of languages to your Action. It’s just that for the purpose of this blog we will be adding only one additional language.

Localisation also allows you to expand your user base to a completely new set of people and helps you reach the Google Developer Community Program milestone of taking your Action global.

Making your Dialogflow agent bilingual

First, you need to add language support to your Dialogflow agent. This can be done from the Dialogflow console using the following steps —

The language that you initially used to build your agent is the default language. Since we used English to build our agent, all the intents, entities and training phrases were initially defined in English. You can add a new language by clicking on the ‘+’ icon.

Select the language of your choice for making your agent bilingual. It is important to note that since Actions on Google and Dialogflow are two separate platforms, the number of supported languages may differ. You can find the list of languages supported by Actions on Google here.

After selecting the language, Dialogflow automatically copies all the intents and entities. Whenever you define an intent for one language, it gets copied automatically to all the other languages; you only need to specify the training phrases for each language separately.

It is always advisable to get a native speaker to do the translation. This allows you to preserve the persona of your Action while translating the training phrases and the responses.

If you are serving static responses directly from Dialogflow, that is all there is to it. Just test out your new language in the test simulator and you are good to go. But if you are using a custom webhook, you’ll need to serve locale-specific responses from the backend.

Making your webhook bilingual

For the purpose of this example, we will be considering a webhook that uses the Node.js client library. In order to serve different responses based on the user’s locale setting, we will be using the i18n library to handle the dynamic responses.

The trick is to dynamically serve a different set of responses for different locales. The responses are stored in JSON files and served according to the locale setting of the user. We can retrieve the user’s locale with conv.user.locale.

Install the i18n library using npm install i18n and then use the following piece of code to configure it —

index.js

Note that our Action has its default locale set to en-IN. However, you may use any other language for the default locale. We have also defined the location of our locales folder.

Set the current locale to match the user’s locale, so that i18n picks the responses for the user’s language.

index.js

Make a locales folder for storing the responses and create a JSON file for each locale that you will be supporting. The file names must follow the IETF language tags for those locales. Each file holds key-value pairs: the keys stay the same across locales, while the values hold the translated responses. Since we have only two languages, Hindi and English, we will be using two JSON files.

The directory structure for the project
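A layout along these lines, assuming a standard Node.js project (package.json is assumed):

```
project/
├── index.js
├── package.json
└── locales/
    ├── en-IN.json
    └── hi-IN.json
```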
en-IN.json
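The keys and strings below are illustrative; whatever keys your webhook looks up must appear in every locale’s file.

```
{
  "WELCOME": "Hello! Welcome to our Action.",
  "ASK_NAME": "What is your name?",
  "GREETING": "Nice to meet you, {{name}}!"
}
```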

And the content for the Hindi language will look something like —

hi-IN.json
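Mirroring the same keys with Hindi strings (the translations here are illustrative):

```
{
  "WELCOME": "नमस्ते! हमारे Action में आपका स्वागत है।",
  "ASK_NAME": "आपका नाम क्या है?",
  "GREETING": "आपसे मिलकर ख़ुशी हुई, {{name}}!"
}
```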

Now we can return the responses by looking them up with their keys —

index.js

We can also pass parameters into the responses using placeholders in the translation strings —

index.js

It is often a good practice to have separate key-value pairs for speech and text in your responses. This allows you to have flexibility over the content while customizing voice experiences for your users. It is also beneficial while specifying SSML in your responses.

Making your Action bilingual

The final steps include setting up your Action to support the additional languages. This will enable users to use your Action in more than one language.

In the Actions on Google console, head over to the Overview section and click on Modify Languages.

Now you can choose any number of languages that you wish your Action to support. Not only that, you can also specify multiple locales for some of the languages, such as English and French.

The addition of locales is a neat little feature and lets you serve users more customized responses. Words like colour, aeroplane and lift might be more suitable for your British users but might seem a bit out of place for users based in the USA. Adding locale-specific responses allows your Action to connect with users based on their vernacular preference and helps deliver a great conversational experience.

The final step is to add the directory information in the new language that you just added. You’ll need to translate the description (both long and short), the training phrases and the privacy policy.

And that pretty much sums up the process of building a bilingual Action for Google Assistant.

Closing thoughts

We should always remember that designing great voice user experiences takes more than running all your responses through Google Translate. Before you even proceed with localisation, make sure that your Action is robust, fully functional and scalable; an Action that is past the MVP stage is much easier to localise. A good practice is to write your tests before you begin.

When building Actions that aim at solving region-specific problems, the conversation design process should take into consideration the vernacular preference of the users.

Whenever you decide to localise your Action, think of the user first. Think of what the user might want to accomplish using your Action in the language of their choice. Following a vernacular-first approach really tends to simplify the conversation design process. What is it that the user wants? A recipe for ‘mashed eggplant’ or for ‘बैंगन भरता’? 😅


Voiceano is a voice interaction design and development studio that builds Actions for Google Assistant. If you feel that your brand needs a voice, then you should definitely grab a drink with us! 🍻😄

Written by Alkesh Srivastava
Co-founder at Voiceano | Machine Learning Engineer at Smalt and Beryl