STEP BY STEP

Cloudy, With a Chance of Voicebots

Designing a Weather Forecast AI App on the Promethist Platform

Anna Cranfordová
PromethistAI

--

Photo by NOAA on Unsplash

The Promethist Platform is a tool for developing conversational AI applications. It runs in the cloud (all puns intended) and aims to be as accessible as possible by simplifying the user interface, so that even people with no prior experience in designing AI applications can succeed.

The kinds of applications that can be created on the Promethist Platform are numerous, as long as they are conversational. One such example is a simple weather forecast app that one of our colleagues developed for fun. This article will show you how to design such an application and maybe even inspire you to create something completely new.

Final Dialogue Graph of the Weather Forecast App

Getting started

All you need to do before designing your conversational application is to create a Promethist Platform account and open the Dialogue Designer. You may also watch the video tutorial on how to create your first application, which will show you the basics of how to use the platform:

If you have watched the tutorial or read our first step-by-step article about an app that tells knock-knock jokes, you know that every dialogue created on the Promethist Platform consists of various nodes connected by links into a dialogue graph, which represents the conversation between the bot and the user. But how do you design an app that knows the weather forecast and can tell you that it is currently sunny in Sydney and overcast in London?

Global Intents and Functions

This dialogue makes exemplary use of the node called Global Intent. An “intent” represents what the user says (or is expected to say), and the Global Intent node ensures that the part of the dialogue connected to it is activated whenever the user says something matching what the node contains, that is, what we have written as its Examples in the context panel on the right side of the designer. The user doesn’t have to say the exact same words we have written, though: the AI uses so-called intent recognition to understand what the user has said and to identify semantically similar utterances. This means the AI reacts correctly even when the user’s phrasing differs from the one we defined. We can also help it by including several possible formulations with the same meaning.

Global Intent

There are three different Global Intent nodes in our weather app, each connected to a different Function node. The Function nodes are the key parts of the dialogue, and they are also the tricky parts that need a bit of coding.

The first Global Intent contains various examples of how the user might ask about the weather forecast. This node is connected to a Function node that checks whether the user’s location is available, which depends on whether the user has enabled location sharing on their device.

Function Node
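The screenshot above shows the actual Function as it appears in the designer. Purely as a hypothetical illustration of its logic, written in ordinary Kotlin rather than the platform’s real API (the Location class and the node names below are invented for this sketch), the decision could look roughly like this:

// Hypothetical sketch only: the Location class and the node names are
// illustrative, not the Promethist Platform's actual function API.
data class Location(val latitude: Double, val longitude: Double)

// Decide which dialogue branch to follow based on whether the client
// shared the user's location with the session.
fun chooseNextNode(location: Location?): String =
    if (location != null) "ForecastSpeech"   // location known: give the forecast right away
    else "AskWhichCitySpeech"                // location unknown: ask the user for a city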

If the location is available, the bot responds with a Speech node that gives the weather forecast immediately, using the variable #weatherDescription, which lets the AI fill in the relevant forecast. How is it able to do this? It’s the code again. The application is connected to https://openweathermap.org/ through an API (Application Programming Interface), an interface that lets two applications interact, which allows our weather app to pull information about the weather from this service.

Speech Node
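The forecast text itself comes from the OpenWeatherMap API mentioned above. Outside the platform, the same request can be reproduced in a few lines of Kotlin. This is only a minimal sketch: it assumes you have your own free API key, and it extracts the description with a regex instead of a proper JSON parser.

import java.net.URL

// Minimal sketch of the OpenWeatherMap "current weather" request.
// Requires a free API key from openweathermap.org.
fun currentWeatherDescription(lat: Double, lon: Double, apiKey: String): String? {
    val json = URL(
        "https://api.openweathermap.org/data/2.5/weather" +
            "?lat=$lat&lon=$lon&units=metric&appid=$apiKey"
    ).readText()
    // The response contains e.g. "description":"overcast clouds".
    // A regex keeps the sketch dependency-free; use a JSON library in real code.
    return Regex("\"description\":\\s*\"([^\"]+)\"").find(json)?.groupValues?.get(1)
}

fun main() {
    // London coordinates as an example.
    println(currentWeatherDescription(51.5074, -0.1278, "YOUR_API_KEY"))
}

Called with London’s coordinates, this returns something like “overcast clouds”, which is exactly the kind of text that fills the #weatherDescription variable in the Speech node.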

If the location is not available, we don’t despair: one of the great advantages of conversational AI is that when it lacks some information, it can simply ask the user for it. The users are not the only ones who can have questions! So the bot responds with a Speech node that says “I can’t see where you are located,” and in the following Speech node it asks “Which city do you want to know the weather forecast for?” Afterward comes a User Input node followed by two Intent nodes. The first one is there in case the user does not answer the question and asks about the weather again instead, so that the AI can direct them back to the question. The second Intent node expects the user either to ask about the weather forecast again, this time specifying the town, or to simply answer with the name of the town. Its Examples include sentences with the placeholder cities (Prague|London|Paris), but it works for other cities as well. The AI really is smart!

User Input and Intent

Both this Intent node and the second Global Intent, which contains questions where the user asks about the weather forecast for a particular city right away, lead to a Function that is programmed to determine the weather forecast based on the entity containing the name of the city.

Function Node
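Again as a rough sketch outside the platform (not the code in the screenshot above), the same OpenWeatherMap endpoint also accepts a city name directly through the q parameter, so a lookup based on the recognized city entity only needs a different query string:

import java.net.URL
import java.net.URLEncoder

// Sketch: fetch the current weather for a city recognized from the user's
// utterance. OpenWeatherMap accepts the city name via the "q" parameter.
fun weatherForCity(city: String, apiKey: String): String? {
    val query = URLEncoder.encode(city, "UTF-8")
    val json = URL(
        "https://api.openweathermap.org/data/2.5/weather" +
            "?q=$query&units=metric&appid=$apiKey"
    ).readText()
    return Regex("\"description\":\\s*\"([^\"]+)\"").find(json)?.groupValues?.get(1)
}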

The third Global Intent is activated when the user asks an additional question about the temperature. The AI remembers what city was mentioned last and provides the temperature thanks to the following Function:

Third Global Intent and Function

The bot answers in the following Speech node with “The current temperature in #locationName is #current degrees with a high of #high degrees and a low of #low degrees.”
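Those four variables come from the same OpenWeatherMap response, which reports the current temperature together with daily extremes and the resolved city name. Below is a hedged, dependency-free sketch of that extraction; the field names temp, temp_min, temp_max and name match the API, while the Temperatures class and the mapping of #high to temp_max and #low to temp_min are our own illustration.

import java.net.URL
import java.net.URLEncoder

// Illustrative container for the values the Speech node reads
// (#locationName, #current, #high, #low).
data class Temperatures(
    val locationName: String,  // -> #locationName
    val current: Double,       // -> #current
    val high: Double,          // -> #high (assumed to come from temp_max)
    val low: Double            // -> #low  (assumed to come from temp_min)
)

fun temperaturesForCity(city: String, apiKey: String): Temperatures? {
    val json = URL(
        "https://api.openweathermap.org/data/2.5/weather" +
            "?q=${URLEncoder.encode(city, "UTF-8")}&units=metric&appid=$apiKey"
    ).readText()
    // Regex extraction keeps the sketch self-contained; use a JSON parser in real code.
    fun number(field: String) =
        Regex("\"$field\":\\s*(-?[0-9.]+)").find(json)?.groupValues?.get(1)?.toDouble()
    val name = Regex("\"name\":\\s*\"([^\"]+)\"").find(json)?.groupValues?.get(1)
    val current = number("temp")
    val high = number("temp_max")
    val low = number("temp_min")
    return if (name != null && current != null && high != null && low != null)
        Temperatures(name, current, high, low)
    else null
}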

All the final Speech nodes from these Global Intent dialogue paths lead to a single User Input node. At this point the user can ask about the weather forecast for another city or about the temperature. They may also say “Stop”, which is defined in the linked Intent node, followed by a Speech node where the bot says “Bye” and then ends the session with the Exit node. If the user doesn’t say anything, a Global Action comes into action.

Global Actions

The Global Action node is similar to the Global Intent node, with the difference that it responds to something that happens in the dialogue rather than something the user says. You configure a Global Action by specifying the particular action you want the node to react to in the context panel. This way you can have several Global Action nodes in your application, so that the bot is able to respond to various situations.

You can have the action #error for instances where an unexpected mistake occurs, which might happen when the AI does not understand the user’s utterance or, in the case of this dialogue specifically, if the API does not respond to our weather forecast request because of a network connection error. If we didn’t have this Global Action in the dialogue, the bot would only announce an error and the session would end. The #error Global Action node gives us the opportunity to continue the conversation, with the AI reacting through the connected Speech node.
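As a concrete, if simplified, example of what this protects against: the forecast lookups sketched earlier throw an exception when the network or the API is unreachable, and catching that failure is what lets the conversation carry on. The #error reference in the comments is conceptual; the rest is plain Kotlin.

import java.io.IOException

// Sketch: degrade gracefully when the weather service cannot be reached,
// instead of letting the whole session end with an unhandled error.
fun safeForecast(fetchForecast: () -> String): String =
    try {
        fetchForecast()
    } catch (e: IOException) {
        // In the dialogue, this situation triggers the #error Global Action,
        // which replies through its Speech node instead of exiting the session.
        "I'm sorry, I can't reach the weather service right now."
    }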

Another useful Global Action is the action #silence, used when the user does not respond. It prompts the bot to repeat its last question or to react with a particular response that we write in the Speech node following the #silence node. You can read more about this action in our article How to Make Your Voice Assistant React Properly to Silence?

Global Actions

After Global Action comes the Speech node which determines the bot’s response, such as “Uh no, I think I’m lost. What was it again?” after the action #silence.

Lastly, something that comes at the very beginning of the dialogue is the #intro Global Action. It enables the session to start either with the bot talking or with the User Input node that is connected directly to the Enter node.

Now we’re ready to go and the AI can start the conversation by saying “You can ask me about the weather forecast.”

Check the Weather Forecast Application here:

https://bot.flowstorm.ai/608fc1d69456da0e65b36eee

Did you find this article helpful? Please let us know in the comments.

Start creating your very own applications on the Promethist Platform; it’s a breeze!

Would you like to follow our journey? Follow us on Facebook, Twitter, YouTube, Instagram, and LinkedIn.

Check out the Promethist Platform for creating smart conversational AI applications and virtual personas.

Enjoyed the article? Click the 👏 below to recommend it to other interested readers!
