Build an AI-Based Telegram Bot with Rasa

Check out this step-by-step guide to developing a Rasa-based Telegram chatbot run locally from a Docker container.

Ivan Kunyankin
6 min read · Sep 7, 2020

In this article, I’ll be sharing the details of developing a Rasa assistant with custom actions (abilities) run locally from a Docker container and integrated into a Telegram bot.

The best part is, you won’t need to install Rasa (which can be challenging) or even clone any repository. You’ll be able to reproduce the same results just by following this step-by-step guide.

Photo by Christian Wiediger on Unsplash

Here’s what you’ll learn in this article:

  1. What is Rasa
  2. How to run and test Rasa assistant locally from a container
  3. How to integrate it into a Telegram bot
  4. How to expand your bot’s capabilities with custom actions

1. What is Rasa?

As Rasa’s site states: “Rasa Open Source is a machine learning framework to automate text- and voice-based assistants.” So Rasa is a great tool for building smart assistants that you can integrate seamlessly into almost any platform.

The framework offers a full set of ready-made components along with a decent level of customization: you can plug in your own machine learning models, custom code, and external APIs.

2. How to run and test Rasa Assistant locally from a container

Rasa provides developers with detailed tutorials. But when I wanted to follow these tutorials, I faced several errors. That’s why I’m writing this article — to help those struggling with the same issues.

Step 1: Install Docker on your system

Free image from https://www.docker.com
Once Docker is installed, download the Rasa image and initialize your project by running the following command from your project's directory:
docker run -v $(pwd):/app rasa/rasa:1.10.8-full init --no-prompt

Executing the docker run command will download the corresponding Docker image if it isn't already on your system.

The command will initialize the project and generate the files needed for what we'll call a template assistant. This bot handles just a few intents, but you can already talk to it with a special tool, the Rasa shell.

To start it, run the following from the project's directory:

docker run -it -v $(pwd):/app rasa/rasa:1.10.8-full shell

Your bot doesn't know much yet, but it's a good starting point. I recommend walking through the official tutorials to get familiar with the different files your assistant needs.

Now let's make your bot available from the internet. This is handy if you want friends to test it without deploying it to a server. For this, we'll need Ngrok, a program that provides a secure public URL for your localhost server.

Step 2: Download and run Ngrok

Download the Ngrok archive from the Ngrok website, unzip it, and run:

./ngrok http 5005

This command will generate a public URL and attach it to port 5005. We’ll connect this port to the same port of our container later.

Step 3: Run the container with a special flag

Run the container with the --enable-api flag so that the assistant exposes its HTTP API and can receive requests:

docker run -p 5005:5005 -v $(pwd):/app rasa/rasa:1.10.8-full run --enable-api

If everything is done correctly, you can send POST requests from any computer to the URL Ngrok generated. You can use a tool like Postman for this. Make sure you send the request body in JSON format. For example:

{"sender": "1", "message": "Hey"}
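
For a quick programmatic check, here is a minimal Python sketch of the same request. It assumes the REST channel is enabled (the rest: entry in the generated credentials.yml) and uses a placeholder for the Ngrok URL:

import requests

# Placeholder: replace with the public URL that Ngrok printed for you
NGROK_URL = "https://your-subdomain.ngrok.io"

# Send a message to the assistant through the default REST channel webhook
response = requests.post(
    f"{NGROK_URL}/webhooks/rest/webhook",
    json={"sender": "1", "message": "Hey"},
)

# The response is a list of the bot's reply messages
print(response.json())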

3. How to make your own AI-based Telegram bot

Follow these steps to make your own AI-based Telegram bot.

BotFather. Image by author
  1. Find BotFather on Telegram, text it, and follow its instructions. By the end of the conversation, you should have a token and a bot (which isn't capable of much yet)
  2. Make slight changes to credentials.yml, the file that contains the details for connecting to other services
  3. Insert the following into the file (and don’t forget to fill in the fields with your info!)
telegram:
  access_token: "token received from BotFather"
  verify: "specified bot name"
  webhook_url: "https://url_from_Ngrok/webhooks/telegram/webhook"

The bot will only work once Ngrok is running, since we need to provide Rasa with the generated link.

4. Run the container. It’s important to specify the ports:

docker run -p 5005:5005 -v $(pwd):/app rasa/rasa:1.10.8-full run --enable-api

5. Chat with your assistant 😄

Functionality demonstration. Image by author

Great! Now you have a fully-functioning chatbot at your disposal. Right now, though, it can only cheer you up with a cute picture. But what if you want to make it more powerful? You can do that with custom actions.

4. How to expand your bot’s capabilities with custom actions

Actions are the things your bot runs in response to user input. Every bot can perform several types of actions. For example, it can reply with a predefined message or, more interestingly, execute your own code; the latter kind is called a custom action.

Your custom code runs as an endpoint on a separate server; Rasa sends it a request whenever a user's message is classified as the corresponding intent.

Rasa provides a convenient tool that helps you with this — Rasa SDK. Again, you don’t need to install anything.

Let’s teach your bot to send a joke to users to cheer them up.

Follow this step-by-step list:

  1. Create a new folder called “actions” inside your project’s directory and move actions.py there
  2. Create an empty file __init__.py inside the actions directory, as Rasa SDK expects a Python module
  3. Insert the following into the actions.py file:
import requests
import json
from rasa_sdk import Action


class ActionJoke(Action):
    def name(self):
        # The action name referenced in stories and domain.yml
        return "action_joke"

    def run(self, dispatcher, tracker, domain):
        # Call the joke API
        request = requests.get(
            'http://api.icndb.com/jokes/random'
        ).json()
        joke = request['value']['joke']  # extract the joke text
        dispatcher.utter_message(text=joke)  # send the message back to the user
        return []

4. Replace the utter_cheer_up actions in data/stories.md with your custom action action_joke so that your bot uses the new action (see the example below)
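
For reference, assuming the template project's default stories, one of the sad-path stories would look roughly like this after the change (exact story and intent names depend on the files rasa init generated for you):

## sad path 1
* greet
  - utter_greet
* mood_unhappy
  - action_joke
  - utter_did_that_help
* affirm
  - utter_happy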

5. Add a section for custom actions in domain.yml, including your new action:

actions:
  - action_joke

6. Retrain your model after updating your domain and stories:

docker run -v $(pwd):/app rasa/rasa:1.10.8-full train

7. Tell Rasa where the action server lives. Add this endpoint to your endpoints.yml, using the name you'll give the action server's container (action-server below):

action_endpoint:
  url: "http://action-server:5055/webhook"

Notice that the action server runs on a different port than the assistant itself.

8. Create a separate container for the action server. The simplest way to let multiple containers communicate is a Docker network:

docker network create rasa_bot_network

9. Run the action server first. It’s important to specify the ports:

docker run -d -p 5055:5055 -v $(pwd)/actions:/app/actions --net rasa_bot_network --name action-server rasa/rasa-sdk:2.0.0a1
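
To check that the action server came up, you can look at its logs using the container name from the command above:

docker logs action-server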

10. Run the assistant with the following command (don't forget to run Ngrok and put the generated link in credentials.yml, as before):

docker run -p 5005:5005 --net rasa_bot_network -v $(pwd):/app rasa/rasa:1.10.8-full run --enable-api
Functionality demonstration. Image by author

You'll notice in the screenshot that there are buttons for quickly choosing between options. These buttons are easy to implement; the Rasa tutorials cover them.

We've covered only a small part of what Rasa assistants can do. Check out the official tutorials to learn more.

As always, I hope this article was useful to you. Please let me know if you have any questions. You can also reach out to me via LinkedIn.
