Rasa Chatbot, Node Red and web interfacing at speed

Charles Copley
The Patient Experience Studio at Cedar
6 min read · Aug 11, 2020

Imagine a system that could automatically reach out to users to collect feedback. That is possible today, and this blog post will outline how to build it.

In this post I will take you through building a chatbot using Rasa Open Source. The post was inspired by a series of great blog posts on this topic, but adds a little of my own experience with the deployment.

Rasa provides a framework that is understandable and intuitive, avoiding black-box approaches and making it possible to get something up and running very quickly. It also provides intuitive tools that are accessible to non-technical people. This empowers everyone to improve the system incrementally, so you won't be stuck with a system that needs an engineer for every improvement.

Getting started

The first decision you need to make is where to host the chatbot. Many tutorials suggest running the chatbot locally and exposing it with ngrok but, on balance, I have preferred to use my own cloud host, which allows you to be security conscious from the outset. I have always used Vultr (mostly because the instances are reasonably priced) and did the same here. I got an Ubuntu 16.04 machine with 80GB of storage, 2 CPUs and 4GB of RAM for $20 a month, which is pretty reasonable.

Where do you want your machine to sit?
I used Ubuntu 16.04
I used the 80GB, 2CPU, 4096MB Memory Machine
Security is a very large and complex topic. At the very least, limit incoming connections to SSH and the ports that external services (e.g. Twilio) need to reach, including port 1880 for the Node-Red editor used later in this post.
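On Ubuntu, one way to enforce this is with ufw, the stock firewall front end. The rules below are a sketch; they assume you will expose the Node-Red editor on port 1880, as we do later in this post:

```shell
# Deny all inbound traffic by default, then open only what we need.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH      # keep your ssh session reachable
sudo ufw allow 1880/tcp     # Node-Red editor and HTTP endpoints
sudo ufw enable
sudo ufw status verbose     # confirm the rules took effect
```

If you later put the Rasa REST webhook behind Node-Red (as we do below), you do not need to open port 5005 to the world; only Node-Red's port has to be reachable externally.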

Once you have the machine up and running, you can connect to it with ssh (e.g. ssh root@155.138.207.212). Now you can set up the infrastructure.

The first piece is the chatbot itself; in this post we use Rasa Open Source. We also need a way for the chatbot to interact with the outside world. Usually this is quite painful, but Node-Red provides an intuitive architecture for handling the incoming and outgoing interactions with the chatbot.

A typical Node-Red Data flow

Node-Red allows you to handle incoming packets, manipulate those packets and pass them around as you need to. This makes it easy to provide access to your bot through different communication modes. For example, if you wanted to be able to text your bot and chat to it from a phone, then integrating your chatbot with something like Twilio (for text message interfacing) is really easy.

Setting up Docker, Node-Red and Rasa

For the purposes of reproducibility, Docker provides a great way to modularize services. For those who haven't really used Docker, I'd encourage you to read more, but this should get you up and running.

First, make sure that Docker is installed on the Vultr machine:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce python-minimal curl
sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.15.0/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose

Great! Now let's lay things out to make them easier to use. First we make a directory to hold the chatbot architecture.

root@chatbot:~# mkdir rasa-bot-docker
root@chatbot:~# cd rasa-bot-docker

Next we need to define the services we want to use in a docker-compose.yml file. This file determines how the services get launched, and is usually kept in its own folder (the rasa-bot-docker directory we just created).
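Given the volume mounts in the compose file below, the folder will end up laid out like this (Docker creates app_mount and node-red-data on first run if they don't already exist):

```
rasa-bot-docker/
├── docker-compose.yml
├── app_mount/        # mounted as /app in the rasa container
└── node-red-data/    # Node-Red's persistent /data directory
```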

docker-compose.yml

version: '3.0'
services:
  rasa:
    image: rasa/rasa:latest-full
    networks: ['rasa-network']
    user: root
    ports:
      - "5005:5005"
    volumes:
      - ./app_mount:/app
    command:
      - run
      - --enable-api
      - --endpoints
      - endpoints.yml
  action_server:
    image: rasa/rasa-sdk:latest
    networks: ['rasa-network']
    ports:
      - "5055:5055"
    volumes:
      - "./app_mount:/app/actions"
  duckling:
    image: rasa/duckling:latest
    networks: ['rasa-network']
    ports:
      - "8000:8000"
  node-red:
    image: nodered/node-red
    user: root
    ports:
      - 1880:1880
    volumes:
      - "./node-red-data:/data"
networks: {rasa-network: {}}
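One thing the compose file assumes but does not show is the endpoints.yml that the rasa service is told to load. A minimal sketch, placed in app_mount/ so it appears as /app/endpoints.yml inside the container, points the action endpoint at the action_server service by its compose name:

```yaml
# endpoints.yml (sketch): reach the action server via its
# docker-compose service name on the shared rasa-network.
action_endpoint:
  url: "http://action_server:5055/webhook"
```

Duckling, by contrast, is referenced from the NLU pipeline in config.yml rather than from endpoints.yml.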

You can start the whole system with

root@chatbot:~# docker-compose up

At this point you should be able to connect to your Node-Red instance through a web browser. Note down the IP address assigned to your machine on Vultr (e.g. 155.138.207.212) and navigate to 155.138.207.212:1880. (Remember, our firewall needs port 1880 open, so if you can't get in, check that the firewall is set up correctly.) If all is working, you'll see the Node-Red editor in your browser.

You can check that Node-Red is running by using the browser to navigate to the server_ip:1880

If you have gotten this far, great! The next step is to set up the Rasa server. First, make sure a simple model is initialized.

root@guest:~/rasa-bot-docker# docker run -it  --user root -v $(pwd)/app_mount:/app rasa/rasa:latest-full init
We initialize the bot architecture, and make sure we train it with the demo data.

Let's interact with the bot directly from the command line by sending a simple message of 'Hi' to its REST webhook using curl:

root@guest:~/rasa-bot-docker# docker-compose stop
root@guest:~/rasa-bot-docker# docker-compose up -d
root@guest:~/rasa-bot-docker# curl --request POST --url http://localhost:5005/webhooks/rest/webhook --header 'content-type: application/json' --data '{ "sender": "+14752339896", "message": "Hi"}' | python -mjson.tool
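If everything is running, the REST channel answers with a JSON list of bot messages. With the rasa init demo model the reply is roughly of this shape (the exact text depends on the model you trained):

```
[
    {
        "recipient_id": "+14752339896",
        "text": "Hey! How are you?"
    }
]
```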

Awesome! It seems to work. Next we need to integrate it with external services. This is where Node-Red comes into play. We're going to send data to a Node-Red endpoint on port 1880 (see below).

root@guest:~/rasa-bot-docker# curl --request POST --url http://localhost:1880/test_endpoint --header 'content-type: application/json' --data '{ "sender": "+14752339896", "message": "Hi"}' | python -mjson.tool

To handle this, we need to create a Node-Red flow. You can see it below: the node on the left is an http_in node, the green one is a debug node, and the one on the right is an http_response node.

Now when you send a message to the test_endpoint, it should get mirrored right back to you!
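For reference, a minimal version of this flow can be pasted into Node-Red's Import dialog. This export is a sketch (the node ids are arbitrary, and Node-Red regenerates them on import):

```
[
    {"id": "in1", "type": "http in", "url": "/test_endpoint", "method": "post", "wires": [["dbg1", "out1"]]},
    {"id": "dbg1", "type": "debug", "complete": "payload", "wires": []},
    {"id": "out1", "type": "http response", "wires": []}
]
```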

Ok! So this simulates another service (e.g. Twilio) sending us data on port 1880 (the Node-Red port). All that remains is to hook the http_in node up to the chatbot, and we should get a response. To do this we add an http_request node and point it at the chatbot.
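If the external service's payload doesn't already match Rasa's REST schema, a small function node between http_in and http_request can reshape it. The sketch below wraps the node body in a named function so it can be read standalone; inside Node-Red you would paste only the function body, since Node-Red supplies msg itself:

```javascript
// Sketch of a Node-RED "function" node body that maps an incoming
// message to the { sender, message } shape Rasa's REST webhook expects.
// In Node-RED, paste only the body: `msg` is provided by the runtime.
function mapToRasa(msg) {
    msg.payload = {
        sender: msg.payload.sender,   // e.g. a phone number
        message: msg.payload.message  // the user's text
    };
    return msg;
}
```

Keeping only the two fields Rasa cares about also strips any extra attributes the upstream service attaches to the payload.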

Boom! There is the chatbot response!

Thank you for reading this far! I hope this introduction to the Rasa infrastructure has been useful. In future posts I will:

  1. Discuss modifying the Rasa data used in training the model
  2. Demonstrate how we can integrate external services (e.g. Twilio for text messaging, Slack etc.) to the Rasa service using the Node-Red interface
  3. Extend Rasa with Rasa-X, and show how you can set this up for yourself.
