Programming
My VS Code Setup To Prototype Rasa Chatbots
Some VS Code Features To Streamline The Development Workflow

Introduction
In this article, I will share my workflow for creating conversational AI agents using the Rasa framework. It uses VS Code and Docker to automate repetitive tasks and to develop in an OS-agnostic way.
The code to reproduce the results can be found here.
I assume the reader has a basic familiarity with Rasa and Docker.
Prerequisites
You will need to have the following software installed to follow along:
- Visual Studio Code (with the Docker and Remote - Containers extensions)
- Docker and Docker Compose
The Setup
Overview
Prototyping a chatbot on the Rasa framework revolves around 3 distinct tasks:
- Writing (and debugging) custom actions
- Training a model to perform NLU and dialog management
- Chatting with the bot to verify the correctness
The goal of this workflow is to make it easy for developers to alternate between these 3 tasks. This will involve setting up the following:
- Creating an action server
- Defining the required services
- Creating a new Rasa project
- Setting up the debugging infrastructure for custom actions.
Step 1: Creating An Action Server
First, we will need to define an environment that our action server will live in. This environment will contain code that runs the action server and any dependencies that our custom action will need.
For example, we can write the following in a file named `dev.Dockerfile`:
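The original figure is not reproduced here, but based on the description that follows, a minimal sketch of `dev.Dockerfile` might look like this (the base image tag is illustrative):

```dockerfile
# Sketch of dev.Dockerfile; the base image tag is illustrative
FROM rasa/rasa-sdk:3.6.2

# Install dependencies needed by our custom actions
USER root
RUN pip install --no-cache-dir fuzzywuzzy
USER 1001
```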

Figure 1 says we will use `rasa-sdk` as the base image for our action server and install the `fuzzywuzzy` library as a dependency (useful for dealing with typos).
Writing custom actions will involve a lot of coding and testing, so we should install a few libraries to make our lives easier and apply software engineering best practices. For example:
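The corresponding figure is missing; the gist is an extra layer in `dev.Dockerfile` along these lines (versions left unpinned for illustration):

```dockerfile
# Development-time tooling: formatting, debugging, and testing
RUN pip install --no-cache-dir black debugpy pytest
```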

The interested reader can look up what the `black`, `debugpy`, and `pytest` libraries do.
Step 2: Defining The Required Services
A functioning bot will, at a minimum, need a Rasa server to host the bot and an action server to run custom actions.
For the purpose of prototyping, we can define the following services in `docker-compose.yml`:
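The figure is not reproduced here; a sketch of what such a `docker-compose.yml` could contain follows (image tags and ports are assumptions):

```yaml
version: "3.7"
services:
  rasa-server:
    image: rasa/rasa:3.6.2-full   # tag is illustrative
    volumes:
      - ./:/app
    ports:
      - "5005:5005"
    # Keep the container alive without starting a bot,
    # so we can exec Rasa CLI commands inside it
    entrypoint: ["tail", "-f", "/dev/null"]
  action-server:
    build:
      context: .
      dockerfile: dev.Dockerfile
    volumes:
      - ./actions:/app/actions
    ports:
      - "5055:5055"   # action server
      - "5678:5678"   # debugpy
    # Start the action server under debugpy so a debugger can attach
    command: >
      python -m debugpy --listen 0.0.0.0:5678
      -m rasa_sdk --actions actions
```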

Figure 3 defines a service named `rasa-server` that does nothing by itself but has all the dependencies needed to host a bot. It also defines a service named `action-server` that starts an action server in debug mode.
Step 3: Creating A New Rasa Project
To create a new Rasa project, we first need to bring up the services we defined in the previous step.
We could open a terminal and execute `docker-compose up`, but a more convenient approach is to use VS Code’s command palette:

Then, in a terminal, we execute `docker-compose exec rasa-server rasa init` to create a new project.
Step 4: Setting Up The Debugging Infrastructure For Custom Actions
This step involves creating a launch configuration that attaches the VS Code debugger to the running action server.
To do so, simply create a file named `launch.json` under a directory named `.vscode` in the project’s root folder. The content of `launch.json` should look like this:
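The screenshot is missing; a plausible `launch.json` for attaching to the action server could look like the sketch below (the port must match the one debugpy listens on inside the container; 5678 is an assumption):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Rasa: Debug Action Server",
      "type": "python",
      "request": "attach",
      "connect": { "host": "localhost", "port": 5678 },
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}/actions",
          "remoteRoot": "/app/actions"
        }
      ]
    }
  ]
}
```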

Some Shortcuts
Based on Step 3 in the preceding section, it should be clear that you can use the Rasa CLI by executing `docker-compose exec rasa-server rasa <command>` in a terminal, where `<command>` is any command supported by the CLI.
But it gets tiresome to have to open a terminal and type such a long command, especially if we want to do it repeatedly, e.g., training the bot, chatting with the bot, validating the dataset, etc.
The solution to this problem is to create VS Code Tasks: add a `tasks.json` file in the `.vscode` directory containing the tasks that we plan to execute frequently.
For example:
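The figure is not shown; a `tasks.json` implementing the three tasks described below might read:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Rasa: Launch Shell",
      "type": "shell",
      "command": "docker-compose exec rasa-server rasa shell"
    },
    {
      "label": "Rasa: Launch Shell (debug)",
      "type": "shell",
      "command": "docker-compose exec rasa-server rasa shell --debug"
    },
    {
      "label": "Rasa: Train Bot",
      "type": "shell",
      "command": "docker-compose exec rasa-server rasa train"
    }
  ]
}
```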

Figure 5 defines 3 tasks, namely:
- `Rasa: Launch Shell`, which executes `rasa shell`
- `Rasa: Launch Shell (debug)`, which also launches the Rasa shell, but with the `--debug` flag set
- `Rasa: Train Bot`, which executes `rasa train`
Now we can leverage the command palette to launch any task defined in `tasks.json`.
For example, to train the bot, we first use the command palette to call the `Tasks: Run Task` command:

Then, we start typing a few characters from the name of the task we intend to execute, i.e., `Rasa: Train Bot`, to quickly narrow down the list of available tasks.

Sample Workflows
This section will describe the following workflows:
- Writing a custom action
- Debugging a custom action
Writing A Custom Action
The first step to write a custom action is to call the command palette to attach to the container containing the action-server
:


Once attached, open the `/app/actions` folder.
Finally, install the Python extension.
You now have the ability to write Python code with full support from VS Code, e.g., linting, auto-complete, running unit tests, etc.
Debugging A Custom Action
Let’s imagine that we have a buggy custom action, and we want to understand the state of the action in real-time to figure out the source of the bug.
For simplicity, we will use the default custom action that came with `rasa init`. We just need to uncomment it in the `actions.py` file:

We’ll need to make this action discoverable by the bot. This involves editing the following files:
- `endpoints.yml`
- `domain.yml`
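The corresponding screenshots are missing; the edits amount to something like the following sketch (the `action-server` hostname matches the docker-compose service name):

```yaml
# endpoints.yml: point the bot at the action server
action_endpoint:
  url: "http://action-server:5055/webhook"

# domain.yml: declare the custom action
actions:
  - action_hello_world
```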


We also need a story that will trigger the action. For simplicity, we’ll define a rule where this action is run when the user says goodbye to the bot (just edit one of the predefined rules in `rules.yml`):
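The figure is not reproduced; an edited rule in `rules.yml` might look like:

```yaml
rules:
  - rule: Run the custom action when the user says goodbye
    steps:
      - intent: goodbye
      - action: action_hello_world
```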

We are now ready to begin the debugging process.
First, launch the `Rasa: Debug Action Server` configuration we defined in Step 4. One way to do this is to use the command palette to launch the `Debug: Select and Start Debugging` command and select `Rasa: Debug Action Server`:


Next, insert a breakpoint in the custom action:

At this point, if you launch the Rasa shell and utter “bye”, the `action_hello_world` action will get executed, and VS Code will pause the execution at line 25:

We are now free to examine the environment (e.g., see the Call Stack and Variables panes) and execute arbitrary Python code using the Debug Console.
Cleaning Up
Remember to call `docker-compose down` from the command palette when you are done for the day:

Benefits
The two main benefits of this setup are:
- Machine independence — it is trivial to use VS Code to develop the code locally or on a remote machine (via SSH or attaching to a remote container). This comes in handy when you need to train the bot remotely, e.g., using cloud GPUs.
- Ease of deployment — the `docker-compose.yml` file can serve as a blueprint for your DevOps colleagues to figure out what it takes to deploy the bot on their custom infrastructure and integrate it with their CI/CD pipelines.
Conclusion
This article has described my workflow for prototyping a Rasa chatbot using VS Code and Docker containers. I hope you have found it useful.
Let me know in the comments if you found room for further automation.