Creating conversational AWS Lex Bot tests
There are several ways of testing Amazon AWS Lex Bots; however, they are either manual or require low-level use of the PostContent method. Here we will introduce a way of automating part of the process using Python.
In case you haven’t found it yet, Amazon Lex is a service for building conversational interfaces using voice and text. Powered by the same conversational engine as Alexa, AWS Lex is instrumental in creating conversational interfaces for mobile applications.
When you are using it as the backend of your mobile application, you need to test the Bot implementation carefully to avoid unpleasant surprises for your users.
As we mentioned before, you can test either manually or invoke the low-level PostContent method.
According to official AWS Lex documentation:
“You can test your Amazon Lex bot via the test window on the console. Any business logic implemented in AWS Lambda can be tested via this console as well. All supported browsers allow for testing text with your Amazon Lex bot; voice can be tested from a Chrome browser.”
You might already be familiar with this method, as it is a quick way of verifying your changes when you alter utterances or slot definitions in some of the Intents comprising your Bot.
While this is very helpful for these immediate verifications, it is not the right foundation to build your tests upon, mainly because you would need to copy and paste the utterances by hand.
So, let’s find a better way.
The PostContent method sends user input (text or speech) to Amazon Lex. Clients use this API to send text and audio requests to Amazon Lex at runtime. Amazon Lex interprets the user input using the machine learning model that it built for the Bot.
This solution is more encouraging than the previous one: even if the interaction is complicated and crafting the content of the messages tedious, it can indeed be automated.
We can use bash for this task, but we may also have to rely on some other tool, like jq, to parse the JSON responses.
You should create a function like this to send and receive data:

```shell
cmd="aws lex-runtime post-text --bot-name=$bot_name --bot-alias=$bot_alias --user-id=$user_id"
$cmd --input-text "$1"
```
While this is entirely possible, I think it would become unmanageable very soon, worsening as the complexity of the Bots increases.
Python to the rescue
I won’t talk about the advantages of Python here, as much has been written on that subject already.
Instead, we will concentrate on the Python features that make creating this solution easier.
One of these features is its ability to create classes dynamically. We will use this, combined with the AWS SDK for Python (a.k.a. Boto) and AWS Lex Models, to build our solution.
Boto is the AWS SDK for Python, which allows Python developers to write software that makes use of Amazon services like S3 and EC2. Boto provides an easy-to-use, object-oriented API as well as low-level direct service access.
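As an illustration of how the runtime side could be wrapped, a single conversational turn might look like the sketch below. The helper name `say` and the bot names in the usage comment are assumptions for this article, not part of any library:

```python
def say(client, bot_name, bot_alias, user_id, text):
    """Send one line of user input through the Lex runtime's post_text
    call and return the bot's reply message and dialog state."""
    response = client.post_text(
        botName=bot_name,
        botAlias=bot_alias,
        userId=user_id,
        inputText=text,
    )
    return response.get('message'), response.get('dialogState')

# Usage (requires AWS credentials and a published bot):
#   import boto3
#   runtime = boto3.client('lex-runtime')
#   say(runtime, 'OrderFlowers', '$LATEST', 'test-user',
#       'I would like to order flowers')
```

Passing the client in as a parameter keeps the helper easy to exercise with a stub in unit tests.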
AWS Lex Models is an interface for querying, creating, updating, and deleting conversational bots for new and existing client applications. Using it, we can find out the constituent elements of the Bot and use them to feed our dynamic class generation.
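That discovery step could be sketched like this, using the lex-models client’s get_bot and get_intent calls. The helper name `describe_bot` and its return shape are assumptions for illustration:

```python
def describe_bot(models_client, bot_name, version='$LATEST'):
    """Map each Intent of the bot to its list of Slot names, using the
    Lex Model Building Service (Boto's 'lex-models' client)."""
    bot = models_client.get_bot(name=bot_name, versionOrAlias=version)
    structure = {}
    for ref in bot.get('intents', []):
        intent = models_client.get_intent(
            name=ref['intentName'], version=ref['intentVersion'])
        structure[intent['name']] = [slot['name']
                                     for slot in intent.get('slots', [])]
    return structure

# Usage (requires AWS credentials):
#   import boto3
#   models = boto3.client('lex-models')
#   describe_bot(models, 'OrderFlowers')
```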
Combining these pieces, we can create a Python module that:
- queries and analyzes the specified Bot
- dynamically defines classes representing the Intents and Slots it finds
- declares a Conversation class as an aggregation of ConversationItems to describe the user interaction
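The dynamic class definition relies on Python’s three-argument type() built-in. A minimal, self-contained sketch of the idea (the names here are illustrative, not the actual lex-bot-tester classes):

```python
def make_result_class(intent_name, slot_names):
    """Build a class named after the Intent, with one attribute per Slot."""
    def __init__(self, **slots):
        for name in slot_names:
            setattr(self, name, slots.get(name))
    return type(intent_name + 'Result', (object,), {'__init__': __init__})

# Example: a class for a hypothetical OrderFlowers intent with two slots.
OrderFlowersResult = make_result_class('OrderFlowers',
                                       ['FlowerType', 'PickupDate'])
result = OrderFlowersResult(FlowerType='roses')
```

Any slot the conversation has not filled yet simply comes back as None, which is convenient when asserting on partial progress.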
If we take this typical Test Bot example and apply the concepts enumerated before, we can define a conversation to test the Bot that goes like this:
ConversationItem('I would like to order flowers',
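The actual signatures live in the lex-bot-tester repository; as a rough sketch of the aggregation described above, and assuming an expected dialog state as the second argument, Conversation and ConversationItem could look like:

```python
class ConversationItem:
    """One user utterance plus the result we expect the bot to produce."""
    def __init__(self, utterance, expected_result):
        self.utterance = utterance
        self.expected_result = expected_result

class Conversation:
    """An ordered aggregation of ConversationItems replayed against the bot."""
    def __init__(self, *items):
        self.items = list(items)
    def __iter__(self):
        return iter(self.items)

# Hypothetical two-turn conversation for a flower-ordering bot.
conversation = Conversation(
    ConversationItem('I would like to order flowers', 'ElicitSlot'),
    ConversationItem('roses', 'ElicitSlot'),
)
```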
Having explained that, you could take a look at the lex-bot-tester implementation on GitHub, but what’s better than a real-life example to understand how much the tests can be simplified?
This example uses two of the AWS Lex Blueprint Bots:
I hope this article helps you improve the way you are testing your AWS Lex Bots.
Next, we will cover some improvements to these examples, how to use speech in our tests, and other related concepts.
Read next story: Improving conversational AWS Lex Bot tests