Getting started with Amazon Lex

Nathalia Trazzi
10 min read · Mar 9, 2024


My first contact with the cloud was with AWS, back in college a few years ago, and since then AWS products have gained new versions and features.

Amazon Lex

According to AWS (https://docs.aws.amazon.com/lexv2/latest/dg/what-is.html), Amazon Lex is a service for building conversational interfaces for applications using voice and text. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) so you can build highly engaging user experiences with lifelike, conversational interactions, and create new categories of products.

Amazon Lex V2 enables any developer to build conversational bots quickly. You can then add the conversational interfaces to bots on mobile devices, web applications, and chat platforms (for example, Facebook Messenger).
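To make that last point concrete, here is a minimal sketch of how an application could talk to a Lex V2 bot from Python using boto3. The bot ID, region, and utterance below are placeholders; use the values from your own bot once it is built (we will get there later in this article).

```python
import boto3

# Lex V2 has a separate runtime client for conversations ("lexv2-runtime").
lex_runtime = boto3.client("lexv2-runtime", region_name="us-east-1")

response = lex_runtime.recognize_text(
    botId="XXXXXXXXXX",          # placeholder: your bot ID
    botAliasId="TSTALIASID",     # the test alias the console creates by default
    localeId="en_US",
    sessionId="demo-session-1",  # any ID that groups one user's conversation
    text="I would like to order a pizza",
)

# Print every message the bot returned for this turn of the conversation.
for message in response.get("messages", []):
    print(message["content"])
```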

If you would like to learn about other products available on the market for creating virtual assistants, agents, or chatbots (however you prefer to call them) from another provider, check out my Watsonx Assistant series:

Watsonx Assistant: https://medium.com/@nathalia.trazzi/list/watsonx-assistant-series-english-97cbc18f5b76

Watsonx Assistant Classic: https://medium.com/@nathalia.trazzi/list/watsonx-assistant-classic-series-english-4c0b1e654142

The thing is, just like Watson Assistant has two versions (v1 or classic and v2), AWS Lex also has two versions. In this article, I will cover v2.

You must have an AWS account to use Lex. Creating an account is simple: just visit https://aws.amazon.com/

On the AWS homepage, you can use the search bar to look for Amazon Lex, as shown in the image below:

-> Or open the left-side menu, navigate to All services, locate Lex in the Artificial Intelligence category, and access the product.

When you enter Lex, you should see a page asking you to choose the region from which you will use the product. Choose the option that is closest to you for latency reasons, and then your page should look like this:

I have already created a bot for ordering pizzas, but to create one, simply click on ‘Create bot,’ as shown in the image below.

Creation Method

You can choose a creation method, ranging from describing the type of bot you want using generative AI (as the interface itself warns, you must have Amazon Bedrock configured to use it) to creating a bot from transcripts.

Bot Configuration

I selected Create a blank bot, and then chose a name and a description.

IAM permissions

You have options such as creating a new role with basic Amazon Lex permissions or using an existing IAM role.

Children’s Online Privacy Protection Act (COPPA)

Choose whether or not your bot is subject to COPPA (Children’s Online Privacy Protection Act).

Idle session timeout

Another interesting point is that you can change the session timeout, as well as tweak advanced product options, including adding tags.
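For reference, the same configuration (IAM role, COPPA choice, idle session timeout, and tags) can also be done through the Lex V2 model-building API. The sketch below assumes boto3 and uses a placeholder role ARN and illustrative values; it is not the exact setup of my bot.

```python
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")

response = lex_models.create_bot(
    botName="Rascal_Order",
    description="Bot for ordering pizzas",
    roleArn="arn:aws:iam::123456789012:role/lex-basic-role",  # placeholder IAM role
    dataPrivacy={"childDirected": False},    # the COPPA choice
    idleSessionTTLInSeconds=300,             # idle session timeout, in seconds
    botTags={"project": "getting-started"},  # optional tags
)

print(response["botId"], response["botStatus"])
```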

After configuring everything to your preference, simply click on Next.

Now your page should look like the following:

-> Rascal_Order is my chatbot’s name; yours should show the name you chose.

Now that you’ve created a virtual agent, there are several things you can do on its home page.

Bot Details

You can edit information such as the name and description of your chatbot.

Add languages

On this page, you can add a new language to your virtual agent by either duplicating what you already have for another language or adding a language from scratch.

You can start with an example to customize, use transcriptions, or use the descriptive bot builder with generative AI (again, you need to have Amazon Bedrock configured for this). If you do, just select the provider and the chosen LLM.

Create versions and aliases for deployment

In this section, you can manage versions and aliases for deploying your virtual agent.
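If you prefer to do this from code, the sketch below (with a placeholder bot ID, and assuming the en_US locale added later in this article) creates an immutable version from the DRAFT bot and points an alias at it.

```python
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")
BOT_ID = "XXXXXXXXXX"  # placeholder: your bot ID

# Snapshot the current DRAFT bot as a numbered, immutable version.
# (A real script may need to wait for the version to finish creating.)
version = lex_models.create_bot_version(
    botId=BOT_ID,
    botVersionLocaleSpecification={"en_US": {"sourceBotVersion": "DRAFT"}},
)

# Point an alias at that version so applications and channels can reference
# a stable name ("prod") while the underlying version changes over time.
alias = lex_models.create_bot_alias(
    botAliasName="prod",
    botVersion=version["botVersion"],
    botId=BOT_ID,
)

print(alias["botAliasId"])
```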

Analyze and improve your bot

Use metrics to track operations and see how your virtual agent is performing in user interactions. This is important for the continuous improvement of your product.

Tags

For managing tags on your virtual agent.

Resource-based policy

To edit policies for your resource.

In the left-side menu, we have shortcuts for Lex, many of which have just been described. Others will be mentioned a bit later.

Now, click on Languages in the left-side menu to set the language for your virtual agent/chatbot.

I selected the option Start with an Example.

But there are other available options such as:

  • Copying your existing chatbot to translate it into another language.
  • Adding a language from scratch.
  • Adding transcripts that can generate intents automatically.
  • Describing the type of chatbot you want so that generative AI can build it for you (you need to have Amazon Bedrock configured).

Example languages

Here are examples of pre-configured intents that Lex offers you.

Language details

Choose the desired language and, optionally, add a description.

Voice

Choose a voice option for the voice interaction with your virtual agent/chatbot. You can listen to a sample of how the voice sounds.

Confidence score threshold

In this section, you can choose the minimum confidence score value. The minimum value is 0.0, and the maximum is 1.0.

This threshold defines the minimum confidence Lex must have in an intent before responding to the user with it; requests that fall below it are routed to the fallback intent.
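The same "add language" choices (locale, voice, and confidence score threshold) map to a single API call. The sketch below only assumes equivalent values, with a placeholder bot ID; 0.40 matches the console's default threshold.

```python
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")

lex_models.create_bot_locale(
    botId="XXXXXXXXXX",   # placeholder: your bot ID
    botVersion="DRAFT",   # you always edit the DRAFT version
    localeId="en_US",
    description="English (US) locale for the pizza bot",
    nluIntentConfidenceThreshold=0.40,  # between 0.0 and 1.0
    voiceSettings={"voiceId": "Joanna", "engine": "neural"},  # voice interaction
)
```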

Finally, click on Add.

Now that you have a language selected, click on View Intents.

What are Intents?

  1. Something that is intended; an aim or purpose. synonym: intention.
  2. The state of mind necessary for an act to constitute a crime.

When we talk about intents in virtual assistants/chatbots, or whatever you prefer to call them, we are referring to user intentions.

If a user types or tells your virtual assistant that they want to order a pizza, their intention is to order a pizza. Intents are triggers, and these triggers will make your chatbot respond to your user in the way you want.

The actions the chatbot can take to respond to the user include text responses, sending information from a back-end integration, sending documents, images, and so on.

As mentioned earlier, Lex comes with some pre-built intents like BookCar, BookHotel, and FallbackIntent. The BookCar and BookHotel intents will not be mentioned or used in this article.

Instead, they will be deleted. You can do this by selecting the radio button next to each intent and then clicking on Delete.

Confirm your action in the pop-up window that appears in your interface.

Now, click on Add intent to add a custom intent.

Conversation flow

By expanding this section, you should see an example of how your conversation flow will look. This will be covered later on.

Intent Details

In this section, you can name your intent and provide a description for it.

The example I’m creating here is for placing a pizza order, and the name I’m using is OrderAPizza.

Contexts — optional

A context is a state variable that can be associated with an intent when you define an agent.

Sample Utterances

This is where you input the phrases you expect the user to say to trigger the intent and make the virtual agent respond.

In Watsonx Assistant, IBM recommends providing at least 5 examples; you can provide fewer, but 5 is the ideal minimum, and the more examples you have, the better. Thanks to natural language processing, these intent examples train your chatbot to identify the variations of phrases that, in the end, mean the same thing.

In Lex, I did not find any such guidance in the interface, but I believe that the more examples you provide, the better.

A few agent builders, though, ask you not to provide too many examples.

I gave Lex 6 examples to start.
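As a rough illustration (the utterances below are made up, not the exact six I used, and the bot ID is a placeholder), creating the same intent through the API could look like this:

```python
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")

lex_models.create_intent(
    intentName="OrderAPizza",
    description="Lets the user order a pizza",
    sampleUtterances=[
        {"utterance": "I want to order a pizza"},
        {"utterance": "I would like a pizza"},
        {"utterance": "Can I order a pizza"},
        {"utterance": "I want a pizza please"},
        {"utterance": "Order a pizza for me"},
        {"utterance": "I'd like to place a pizza order"},
    ],
    botId="XXXXXXXXXX",  # placeholder: your bot ID
    botVersion="DRAFT",
    localeId="en_US",
)
```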

Initial response

In this section, we will define the type of response the user will receive when typing phrases like the ones you defined in Sample Utterances.

In my example, I defined the following phrase below:

Alright. I can help you with that. What kind of pizza would you like to order?

In the following sections, we have:

Slots

Slots are like boxes that hold information, or if you are familiar with programming languages, slots are akin to variables.

According to AWS: “When you use a default value, you specify a source for a slot value to be filled for new intents when no slot is provided by the user’s input. This source can be previous dialog, request or session attributes, or a fixed value that you set at build-time.”

You can use the following as the source for your default values.

  • Previous dialog (contexts) — #context-name.parameter-name
  • Session attributes — [attribute-name]
  • Request attributes — <attribute-name>
  • Fixed value — Any value that doesn’t match the previous

In the AWS documentation’s car-reservation example, when the intent is recognized, the slot named “reservation-start-date” has its value set to one of the following.

If the “book-car-fulfilled” context is active, the value of the “startDate” parameter is used as the default value.

If the “book-car-fulfilled” context is not active, or if the “startDate” parameter is not set, the value of the “reservationStartDate” session attribute is used as the default value.

If neither of the first two default values are used, then the slot doesn’t have a default value and Amazon Lex will elicit a value as usual.

If a default value is used for the slot, the slot is not elicited even if it is required.
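Expressed through the API, that same reservation example might look like the sketch below. The ordering of defaultValueList is what matters: the context parameter is tried first, then the session attribute, and only if neither applies does Lex elicit the slot with the prompt. The bot and intent IDs are placeholders.

```python
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")

lex_models.create_slot(
    slotName="reservation-start-date",
    slotTypeId="AMAZON.Date",  # built-in date slot type
    valueElicitationSetting={
        "slotConstraint": "Required",
        "defaultValueSpecification": {
            "defaultValueList": [
                {"defaultValue": "#book-car-fulfilled.startDate"},  # previous dialog (context)
                {"defaultValue": "[reservationStartDate]"},         # session attribute
            ]
        },
        # The prompt is only used when no default value can be applied.
        "promptSpecification": {
            "messageGroups": [
                {"message": {"plainTextMessage": {"value": "What day do you want to start?"}}}
            ],
            "maxRetries": 2,
        },
    },
    botId="XXXXXXXXXX",     # placeholder: your bot ID
    botVersion="DRAFT",
    localeId="en_US",
    intentId="YYYYYYYYYY",  # placeholder: the intent this slot belongs to
)
```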

Confirmation

You can provide confirmation options for the user in this section.

Fulfillment

You can configure an integration with an AWS Lambda function that fulfills the intent.

Closing response

You choose the final response for this conversation dialogue flow.

Further down, you have the option to use a code hook, which lets you invoke custom logic (typically a Lambda function) at specific points in the conversation. It is essentially a special function that allows you to hook into requests made to your virtual agent.
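To give an idea of what such a function looks like, here is a minimal sketch of a Lambda handler for a Lex V2 fulfillment (or dialog) code hook. It simply closes the intent with a fixed message; a real handler would call your back end (place the order, query a database, and so on). The message text is only illustrative.

```python
def lambda_handler(event, context):
    # Lex V2 sends the current session state, including the recognized intent.
    intent = event["sessionState"]["intent"]
    intent["state"] = "Fulfilled"

    # Return a response in the Lex V2 format: close the dialog for this
    # intent and send one plain-text message back to the user.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": intent,
        },
        "messages": [
            {
                "contentType": "PlainText",
                "content": "Your pizza order has been placed!",
            }
        ],
    }
```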

You can click on Visual Builder to construct your agent using visual resources.

> It’s important that you click on the orange button on the right side of the screen, next to Save Intent, to save your progress.

The interface is quite intuitive, and creating a chatbot this way seems quite enjoyable. However, this topic will not be discussed in this article.

To return to the traditional editor, click on Editor.

> If you made any changes, it’s important to save before exiting.

Resuming the creation of intents…

Now that you’ve defined a name, a description, some examples to trigger the intent, and an interaction response that your chatbot will provide, click on Save Intent.

Now, click on Build, the button also located on the right side of the screen but above the Save Intent button.
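The build can also be started from code. The sketch below (placeholder bot ID, en_US locale assumed) kicks off the build and polls until the locale reports it is ready.

```python
import time
import boto3

lex_models = boto3.client("lexv2-models", region_name="us-east-1")
BOT_ID = "XXXXXXXXXX"  # placeholder: your bot ID

# Start building the DRAFT version of the en_US locale.
lex_models.build_bot_locale(botId=BOT_ID, botVersion="DRAFT", localeId="en_US")

# Poll until the build finishes (or fails).
while True:
    status = lex_models.describe_bot_locale(
        botId=BOT_ID, botVersion="DRAFT", localeId="en_US"
    )["botLocaleStatus"]
    print("Build status:", status)
    if status in ("Built", "ReadyExpressTesting", "Failed"):
        break
    time.sleep(10)
```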

It may take a few minutes for this progress to be ready. When everything is set, click on Test, and then your page should look like this:

Start with one of the samples you entered to initiate the dialogue.

And there you have it! You’ve built a first intent that will initiate a conversation flow with your virtual agent.

Now let’s go back to the conversation flow: navigate to the top of the page and click on Conversation Flow to expand the section.

In the conversation flow, you can visualize the skeleton of a conversation. Note that the initial request — called a sample utterance — is one of the samples to initiate a conversation, as defined earlier. The acknowledge intent — initial response — is what your agent will say.

This article covered

  • What Lex is
  • Some available features for use
  • Creating an intent and what we can do with them

To prevent this article from becoming too long, I will continue in the next one, where I’ll delve a bit deeper into slots.

See you there…

