Getting started with Watsonx Assistant VI

Nathalia Trazzi
7 min read · May 24, 2024


It’s been a while since I wrote the last Getting started with Watsonx Assistant.

I was planning to show how to do RAG (Retrieval-Augmented Generation) with Watsonx Assistant, but that will be for another time. IBM has recently introduced, and continues to introduce, new features for the Assistant, so today I will highlight some of them.

That doesn’t invalidate the previous articles; the way to build actions, steps and other things remains the same.

To continue with this article, it’s not necessary for you to be an expert in Watsonx Assistant, but I recommend having at least an intermediate understanding.

My Watsonx Assistant Series: https://medium.com/@nathalia.trazzi/list/watsonx-assistant-series-english-97cbc18f5b76

My Watsonx Assistant Classic Series: https://medium.com/@nathalia.trazzi/list/watsonx-assistant-classic-series-english-4c0b1e654142

Getting started with Watsonx Assistant’s new features

- Customization of Assistant’s UI Design.

The customization of the Assistant’s UI has changed. It is now possible to choose between a dark and a light theme for the user experience.

Watsonx Assistant offers this built-in way to customize the UI, so if you don’t want to spend much time building an attractive interface for your users, it is perfect: you can get your Assistant up and running on your site without much front-end work afterward.

Using the Watsonx Assistant interface and embedding it in your website is very simple. Visit my web page integration GitHub repository to learn more: https://github.com/miucciaknows/WxAssistant-WebPageIntegration-Typescript
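To give an idea of what that integration involves, below is a minimal TypeScript sketch of the embed snippet that the web chat integration page generates. The integrationID, region, and serviceInstanceID values are placeholders; copy the real ones from your own instance:

```typescript
// Minimal web chat embed sketch. The IDs below are placeholders:
// copy the real values from Integrations > Web chat in your instance.
interface WebChatInstance {
  render(): Promise<void>;
}

declare global {
  interface Window {
    watsonAssistantChatOptions: {
      integrationID: string;
      region: string;
      serviceInstanceID: string;
      clientVersion?: string;
      onLoad: (instance: WebChatInstance) => Promise<void>;
    };
  }
}

window.watsonAssistantChatOptions = {
  integrationID: "YOUR_INTEGRATION_ID",          // placeholder
  region: "us-south",                            // your service region
  serviceInstanceID: "YOUR_SERVICE_INSTANCE_ID", // placeholder
  onLoad: async (instance) => {
    await instance.render(); // draws the chat launcher on the page
  },
};

// Load the web chat script itself, as in the snippet IBM generates.
const script = document.createElement("script");
script.src =
  "https://web-chat.global.assistant.watson.appdomain.cloud/versions/" +
  (window.watsonAssistantChatOptions.clientVersion || "latest") +
  "/WatsonAssistantChatEntry.js";
document.head.appendChild(script);

export {};
```

With this in place, the chat launcher appears on the page and picks up whatever theme you configured in the tooling, so no extra front-end work is required.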

- Generative AI features are available to everyone in Watsonx Assistant

As of May 3, IBM has made generative AI features in Watsonx Assistant available to the public, powered by Granite (IBM’s own family of foundation models).

To activate this feature, follow the steps below:

  1. On the homepage of Watsonx Assistant, or whichever page you are currently on, access the left-side menu and click on Actions.

  2. On the current page, click on the gear icon to navigate to the global settings. Your page should look like the following image:

  3. Click on the Generative AI tab.

  4. On the Generative AI page, note that there is a switch that is set to Off.

  5. Enable this feature as shown in the following image, and then click Save for the change to take effect.

  6. Then return to the Actions page.

But first, let’s take a brief pause to understand how intelligent information gathering works.

How does generative artificial intelligence work in Watsonx Assistant?

This new feature implemented in Watsonx Assistant brings a new experience for the assistant’s end users.

Gather accurate information: your end user may not have the patience to provide their name, ID, and email one at a time. They may provide everything at once, and Watsonx Assistant will use generative AI to gather only the relevant information from the user’s input.

Complete multiple steps at once: instead of having to fulfill multiple steps of an action, or multiple actions, one by one, a single input from the user can resolve everything at once.
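To make the idea concrete, here is a minimal sketch using the Watson Assistant v2 Node SDK (the ibm-watson package), where a single utterance carries both the name and the email. The API key, service URL, and assistant ID are placeholders, and the action behind it is assumed to be the one built later in this article:

```typescript
import AssistantV2 from "ibm-watson/assistant/v2";
import { IamAuthenticator } from "ibm-watson/auth";

// Placeholders: use your own credentials and assistant (environment) ID.
const assistant = new AssistantV2({
  version: "2021-11-27",
  authenticator: new IamAuthenticator({ apikey: "YOUR_APIKEY" }),
  serviceUrl: "https://api.us-south.assistant.watson.cloud.ibm.com",
});

async function demo(): Promise<void> {
  const assistantId = "YOUR_ASSISTANT_ID";
  const { result: session } = await assistant.createSession({ assistantId });

  // One message with everything at once; with generative AI enabled,
  // the assistant extracts the name and the email and fills both
  // steps in a single turn instead of asking for each one separately.
  const { result } = await assistant.message({
    assistantId,
    sessionId: session.session_id,
    input: {
      message_type: "text",
      text: "Hi, my name is Anna and my e-mail is anna@example.com",
    },
  });

  console.log(JSON.stringify(result.output.generic, null, 2));
  await assistant.deleteSession({ assistantId, sessionId: session.session_id });
}

demo().catch(console.error);
```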

Return to the Actions page to proceed with the next steps and click on New action +.

Also note that the Assistant’s experience has changed slightly here.

Create an Action Page — Quick start with templates

Everything remains the same; you can build actions from scratch or start with pre-built templates.

Some of these templates already existed previously, and many helped in integrating other products into Watsonx Assistant, such as NeuralSeek (https://neuralseek.com/).

Templates for building a predefined Assistant with a specific theme were already common in the classic version of Watsonx Assistant when using dialogs (https://medium.com/@nathalia.trazzi/list/watsonx-assistant-classic-series-english-4c0b1e654142). But in this new experience, good examples have been included to help the chatbot curator work faster and to assist non-technical beginners who want to create virtual assistants.

There are many templates to explore, including an integrated extension for Google to conduct a search.

If these templates aren’t enough for you, you can suggest that IBM create one by clicking “Suggest a template” at the bottom of the page.

In this article, none of these templates will be tested. This is just a new feature worth mentioning.

Click on the “x” icon on the page to return to the action creation page, then choose the Start from scratch option.

Let’s continue by creating an action from scratch.

The focus of this article is not to teach how to create actions or discuss what they are, nor how to use custom extensions or create a context variable.

To understand these concepts, please refer back to the earlier articles in this series if you haven’t already: https://medium.com/@nathalia.trazzi/list/watsonx-assistant-series-english-97cbc18f5b76

When creating your action, start by adding at least 5 examples: phrases that will trigger your Assistant to begin this conversation with your user.

You can follow the examples below or simply create them with the context you desire.

Next, as the first step of the action, in the Assistant says box, place a message where the Assistant asks for the user’s name.

You can use any example you prefer.

Then, in the following section, set the type of response the user will provide to free text.

Now, click on the icon indicated by the second arrow in the image above; you should provide examples for the Assistant.

Customer says: These are examples of what the user will type.

Variable value: from the phrase “my name is Anna”, the important value is Anna.

Provide 3 examples to help the model identify what is important. After all, the model is not a mind reader; it needs examples to produce the best response, since everything is based on statistics and prediction.
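For instance, a set like the one below would work. Only the first pair comes from this article’s example; the other two are hypothetical variations:

  • Customer says: “my name is Anna” → Variable value: Anna
  • Customer says: “I’m John Smith” → Variable value: John Smith
  • Customer says: “people call me Maria” → Variable value: Maria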

Click on Apply to save it.

Next, create another step for this action.

For this example, another step was created asking for the user’s email, allowing them to send free text, and then some examples were provided.

Add examples of how the user can start that sentence and what from that sentence is relevant to extract.

Click on Apply

Click on the Preview button, and the result should look like the following image:

Watsonx Assistant captured valuable user information, such as the name and email.

The action ended because only the two steps shown so far exist.

But let’s continue…

Now, create a third step to display the data that was captured.

For this third step, it’s important to add conditions for this step to occur. In this example, two “rules” were defined for it to happen.

  • The step only occurs if there is data from the first step, where the user entered their name.
  • The step also only occurs if the email from the second step is defined.

Just below in the Assistant says box, a message was defined where the user will receive the information they entered.

  • > Note that to reference a variable (action variables, assistant variables, and integration variables alike), just insert a $ followed by the name of the variable. Since step variables are displayed as the step number followed by the step’s question, the message looks like this:

I see… I have your name and email right here:
name: $1. Hello! To continue I need to ask: What's your name?
email: $2. What's your e-mail?
Do you confirm this information?
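Assuming the user answered with the name from the earlier example (Anna) and a hypothetical email such as anna@example.com, the message above would render as:

I see… I have your name and email right here:
name: Anna
email: anna@example.com
Do you confirm this information?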

Upon clicking the Preview button again, you should see the following result, as shown in the image below:

If the user had followed the flow step by step, guided by the assistant, the result would have been the same:

But the experience is different for the user. If they have already used your virtual Assistant and are familiar with the steps, they already know they have to provide their name and email. This also makes life easier for whoever curates the chatbot, because it is one less concern. Regardless of how the user provides the requested data, Watsonx Assistant will always extract what really matters.

That’s all for now. See you in the next one…
