Exploring Google Vertex AI Conversation — Dialogflow CX Generative AI features Data store & Generators

Paulina Moreno
Google Cloud - Community
7 min read · Feb 26, 2024


Vertex AI Conversation comprises generative conversational features built on Dialogflow and Vertex AI. These capabilities let you use large language models (LLMs) to parse and understand content, craft agent responses, and manage the flow of conversations. They can reduce the time required for agent design and improve the overall quality of your agents.

With Vertex AI Conversation, you can now combine a state machine approach with generative AI capabilities to create sophisticated agents with dynamic conversation design and automation.

In this blog post, we take a close look at the generative AI technology in Dialogflow CX. We will focus on Data stores and Generators.

Think of Data Stores as a huge library of information. When you ask a question, a virtual assistant (like a super-smart librarian) searches this library for the answer. Dialogflow CX has a special tool called “Data Store” that makes it easy to create conversations focused on information from your library.

We can integrate various sources into a Data store, including:

  • Website domain — Google offers to index your website domain, but you’ll need to go through a verification process to confirm ownership.
  • Unstructured data — Private organizational information in formats like HTML, PDF, and TXT files (note: PPTX and DOCX formats are available in Preview).
  • Structured data — BigQuery or Cloud Storage (Preview).

Keep in mind that there are limitations on file size and the number of files. To learn more about preparing your data, visit: https://cloud.google.com/generative-ai-app-builder/docs/prepare-data.

Dialogflow CX Generators

By using generators, you can summon a large language model (LLM) inline within Dialogflow CX with no external webhook. The Generator can be customized to perform a range of tasks, including summarization, extracting parameters, and manipulating data. Generators are sourced from Vertex AI, delivering dynamic agent behavior and real-time responses based on the prompts you provide.
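Conceptually, a generator is a prompt template whose placeholders are filled in at run time before the LLM is called. As a rough illustration (the function and placeholder names here are hypothetical, not the exact Dialogflow CX syntax), the substitution works like this:

```python
# Hypothetical sketch of how a generator prompt template gets resolved at
# run time. Dialogflow CX performs this step internally before invoking
# the Vertex AI model.
def resolve_prompt(template: str, params: dict) -> str:
    """Replace $placeholder tokens in the template with runtime values."""
    for name, value in params.items():
        template = template.replace(f"${name}", value)
    return template

prompt = resolve_prompt(
    "Summarize the following answer in one sentence: $text",
    {"text": "The Shiba Inu is an ancient Japanese hunting breed..."},
)
print(prompt)
```

The resolved prompt, not the raw template, is what the model actually sees, which is why the quality of the filled-in parameters matters as much as the wording of the template.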

The GCP public documentation offers a compilation of commonly used Generator recipes. While these recipes have proven to work well, it is important to use them cautiously: we are working with an LLM, and its output can vary.

To demonstrate text summarization, we’ll be using generators a bit later in the example. Let’s dive in!

Create a Chat Application

In the Google Cloud console, go to the Search and Conversation page.

GCP Search & Conversation page

When you visit Search and Conversation for the first time, you can expect to see the following page:

Search & Conversation welcome page

Read and agree to the Terms of Service, then click Continue and activate the API.

Next, select the “Chat” option.

Application Type

Add the name of your company (it doesn’t have to be an actual company) and the name of the agent.

Agent Configurations

The next setup step involves data. Our example uses the Cloud Storage option. I have put together a PDF that covers everything you should know about Shiba Inu dogs.

Note: You see, I’m a Shiba owner, and these fluffy drama queens have completely stolen my heart.

Data Source
GCS bucket to import

Next, we have additional data store configurations.

Data store name

Now you can see our new chat application under Search & Conversation | Apps.

Select your app, and the link will take you to the Default Start page in Dialogflow CX.

Dialogflow CX -Agent Start Page

On the Start page, navigate to the Data stores section to locate the name of the data store we previously created for our bot.

DialogFlow CX — Data stores configuration

Now we can start asking our agent questions. Click Test Agent to open the simulator, and leave the parameters (environment, flow, page) at their defaults.

Agent simulator

On the Simulator window, click the clipboard icon:

This will open a pop-up window with the original JSON response from the agent.

This JSON object has many details about the process steps and the provided response. We’ll focus on how the response is based on factual information.

Original response from Agent

Check the Responses field to view the full details of the generated responses and locate the data store citations. Referencing our previous question, the response is sourced from this location: "actionLink": "https://storage.cloud.google.com/shiba-xx/The History of the Shiba Inu Breed in Japan.pdf#page=2"


"Responses": [
{
"responseType": "HANDLER_PROMPT",
"source": "VIRTUAL_AGENT",
"payload": {
"richContent": [
[
{
"citations": [
{
"title": "",
"actionLink": "https://storage.cloud.google.com/shiba-xx/The History of the Shiba Inu Breed in Japan.pdf#page=2",
"subtitle": "Origin of the Name While “inu” means dog in Japanese, “shiba” means brushwood. It's possible they are named Shiba Inu after the terrain where they hunted or the color of their coat, which is the color of autumn brushwood."
},
{
"title": "",
"subtitle": "Shiba means brushwood in Japanese, so it's possible that the Shiba Inu was named for the terrain where it hunted. It's also possible that the name came from the Shiba's coat, which is the same color as the autumn brushwood.",
"actionLink": "https://storage.cloud.google.com/shiba-xx/Shiba inu Dog Breed information.pdf#page=2"
}
],
"type": "match_citations"
}
]
]
}
},
{
# ... (JSON continues)
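If you want to pull the citation links out of a response programmatically, a small helper is enough. This is a minimal sketch assuming a payload shaped exactly like the sample above (the field names "Responses", "payload", "richContent", "match_citations", and "actionLink" are taken from that sample):

```python
import json

def extract_citations(response_json: str) -> list:
    """Collect actionLink URLs from match_citations elements in an
    agent response payload shaped like the sample above."""
    data = json.loads(response_json)
    links = []
    for item in data.get("Responses", []):
        rich = item.get("payload", {}).get("richContent", [])
        for row in rich:
            for element in row:
                if element.get("type") == "match_citations":
                    for cite in element.get("citations", []):
                        if "actionLink" in cite:
                            links.append(cite["actionLink"])
    return links

sample = """{"Responses": [{"payload": {"richContent": [[{
  "type": "match_citations",
  "citations": [{"actionLink": "https://storage.cloud.google.com/shiba-xx/doc.pdf#page=2"}]
}]]}}]}"""
print(extract_citations(sample))
# → ['https://storage.cloud.google.com/shiba-xx/doc.pdf#page=2']
```

This is handy when you want to surface the source documents alongside the answer, as we do with the Discord bot below.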

Alright, let’s set up this virtual agent in a Discord channel. After all, a bit of spontaneity never hurts, right?

Create a Discord bot

I followed the steps in Dialogflow integrations | Discord. I made changes to the implementation code, and you can find them in my repository here: Dialogflow CX | Discord integration.

Discord ShibaBot

Each response includes a hyperlink to the source, allowing for easy reference to the specific page. The source link’s formatting could be improved, but it did the job. Some answers may be too long and detailed, which is fine for chats but not ideal for Interactive Voice Response (IVR) solutions that need quick and engaging conversations. The question is, how can we make the summary even shorter? This is the ideal time to rely on generators.
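One way to tidy up the source-link formatting is a small helper on the bot side. This is a hypothetical sketch, not the code from the repository; it assumes Discord's 2,000-character message limit and markdown-style links:

```python
def format_discord_reply(answer: str, source_link: str = "", limit: int = 2000) -> str:
    """Format an agent answer for Discord: append the citation as a
    markdown link and truncate to the message length limit."""
    msg = answer
    if source_link:
        msg += f"\n[Source]({source_link})"
    if len(msg) > limit:
        msg = msg[: limit - 1] + "…"
    return msg

reply = format_discord_reply(
    "Shiba means brushwood in Japanese.",
    "https://storage.cloud.google.com/shiba-xx/doc.pdf#page=2",
)
```

A helper like this keeps the citation readable without cluttering the answer, and the truncation guard prevents the bot from failing on unusually long responses.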

Adding a Generator

We will use a Generator to summarize our response even more. Follow the instructions on Generators | Dialogflow CX to define a generator. This is what our Generator for this example looks like:

Generators page

I’m looking for a deterministic answer, so the temperature and top-k values are on the low end, but they can be adjusted as needed.
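To build intuition for why low values make the output more deterministic, here is an illustrative sketch of what temperature and top-k mean during token sampling. This is a conceptual toy, not Vertex AI internals; the logit values and seed are made up:

```python
import math
import random

def sample_token(logits: dict, temperature: float = 0.2, top_k: int = 2,
                 seed: int = 0) -> str:
    """Toy token sampler: keep the top-k candidates, sharpen their
    probabilities with temperature, then draw one at random."""
    rng = random.Random(seed)
    # top-k: keep only the k highest-scoring candidates.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Softmax with temperature: lower temperature sharpens the distribution,
    # so the best candidate dominates.
    weights = [math.exp(score / temperature) for _, score in top]
    r = rng.random() * sum(weights)
    for (token, _), w in zip(top, weights):
        r -= w
        if r <= 0:
            return token
    return top[-1][0]

print(sample_token({"dog": 5.0, "cat": 1.0, "fish": 0.5}))  # → "dog"
```

With temperature 0.2 and top-k 2, the highest-scoring token wins almost every time; raising either value would let lower-ranked tokens through more often, giving more varied (and less predictable) responses.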

We need to integrate our Generator into our flow. First, add a parameter to our Data store that maps to $request.knowledge.answer[0], which at run time contains the top answer to the user’s question.

Data stores — Setting parameters

Remove the default Fulfillment responses.

Data stores — Agent responses

This parameter becomes a session parameter, meaning you can reference it throughout the current session.

I made a new page named Summarization Response.

Default Start Flow

The next step is to incorporate the generator’s response into the entry fulfillment of our new summarization page.

Summarization Response page

We can now refer to $session.params.response, since the obtained response has been stored in the Dialogflow session.

DialogFlow CX- Fulfillment page

Let’s revisit the question that prompted a detailed response and see if our bot can now summarize it.

Discord bot

Thanks to generators, our responses are now short and sweet — just like magic!

We’ve finished checking out the cool generative AI features of Vertex AI Conversation. If you’re ready to explore creating chatbots and virtual agents that feel more human and less transactional, the time to experiment with Vertex AI Conversation is now. What kind of innovative conversational experiences could you build with this powerful technology?

Thanks a ton for tuning in! I genuinely hope you found something here that sparked a bit of joy or insight. Keep being awesome, and until next time, happy reading!
