Validate AI Ideas Faster: Step-by-Step Guide to Building a No-Code Chatbot Agent with Retrieval Augmented Generation (RAG) on AWS

Gaurav Nukala · Published in The Deep Hub · Mar 26, 2024

For product managers and startup founders, one of the barriers to validating ideas is the need to write code to prove out a concept. With the advent of no-code tools, it is becoming increasingly simple to prototype and validate ideas quickly. Especially for applied AI applications, the competitive moat is the private data and how well the application leverages that data to solve jobs-to-be-done in a privacy-preserving fashion, not the underlying tooling and infrastructure.

This article is a continuation of my previous post on the architecture to build an LLM Chatbot with Retrieval Augmented Generation (RAG) on AWS.

For this demo, I have built a financial statement chatbot agent that one can interact with to retrieve information. In the examples I have embedded below, I have asked the agent the following questions about Google’s 2023 10-K:

  • Per the 10-K, what did the California jury rule on Epic Games?
  • Show me all the references to Generative AI in the 10-K.
  • How much revenue did Google do in 2023 compared to 2022?

The agent retrieved the relevant chunks from the 10-K and produced an answer. For example, for the question on the Epic Games ruling, the agent produced the following answer: “According to search result 4, in December 2023, a California jury delivered a verdict in Epic Games v. Google, finding that Google violated antitrust laws related to Google Play’s billing practices.”

This method is extensible to other use cases; for example, you could build an agent over customer service logs.

Before we jump into the instructions, below is the architecture we will leverage to build the agent.

Generative AI Architecture with Retrieval Augmented Generation (RAG) on AWS

Below are the steps:

  1. Prerequisites
  2. Uploading dataset to S3
  3. Creating Knowledge base
  4. Tokenizing the dataset
  5. Creating a Lambda function
  6. Launching a chatbot agent

Now, let’s look at how we can build a chatbot agent from scratch.

Prerequisites

If you are not an existing AWS user, first create an account.

AWS Sign-up Page (aws.amazon.com)

If you have created an IAM user, use the Identity and Access Management (IAM) service to configure the permissions below for the role.

AWS IAM Permissions
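If you prefer to script this step, the sketch below attaches an inline policy to a role with boto3. It is illustrative, not least-privilege: the action list covers the services used in this walkthrough (Bedrock, OpenSearch Serverless, Lambda, CloudFormation, and S3), and the role name, policy name, and bucket name are placeholders you would replace with your own.

```python
import json
import boto3

iam = boto3.client("iam")

# Broad, illustrative permissions for the services used in this walkthrough.
# Tighten these to your own security requirements before real use.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:*", "aoss:*", "lambda:*", "cloudformation:*"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            # Placeholder: use knowledgebase-<your-account-number>
            "Resource": [
                "arn:aws:s3:::knowledgebase-123456789012",
                "arn:aws:s3:::knowledgebase-123456789012/*",
            ],
        },
    ],
}

# "bedrock-kb-demo-role" is a placeholder for an existing IAM role in your account
iam.put_role_policy(
    RoleName="bedrock-kb-demo-role",
    PolicyName="bedrock-kb-demo-policy",
    PolicyDocument=json.dumps(policy_document),
)
```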

Uploading dataset to S3

Documents on S3
  1. On the Amazon S3 console, choose Buckets in the navigation pane.
  2. Choose Create bucket.
  3. Name the bucket knowledgebase-<your-account-number>.
  4. Leave all other bucket settings as default and choose Create.
  5. Navigate to the knowledgebase-<your-account-number> bucket.
  6. Choose Create folder and name it dataset.
  7. Leave all other folder settings as default and choose Create.
  8. Navigate back to the bucket home and choose Create folder to create a new folder and name it lambdalayer.
  9. Leave all other settings as default and choose Create.
  10. Navigate to the dataset folder and upload your source documents (for this demo, Google’s 2023 10-K).
  11. Navigate to the lambdalayer folder and upload the knowledgebase-lambdalayer.zip file, available under the /lambda/layer folder of the repository accompanying the AWS blog post listed in the references. You will use this Lambda layer code later to create the Lambda function.
AWS Knowledge Base
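If you would rather script the uploads than click through the console, here is a minimal boto3 sketch of the same steps. The region, bucket name, and local file paths are assumptions; substitute your own.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # assumed region
bucket = "knowledgebase-123456789012"             # knowledgebase-<your-account-number>

# Create the bucket (us-east-1 needs no LocationConstraint)
s3.create_bucket(Bucket=bucket)

# S3 has no real folders; zero-byte keys ending in "/" act as folder placeholders
s3.put_object(Bucket=bucket, Key="dataset/")
s3.put_object(Bucket=bucket, Key="lambdalayer/")

# Upload the source document(s) and the Lambda layer zip (local paths are placeholders)
s3.upload_file("goog-10-k-2023.pdf", bucket, "dataset/goog-10-k-2023.pdf")
s3.upload_file("lambda/layer/knowledgebase-lambdalayer.zip",
               bucket, "lambdalayer/knowledgebase-lambdalayer.zip")
```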

Creating a Knowledge Base & Tokenizing the dataset

In this step, head to the Amazon Bedrock service and create a knowledge base from the dataset you uploaded to the S3 bucket.

AWS Bedrock
  1. On the Amazon Bedrock console, under Orchestration in the navigation pane, choose Knowledge base.
AWS KnowledgeBase

2. Choose Create knowledge base.

3. In the Knowledge Base Details section, enter a name and optional description.

4. In the IAM permissions section, select Create and use a new service role and enter a name for the role.

5. Add tags as needed.

6. Choose Next.

7. Leave the data source name as the default name.

8. For S3 URI, choose Browse S3 and select the S3 bucket knowledgebase-<your-account-number>/dataset/. You need to point to the bucket and dataset folder you created in the previous steps.

9. In the Advanced settings section, leave the default values (if you want, you can change the default chunking strategy and specify the chunk size and overlap percentage).

10. Choose Next.

11. For the Embeddings model, select Titan Embeddings G1 - Text.

12. For the Vector database, you can either select Quick Create a new vector store or choose a vector store you have already created. Note that to bring your own vector store, it must be preconfigured before this step. Bedrock currently supports four vector engine types: the vector engine for Amazon OpenSearch Serverless, Amazon Aurora, Pinecone, and Redis Enterprise Cloud. For this post, we select Quick Create a new vector store, which by default creates a new OpenSearch Serverless vector store in your account.

13. Choose Next.

14. On the Review and Create page, review all the information, or choose Previous to modify any options.

15. Choose Create knowledge base.

16. When the knowledge base status is in the Ready state, note down the knowledge base ID. You will use it in the next steps to configure the Lambda function.

17. Now that the knowledge base is ready, we need to sync the dataset (the 10-K documents uploaded to S3) to it. In the Data Source section of the knowledge base details page, choose Sync to trigger the data ingestion process from the S3 bucket to the knowledge base.

Syncing KnowledgeBase
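The sync in step 17 can also be triggered programmatically. Below is a minimal boto3 sketch, assuming you have the knowledge base ID noted in step 16 and the same region as before; the data source created alongside the knowledge base is looked up rather than hard-coded.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")  # assumed region

kb_id = "XXXXXXXXXX"  # knowledge base ID noted in step 16

# Look up the data source that was created with the knowledge base
data_sources = bedrock_agent.list_data_sources(knowledgeBaseId=kb_id)
ds_id = data_sources["dataSourceSummaries"][0]["dataSourceId"]

# Trigger ingestion (the console "Sync" button does the same thing)
job = bedrock_agent.start_ingestion_job(knowledgeBaseId=kb_id, dataSourceId=ds_id)
print(job["ingestionJob"]["status"])
```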

Creating a Lambda function

In this step, head to the CloudFormation service to create a Lambda function.

AWS CloudFormation
AWS Lambda Function
  1. On the AWS CloudFormation service home page, choose Create stack to create a new stack.
  2. Select Template is ready for Prepare template.
  3. Select Upload the template file for Template source.
  4. Choose Choose File, navigate to your local copy of the repository accompanying the AWS blog post listed in the references, and choose the .yaml CloudFormation template from it.
  5. Choose Next.
  6. For the Stack name, enter a name.
  7. In the Parameters section, enter the knowledge base ID and S3 bucket name you noted down earlier.
  8. Choose Next.
  9. Leave all default options as is, choose Next, and choose Submit.
  10. Verify that the CloudFormation template ran successfully, and there are no errors.
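The actual Lambda code is packaged by the CloudFormation template and the layer you uploaded earlier, but conceptually the function only needs to pass the user’s question and the knowledge base ID to Bedrock. The sketch below is a simplified, hypothetical handler using the RetrieveAndGenerate API; the event shape, environment variable name, model ARN, and region are assumptions, and the real template may structure things differently.

```python
import os
import boto3

# bedrock-agent-runtime hosts the RetrieveAndGenerate API
bedrock_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # assumed region

KB_ID = os.environ["KNOWLEDGE_BASE_ID"]  # assumed to be passed in as a CloudFormation parameter
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"  # assumed model


def lambda_handler(event, context):
    # Hypothetical event shape: {"question": "..."}
    question = event["question"]

    response = bedrock_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )

    # Return the generated answer plus the retrieved chunks that back it
    return {
        "answer": response["output"]["text"],
        "citations": response.get("citations", []),
    }
```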

Launching a chatbot agent

In this step, head back to Amazon Bedrock and choose the ‘Knowledge base’ option under ‘Orchestration’. Select your knowledge base; a test chat interface appears on the right side of the page. Make sure the ‘Generate responses’ option is toggled on, then ask your questions.

AWS Test Chatbot
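If you want to inspect the raw chunks behind an answer (for example, the passages the agent found for the Generative AI question), you can call the Retrieve API directly instead of using the console toggle. A minimal sketch, assuming the same knowledge base ID and region as above:

```python
import boto3

bedrock_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # assumed region

resp = bedrock_runtime.retrieve(
    knowledgeBaseId="XXXXXXXXXX",  # your knowledge base ID
    retrievalQuery={"text": "Show me all the references to Generative AI in the 10-K"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

# Each result contains the chunk text, its relevance score, and the S3 source location
for result in resp["retrievalResults"]:
    print(round(result["score"], 3), result["location"]["s3Location"]["uri"])
    print(result["content"]["text"][:200], "...\n")
```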

Conclusion

This post provides a step-by-step guide to building a chatbot agent on AWS over your own dataset using Retrieval Augmented Generation (RAG). I have built a financial statement chatbot, but the same methodology is extensible to other use cases.

References:

  1. https://aws.amazon.com/
  2. https://aws.amazon.com/blogs/machine-learning/build-a-contextual-chatbot-application-using-knowledge-bases-for-amazon-bedrock/
  3. https://aws.amazon.com/blogs/aws/preview-enable-foundation-models-to-complete-tasks-with-agents-for-amazon-bedrock/

Send me a note at grnukala@gmail.com if you have any issues.
