Building a Generative AI Web App with AWS Bedrock: A Step-by-Step Guide

Nikita Usatenko
8 min read · Dec 13, 2023

Introduction

In this tutorial, we’re going to create a simple yet powerful Generative AI web application using AWS Bedrock. Our project? A Web Page Summarizer. This application allows users to input a URL, which it then processes to deliver a concise summary of the web page’s content in bullet points. Let’s dive into how we can build this using a robust tech stack.

Tech Stack Overview

Our project leverages the following technologies:

  • AWS Bedrock
  • AWS Lambda
  • AWS API Gateway
  • ReactJS / NextJS
  • NodeJS

Understanding AWS Bedrock: Features and Pricing

AWS Bedrock is a fully managed service that simplifies building generative AI applications. It offers high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. These are accessible via a single API, providing a rich set of features for secure, private, and responsible AI development. With serverless architecture, AWS Bedrock integrates seamlessly with familiar AWS services, eliminating the need for infrastructure management.

Read more: Build Generative AI Applications with Foundation Models — Amazon Bedrock — AWS

Example pricing for AI21 Labs models is listed on the Amazon Bedrock pricing page.

Read more: Build Generative AI Applications with Foundation Models — Amazon Bedrock Pricing — AWS

Setting Up AWS Bedrock

Step 1. Log in to the AWS console with an IAM user that has the necessary permissions (Manage AWS Resources — AWS Management Console — AWS).

Step 2. Choose your working region, noting that AWS Bedrock’s availability varies by region and model provider.

Step 3. Navigate to the AWS Bedrock service.

Step 4. Inside AWS Bedrock, navigate to “Model Access” and click “Manage model access”.

For this example, we will use the Jurassic model by AI21 Labs.

Step 5. Select the “Jurassic-2 Ultra” model and click “Request model access”. You should be granted access shortly.

You can find more details about this model if you go to the “Providers” section and select AI21 Labs.

Here you can see the supported use cases, an example API request, and a link to the Playground.

You can use the Playground to test the model with different parameters such as “Temperature” and “Top P”, which control the randomness (or creativity) of the model’s responses:

  • Temperature: Typically ranging from 0 to 1, this parameter controls the randomness of the model’s predictions; lower values make responses more focused and deterministic, while higher values make them more varied and creative.
  • Top P (nucleus sampling): This parameter controls generation based on the probability distribution of the next token. Instead of always picking from the top-k most likely tokens, the model samples from the smallest set of tokens whose cumulative probability exceeds the Top P value (which ranges from 0 to 1), as shown in the snippet below.
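
To make these concrete, here is roughly how the parameters appear in the Jurassic-2 request body that we will build later in the Lambda code (an illustrative fragment; the prompt value is a placeholder):

{
  "prompt": "<web page text plus summarization instructions>",
  "maxTokens": 1024,
  "temperature": 0.7,
  "topP": 1
}

These are the values we will use below; feel free to experiment with them in the Playground first.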

Configuring AWS Lambda

We will use AWS Lambda to create a function that will:

  • Accept input
  • Create a prompt for our model
  • Call AWS Bedrock
  • Process and return the response

Step 1. First, let’s navigate to the AWS Lambda service.

Step 2. Here we will create a new function.

Step 3. We will call this function summarize_web_page and use the latest available Node.js runtime.

Step 4. Under “General configuration” in the “Configuration” tab, let’s adjust our function’s “Timeout” to 1 minute to give our model more time to process the request.

Step 5. Then, under “Permissions”, we need to give our Lambda function permission to call the AWS Bedrock service.
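
How you grant this depends on how your execution role is managed. As a minimal sketch (an illustrative example rather than the exact policy used in this tutorial), you could attach an inline policy like the following to the Lambda execution role, scoping the Resource down to a specific model ARN if you prefer:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "*"
    }
  ]
}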

AWS Lambda Code

After we’ve completed the setup steps, let’s get to coding.

Step 1. In the index.mjs file you will see the default handler function. This will be our entry point. Let’s change it as follows:

export const handler = async (event) => {
  // API Gateway delivers the request body as a JSON string
  const body = JSON.parse(event.body);
  const result = await summarizeWebPage(body);

  return {
    statusCode: 200,
    body: JSON.stringify(result),
  };
};

Please note that we’ve omitted any validation and security steps for the sake of simplicity.
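
If you do want a basic safety net, here is a minimal sketch (illustrative only, not part of the tutorial’s code) that rejects requests whose url field is missing or is not an HTTP(S) URL before calling summarizeWebPage:

// Illustrative helper: accept only http(s) URLs
const isValidUrl = (value) => {
  try {
    const parsed = new URL(value);
    return parsed.protocol === "http:" || parsed.protocol === "https:";
  } catch {
    return false;
  }
};

// Example usage inside the handler:
// if (!body?.url || !isValidUrl(body.url)) {
//   return { statusCode: 400, body: JSON.stringify({ error: "Invalid url" }) };
// }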

Step 2. Next, let’s implement the summarizeWebPage function. We assume that our request body has the following structure:

{
  "url": "https://www.sednor.io/"
}

Let’s create the summarizeWebPage function and outline its body:

const summarizeWebPage = async (data) => {
  const text = await fetchWebPageText(data.url);
  const promptText = buildPromptText(text);
  const response = await bedrockQuery(promptText);

  return parseResponse(response);
};

  • fetchWebPageText – a function that fetches the provided URL and extracts its text content
  • buildPromptText – a function that builds the prompt for our model
  • bedrockQuery – a function that queries the AWS Bedrock service
  • parseResponse – a function that takes the model’s response and formats it

Step 3. Let’s look at the fetchWebPageText function:

import { load } from 'cheerio';
// ...
const fetchWebPageText = async (url) => {
  try {
    const response = await fetch(url);
    const data = await response.text();
    const $ = load(data);
    const body = $('body');
    // Drop script and style tags so only visible text remains
    body.find('script, style').remove();
    // Keep only the first 1,000 words to stay within the model's token limit
    return body.text().trim().split(' ').slice(0, 1000).join(' ');
  } catch (e) {
    console.error(e);
    return "";
  }
};

Here we are using the load function from the cheerio library to get the text contents of the web page. We then slice the text to the first 1,000 words to reduce its length and avoid hitting the token limit of our model.

Step 4. Next, let’s implement the buildPromptText function:

const buildPromptText = (text) => {
  return `"${text}"
Summarize the above web page content in 5 bullet points.
Start each bullet point with a *.
`;
};

Here we are asking the model to give us a summary in 5 bullet points and to start each bullet point with the “*” symbol so that it is easier to parse the response.

Step 5. Now let’s implement bedrockQuery:

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';
// ...
// Use a region where you have enabled access to the model
const bedrockClient = new BedrockRuntimeClient({ region: "us-west-2" });
// ...
const bedrockQuery = async (promptText) => {
  const modelId = 'ai21.j2-ultra-v1';
  const requestBody = {
    prompt: promptText,
    maxTokens: 1024,
    temperature: 0.7,
    topP: 1,
    stopSequences: [],
    countPenalty: { scale: 0 },
    presencePenalty: { scale: 0 },
    frequencyPenalty: { scale: 0 },
  };
  try {
    const params = {
      modelId: modelId,
      body: JSON.stringify(requestBody),
      accept: 'application/json',
      contentType: 'application/json',
    };

    const command = new InvokeModelCommand(params);
    const response = await bedrockClient.send(command);
    // The response body is returned as bytes, so decode it and parse the JSON
    const buffer = Buffer.from(response.body);
    const text = buffer.toString();
    const responseData = JSON.parse(text);

    return responseData.completions[0].data.text;
  } catch (error) {
    console.error(`Error: ${error}`);
    return "";
  }
};

Here we are using BedrockRuntimeClient and InvokeModelCommand from the @aws-sdk/client-bedrock-runtime library. We’ve set the temperature to 0.7 and topP to 1.
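
To make the final parsing line concrete, the decoded response body is JSON of roughly this shape (simplified and truncated for illustration; the only field we rely on is completions[0].data.text):

{
  "completions": [
    {
      "data": {
        "text": "* First bullet point\n* Second bullet point\n..."
      }
    }
  ]
}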

Step 6. Finally, let’s implement the parseResponse function:

const parseResponse = (responseText) => {
  return responseText
    .trim()
    .split("*")
    .map((item) => item.trim())
    .filter((item) => !!item);
};

Here we want to turn a list of bullet points into an array of strings.
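
For example (illustrative input and output), a response like the one below is split on the “*” markers, trimmed, and stripped of blank entries:

// Input:  "* Point one\n* Point two\n* Point three"
// Output: ["Point one", "Point two", "Point three"]
parseResponse("* Point one\n* Point two\n* Point three");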

Integrating AWS API Gateway

We’ll use the AWS API Gateway to make our Lambda function accessible via an HTTP API.

Step 1. Let’s navigate to the API Gateway service.

Step 2. Here we will create an HTTP API.

Step 3. Let’s set a name for our API and click “Review and Create”.

Step 4. After the API is created, we need to create a route.

Step 5. We will create a POST route with the /summarize-web-page path.

Step 6. Now we need to connect our route with our Lambda function.

Let’s set the “Integration type” to Lambda, select our Lambda function, and click “Create”.

Step 7. If we go to our API details, we will see the “Invoke URL” that we can use.

Now we should be able to use our API as follows:

POST https://1llvsgdb2g.execute-api.us-east-1.amazonaws.com/summarize-web-page

Body:
{
  "url": "https://www.sednor.io/"
}

You can test it in Postman before moving to the next steps.
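
Alternatively, here is a quick Node.js snippet you could run instead of Postman (Node 18+ ships with a built-in fetch; the URL below is the example invoke URL from above, so substitute your own):

// test-api.mjs (illustrative): run with `node test-api.mjs`
const response = await fetch(
  "https://1llvsgdb2g.execute-api.us-east-1.amazonaws.com/summarize-web-page",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: "https://www.sednor.io/" }),
  }
);

console.log(await response.json());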

Developing the User Interface with React

Now let’s build a simple user interface in React to call our API.

Step 1. We can create an application using NextJS create-next-app (Next.js by Vercel — The React Framework).

Step 2. In the main page.tsx file, we can add the following code:

"use client";

import { Box, Button, TextField } from "@mui/material";

export default function HomePage() {
const [url, setURL] = useState("");
const [summary, setSummary] = useState([]);

const handleSummarize = async () => {
// ...
};

const buildSummaryItem = (item) => {
return (
<Box>{item}</Box>
);
};

return (
<>
<TextField
value={url}
onChange={(e) => setURL(e.target.value)}
label="Webpage to summarize"
/>
<Button onClick={handleSummarize}>
Summarize
</Button>
{
summary.map((item) => buildSummaryItem(item))
}
</>
);
}

Note: here we are borrowing some components from the MUI library (MUI: The React component library you always wanted)

Step 3. Now let’s implement the handleSummarize function:

const SUMMARIZE_WEB_PAGE_URL = "https://1llvsgdb2g.execute-api.us-east-1.amazonaws.com/summarize-web-page";

const handleSummarize = async () => {
  const response = await fetch(SUMMARIZE_WEB_PAGE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url }),
  });
  const data = await response.json();
  setSummary(data);
};

Live Demo

After adding some styling, your application is ready to go!
