Mastering Language Models with AWS Bedrock and the LangChain 🦜️ Framework for Generative AI Applications, Part 1


In this series of blogs, we’ll learn how to create generative AI applications using the AWS Bedrock service and the LangChain framework.

Let’s start with an introduction to Bedrock and Langchain.

What is AWS Bedrock?

AWS Bedrock is a fully managed service offering multiple models from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API endpoint.

What is LangChain? 🦜️

LangChain is a framework for developing applications powered by language models. It is designed to simplify the creation of applications that use large language models.

Using the two together, we can create AI-powered chat applications for text generation, summarization, question answering, code generation, and more.

Requirements to develop this:
An AWS account, a computer connected to the internet, and an eagerness to learn.

Set up Bedrock access in AWS.

Log in to your AWS account, search for “Bedrock”, and select the service.

At the time of writing, Bedrock is available only in a handful of regions.

Before we go ahead with Bedrock, we need to enable model access in AWS.
In the left sidebar, you’ll see the Model access option. At the time of writing, there are 19 different text, embedding, and image models available in AWS Bedrock. I’m enabling all of them for now.

Go to Model access -> Manage model access -> Select all models -> Request model access.

The non-chat Llama 2 13B and 70B models are marked “Available on request”; we get access to the other 17 models right away.

Before using the LLMs in code, you can try these models out in the Playground section, which you can select from the left sidebar.

Chat -> Select Model -> Anthropic -> Claude 2.1 -> Apply.

You can play with any of the models; in this example, I have selected the Claude 2.1 model.

Now we’ll learn how to integrate AWS Bedrock models with LangChain.

Get Access to Bedrock Models in your Python Code.

We need access keys to use the AWS Bedrock models in our code via the boto3 library in Python.

Click your account name in the top-right corner -> Security credentials -> Access keys -> Create access key.

Download the .csv file to save your credentials.

Now we have our Access key and Secret access key.

ACCESS_KEY = "<YOUR_ACCESS_KEY>"                # never publish real keys
SECRET_ACCESS_KEY = "<YOUR_SECRET_ACCESS_KEY>"  # or commit them to source control
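Hardcoding credentials in source files is risky. A safer pattern, sketched below, reads them from environment variables; `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the standard AWS variable names, which boto3 can also pick up automatically if you skip passing keys explicitly.

```python
import os

# Read credentials from the environment instead of hardcoding them.
# Falls back to empty strings / a default region if the variables are unset.
ACCESS_KEY = os.environ.get("AWS_ACCESS_KEY_ID", "")
SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY", "")
AWS_REGION = os.environ.get("AWS_DEFAULT_REGION", "us-west-2")
```

Set the variables in your shell (e.g. `export AWS_ACCESS_KEY_ID=...`) before running the script.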

Integrate AWS Bedrock into Python & LangChain

First, we need to install some libraries.

pip install langchain boto3

Importing the libraries:

import os
import boto3

Defining the AWS access key, secret access key, and region:

ACCESS_KEY = "<YOUR_ACCESS_KEY>"                # never publish real keys
SECRET_ACCESS_KEY = "<YOUR_SECRET_ACCESS_KEY>"  # or commit them to source control
AWS_REGION = "us-west-2"

Below we create a class called BedrockLLM with two static methods: get_bedrock_client and get_bedrock_runtime_client.

Both functions connect to AWS using the boto3 SDK with the ACCESS_KEY, SECRET_ACCESS_KEY, and AWS_REGION variables. The 'bedrock' client handles control-plane operations such as listing models, while 'bedrock-runtime' is used to invoke models.

class BedrockLLM:

    @staticmethod
    def get_bedrock_client():
        """
        This function will return the bedrock client.
        """
        bedrock_client = boto3.client(
            'bedrock',
            region_name=AWS_REGION,
            aws_access_key_id=ACCESS_KEY,
            aws_secret_access_key=SECRET_ACCESS_KEY
        )
        return bedrock_client

    @staticmethod
    def get_bedrock_runtime_client():
        """
        This function will return the bedrock runtime client.
        """
        bedrock_runtime_client = boto3.client(
            'bedrock-runtime',
            region_name=AWS_REGION,
            aws_access_key_id=ACCESS_KEY,
            aws_secret_access_key=SECRET_ACCESS_KEY
        )
        return bedrock_runtime_client

Here we get a Bedrock client and list all the foundation models that we enabled earlier.

bedrock_client = BedrockLLM.get_bedrock_client()
FM_list = bedrock_client.list_foundation_models()
print(FM_list)

If you get a long JSON response, voilà, you have connected AWS Bedrock successfully to your Python code.
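The full JSON response is verbose. To sanity-check access, you might extract just the model IDs. The helper below is my own sketch; it assumes the documented response shape, where IDs live under `modelSummaries[*].modelId`:

```python
def list_model_ids(response: dict) -> list:
    """Pull just the model IDs out of a list_foundation_models() response."""
    return [m["modelId"] for m in response.get("modelSummaries", [])]

# Trimmed, illustrative sample of the response shape (not real output):
sample = {
    "modelSummaries": [
        {"modelId": "anthropic.claude-instant-v1"},
        {"modelId": "amazon.titan-text-express-v1"},
    ]
}
print(list_model_ids(sample))
# ['anthropic.claude-instant-v1', 'amazon.titan-text-express-v1']
```

With a live client, you would call `list_model_ids(bedrock_client.list_foundation_models())` instead.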

Now we’ll connect AWS Bedrock to the LangChain framework and ask the LLMs some questions.

Importing LangChain and the Bedrock LLM wrapper:

import langchain
from langchain.llms.bedrock import Bedrock

We’ll add a new function to the BedrockLLM class. We’ll be using Anthropic’s Claude Instant v1 model; we can switch to v2, v2.1, etc. just by changing the model ID.

class BedrockLLM:

    @staticmethod
    def get_bedrock_llm(
        model_id: str = "anthropic.claude-instant-v1",
        max_tokens_to_sample: int = 300,
        temperature: float = 0.0,
        top_k: int = 250,
        top_p: float = 1.0
    ):
        """
        This function takes sampling arguments and returns a Bedrock LLM.

        input args: model_id, maximum tokens to sample, temperature,
        top-k, and top-p values.

        output: the Bedrock LLM
        """
        params = {
            "max_tokens_to_sample": max_tokens_to_sample,
            "temperature": temperature,
            "top_k": top_k,
            "top_p": top_p
        }

        bedrock_llm = Bedrock(
            model_id=model_id,
            client=BedrockLLM.get_bedrock_runtime_client(),
            model_kwargs=params,
        )

        return bedrock_llm

Top P: A parameter that sets a probability threshold for word selection. It limits choices based on the cumulative probability of potential words, excluding less likely options.

Top K: A parameter specifying the maximum number of high-probability words to consider for the next sequence. It filters out less probable words beyond this limit.

Temperature: A parameter influencing the randomness of word selection in language models. Lower values make the model more deterministic and focused on high-probability words, while higher values introduce more randomness by considering lower-probability options.
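To build intuition for temperature, independent of Bedrock, here is a toy sketch of how dividing the logits by the temperature reshapes the next-token distribution. The function and numbers are illustrative, not Claude’s actual implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature and normalize.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy next-token scores
cold = softmax_with_temperature(logits, 0.2)  # nearly one-hot
hot = softmax_with_temperature(logits, 2.0)   # much flatter
print(round(max(cold), 3), round(max(hot), 3))
```

At temperature 0.2 the top token takes almost all of the probability mass; at 2.0 the mass spreads across the alternatives, which is why higher temperatures produce more varied text.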

llm = BedrockLLM.get_bedrock_llm()
result = llm.invoke(input="Human: Hello, how are you?")
print(result)
# "I'm doing well, thanks for asking!"

Here we invoked the Claude Instant v1 model and got the output
“I’m doing well, thanks for asking!”

Let’s ask our LLM another question.

result = llm.invoke(input="Human: explain me newton's 3rd law")
print(result)
# Newton's Third Law of Motion states that for every action, there is an equal and opposite reaction.

# In more details:

# - It is one of the fundamental laws of physics, originally formulated by Isaac Newton.

# - It explains the interactions between two objects that are in contact or that exert forces on each other.

# - The law states that the force one object exerts on a second object is equal in strength to the force that the second object exerts back on the first.

# - For example, when you push on a wall with your hand, the wall pushes back on your hand with an equal force in the opposite direction.

# - Another example is when a rocket engine fires, it exerts a forward force on the exhaust gases, and the exhaust gases exert an equal and opposite backward force on the rocket, causing it to accelerate forward.

# - The forces always come in pairs - there is no such thing as a single force that acts on an object.

LLMs are usually bad at arithmetic, so I was shocked to see the answer below: the model works through the multiplication step by step and gets it right. Great work by Anthropic.

result = llm.invoke(input="Human: what is 2 * 3904")
print(result)
# Okay, here are the step-by-step workings:
# 2 * 3904
# = 2 * (3000 + 900 + 4)
# = 2 * 3000 + 2 * 900 + 2 * 4
# = 6000 + 1800 + 8
# = 7808

# So, 2 * 3904 = 7808.
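We can double-check the model’s arithmetic with one line of plain Python:

```python
print(2 * 3904)  # 7808
```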

Here is the complete code:

import os
import boto3
import langchain
from langchain.llms.bedrock import Bedrock

# Use your own credentials here; never publish real keys
# or commit them to source control.
ACCESS_KEY = "<YOUR_ACCESS_KEY>"
SECRET_ACCESS_KEY = "<YOUR_SECRET_ACCESS_KEY>"
AWS_REGION = "us-west-2"


class BedrockLLM:

    @staticmethod
    def get_bedrock_client():
        """
        This function will return the bedrock client.
        """
        bedrock_client = boto3.client(
            'bedrock',
            region_name=AWS_REGION,
            aws_access_key_id=ACCESS_KEY,
            aws_secret_access_key=SECRET_ACCESS_KEY
        )
        return bedrock_client

    @staticmethod
    def get_bedrock_runtime_client():
        """
        This function will return the bedrock runtime client.
        """
        bedrock_runtime_client = boto3.client(
            'bedrock-runtime',
            region_name=AWS_REGION,
            aws_access_key_id=ACCESS_KEY,
            aws_secret_access_key=SECRET_ACCESS_KEY
        )
        return bedrock_runtime_client

    @staticmethod
    def get_bedrock_llm(
        model_id: str = "anthropic.claude-instant-v1",
        max_tokens_to_sample: int = 300,
        temperature: float = 0.0,
        top_k: int = 250,
        top_p: float = 1.0
    ):
        """
        This function takes sampling arguments and returns a Bedrock LLM.

        input args: model_id, maximum tokens to sample, temperature,
        top-k, and top-p values.

        output: the Bedrock LLM
        """
        params = {
            "max_tokens_to_sample": max_tokens_to_sample,
            "temperature": temperature,
            "top_k": top_k,
            "top_p": top_p
        }

        bedrock_llm = Bedrock(
            model_id=model_id,
            client=BedrockLLM.get_bedrock_runtime_client(),
            model_kwargs=params,
        )

        return bedrock_llm


bedrock_client = BedrockLLM.get_bedrock_client()
FM_list = bedrock_client.list_foundation_models()
print(FM_list)

llm = BedrockLLM.get_bedrock_llm()

result = llm.invoke(input="Human: Hello, how are you? AI: ")
print(result)

result = llm.invoke(input="Human: explain me newton's 3rd law AI: ")
print(result)

result = llm.invoke(input="Human: what is 2 * 3904")
print(result)

In Part 2, we’ll walk through how to chat with our documents (PDF, CSV, text, etc.) using Bedrock and LangChain.

You can follow me on LinkedIn 👨‍💻 to stay updated on the latest technology.

Thank You.
