Getting started with AWS Bedrock

Charlie Douglas
4 min read · Sep 28, 2023

AWS announced the general availability of their latest GenAI offering, Amazon Bedrock, yesterday, having previously run it as a private preview since its announcement in April.

Bedrock is a fully managed service that makes foundation models (FMs) from leading AI companies available through a single application programming interface (API).

Image taken from AWS marketing materials.

Keeping your data safe

Bedrock was built with security and privacy in mind, and your data is protected at all times. AWS promise that no customer data will be used to train the underlying foundation models, and all data is encrypted at rest and in transit.

As part of the launch announcement yesterday, AWS also unveiled further security and governance enhancements, announcing that Bedrock is HIPAA eligible and can be used in compliance with GDPR.

Available models

AWS have made available a number of foundation models for general use:

  • Anthropic – Claude v1.3
  • Anthropic – Claude v2
  • Anthropic – Claude Instant v1.2
  • Cohere – Command
  • AI21 Labs – Jurassic 2 Mid
  • AI21 Labs – Jurassic 2 Ultra
  • Stability AI – Stable Diffusion XL
  • Amazon Titan – Embeddings G1 – Text
  • Amazon Titan – Text G1 Express
  • Meta – Llama 2 (coming soon)

For a full description of each model’s capabilities, head over to the Base Models page within the console.

Getting started with Amazon Bedrock

Assuming you already have an AWS account, your first step is to open the dedicated Amazon Bedrock console page and navigate to the Model Access page, where you can select the models you want to use. Model enablement takes a few minutes after you confirm your selections.

Once your models are enabled, head over to the Chat Playground to have your first interactions with them. Select a model category and a model to get started.

Enter a sample prompt, hit Run, and await the results of your first Bedrock query. If you have streaming enabled, the results will appear in real time as they are generated.

Experiment with the various controls whilst running the same prompt to see how different settings affect your output. Temperature, Top P, and Top K control the randomness and diversity of the response, while the maximum length setting caps the number of tokens generated. Each base model exposes slightly different controls, but they all work in roughly the same way.
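These playground knobs map directly onto fields in the request body you will later send through the API. As a minimal local sketch (the helper name and default values here are illustrative, not Bedrock recommendations), a Claude-style request body with the sampling controls set might be assembled like this:

```python
import json

def build_claude_body(prompt, temperature=0.5, top_p=0.9, top_k=250, max_tokens=300):
    """Assemble a Claude request body with the sampling controls set.

    Defaults are illustrative placeholders, not tuned recommendations.
    """
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "temperature": temperature,          # randomness of sampling
        "top_p": top_p,                      # nucleus sampling cutoff
        "top_k": top_k,                      # sample only from the k likeliest tokens
        "max_tokens_to_sample": max_tokens,  # hard cap on response length
    })

body = build_claude_body("Tell me a joke.", temperature=1.0)
print(json.loads(body)["temperature"])
```

Raising the temperature towards 1.0 makes the sampling more adventurous; lowering it towards 0 makes the model pick the likeliest token almost every time.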

Interacting with the Bedrock API

Once you’ve got to grips with the Playground, you will soon want to move on to the Bedrock API, which allows you to integrate Bedrock services into your own applications. For the purposes of this guide, we will:

  • Use boto3 to connect to AWS services
  • Invoke the Bedrock API
  • List all Bedrock models available
  • Interact with Claude using the API

Before progressing, ensure your Python environment is set up with the latest version of boto3, the AWS SDK for Python — Bedrock support was only added in recent releases.

Making your first API call to Amazon Bedrock is incredibly straightforward. We will import boto3 and then call the list_foundation_models() function to get the latest list of models available to us.

import boto3

bedrock = boto3.client(
    service_name='bedrock',
    region_name='us-east-1'
)

bedrock.list_foundation_models()

Run your Python code and, if successful, you should see output listing all the available models, similar to the image below.
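The response returns its model summaries under the "modelSummaries" key. As a sketch of working with that shape offline, here is an abbreviated sample response and a small (hypothetical) helper to pull out just the model IDs, optionally filtered by provider:

```python
# Abbreviated sample of what list_foundation_models() returns;
# the real response includes more fields per summary.
sample_response = {
    "modelSummaries": [
        {"modelId": "anthropic.claude-v2", "providerName": "Anthropic"},
        {"modelId": "anthropic.claude-instant-v1", "providerName": "Anthropic"},
        {"modelId": "amazon.titan-embed-text-v1", "providerName": "Amazon"},
    ]
}

def model_ids(response, provider=None):
    """Extract model IDs, optionally filtering by provider name."""
    return [
        m["modelId"]
        for m in response.get("modelSummaries", [])
        if provider is None or m.get("providerName") == provider
    ]

print(model_ids(sample_response, provider="Anthropic"))
```

The modelId values are what you will pass to invoke_model in the next step.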

Moving on to running an inference to get a result back from Claude, we need to expand the code to use the service name ‘bedrock-runtime’ along with the ‘invoke_model’ command.

import boto3
import json

bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-east-1')

modelId = 'anthropic.claude-v2'
accept = 'application/json'
contentType = 'application/json'
body = json.dumps({
    "prompt": "\n\nHuman: This is a test prompt.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.1,
    "top_p": 0.9,
})

response = bedrock.invoke_model(body=body, modelId=modelId, accept=accept, contentType=contentType)

response_body = json.loads(response.get('body').read())

print(response_body.get('completion'))

Here we have expanded the code to tell Bedrock that we want to use Claude v2 and receive a response in JSON format. Claude requires its prompt in the format ‘\n\nHuman: <prompt>\n\nAssistant:’ and will not accept anything else. You can play around with max_tokens_to_sample, temperature, and top_p to vary the results.

My initial results returned as:

Hello! I’m Claude, an AI assistant created by Anthropic. I don’t actually have experiences or preferences, since I’m an AI without subjective experiences. I’m happy to chat, but I don’t have personal stories or opinions to share.
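If you liked the streaming behaviour in the playground, the API equivalent is invoke_model_with_response_stream, which hands back the completion in chunks as they are generated. A sketch, with the chunk decoding factored into a small (hypothetical) helper so it can be checked without AWS credentials:

```python
import json

# Each streamed event carries a "chunk" whose "bytes" field holds JSON;
# for Claude the decoded payload keeps the partial text under "completion".
def decode_chunk(event):
    chunk = event.get("chunk")
    if chunk is None:
        return ""
    return json.loads(chunk["bytes"]).get("completion", "")

# Live usage (requires AWS credentials and Claude model access):
#
# import boto3
# bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-east-1')
# response = bedrock.invoke_model_with_response_stream(
#     body=json.dumps({
#         "prompt": "\n\nHuman: Write a haiku about clouds.\n\nAssistant:",
#         "max_tokens_to_sample": 300,
#     }),
#     modelId='anthropic.claude-v2',
# )
# for event in response['body']:
#     print(decode_chunk(event), end='', flush=True)

# Local check using a fabricated event in the same shape:
fake_event = {"chunk": {"bytes": json.dumps({"completion": "Hello"}).encode()}}
print(decode_chunk(fake_event))
```

Printing with end='' stitches the chunks back into one continuous response as they arrive.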

There you have it — how to make your first API call to the new Amazon Bedrock using Anthropic Claude. In further articles in this series I will explore how to include your own data as part of the requests to create a model that works specifically for your own use cases.


Charlie Douglas

Data analytics, data science, AI/ML professional within financial services.