AWS Bedrock Agent | Part 2

RAJIB DEB
2 min read · Dec 26, 2023


A recording is attached at the end of this blog that explains the content documented below. It is best to read the content first and then go through the recording to concretize your understanding of the concepts.

In part 1 of this series, we saw how to create an agent through the console and then interact with it. When an AWS Bedrock agent handles a request, the request goes through four stages:

  1. Pre-processing — This stage analyzes or pre-processes the user input before it is sent to the language model. A common task of this stage is to check for malicious, harmful, or jailbreak prompts; it uses the language model to categorize the user input as malicious or non-malicious.
  2. Orchestration — This is the stage where the language model generates the response based on the user input. We can apply the chain-of-thought prompting technique or provide examples to the language model in this stage.
  3. Knowledge base response generation — If we associate a knowledge base with the agent, this stage generates a response based on the retrieved knowledge.
  4. Post-processing — In this stage, the responses from the previous stages are formatted and prepared to create the final response for the user. One use of this post-processing step is to translate the response from the previous stages into a different language; in the recording attached with this blog, the response is translated to Bengali.

All of the above stages except the post-processing step are enabled by default for any agent that we create. The knowledge base response generation stage is used only if we associate a knowledge base with the agent.
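One way to see these stages reflected in the API is to list the prompt configurations attached to an agent. Below is a minimal sketch using the boto3 bedrock-agent client; the agent ID is a placeholder, and the response fields assume the get_agent response shape at the time of writing, so verify against the current API reference.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# "AGENT_ID" is a placeholder. Each stage shows up as a promptType:
# PRE_PROCESSING, ORCHESTRATION, KNOWLEDGE_BASE_RESPONSE_GENERATION,
# or POST_PROCESSING.
agent = bedrock_agent.get_agent(agentId="AGENT_ID")["agent"]
override = agent.get("promptOverrideConfiguration", {})
for cfg in override.get("promptConfigurations", []):
    print(cfg["promptType"], cfg["promptState"])
```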

Each of these stages also has a base prompt template and a default output parser. Both the prompt template and the output parser can be modified to fit a specific use case. The prompt template can take placeholder variables, which are populated by the service at run time.

A list of these placeholder variables can be found at the link below.

https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-placeholders.html
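As an illustration, here is a minimal sketch of what an overridden post-processing template might look like, tying back to the Bengali translation example above. The $question$ and $latest_response$ placeholder names follow the pattern in the linked documentation, but treat the exact set of variables available to each stage as an assumption to verify there.

```python
# Hypothetical overridden post-processing template. The $question$ and
# $latest_response$ placeholders are filled in by the service at run time;
# check the placeholder documentation for the variables your stage supports.
POST_PROCESSING_TEMPLATE = """
Human: You polish an agent's draft answer before it is shown to the user.

The user asked: $question$
The draft answer is: $latest_response$

Rewrite the draft answer in Bengali, keeping its meaning unchanged.

Assistant:
"""
```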

The AWS Bedrock agent service also allows us to adjust the inference parameters (e.g., temperature, max tokens, top_p, top_k) at each of these stages. We can also deactivate a stage if we want to. If we deactivate the orchestration stage, the user prompt is sent directly to the language model and the base prompt template of the orchestration stage is not used.
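Putting these pieces together, the sketch below overrides the post-processing template, tunes its inference parameters, and deactivates the orchestration stage through update_agent. All identifiers are placeholders, and the promptOverrideConfiguration shape reflects the boto3 bedrock-agent API as documented at the time of writing, so verify it against the current API reference before use.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Shortened stand-in for the overridden template sketched earlier.
TEMPLATE = "Human: Translate this into Bengali: $latest_response$\n\nAssistant:"

bedrock_agent.update_agent(
    agentId="AGENT_ID",                  # placeholder
    agentName="my-agent",                # placeholder
    agentResourceRoleArn="arn:aws:iam::123456789012:role/my-agent-role",
    foundationModel="anthropic.claude-v2",
    promptOverrideConfiguration={
        "promptConfigurations": [
            {
                # Override the post-processing template and tune inference.
                "promptType": "POST_PROCESSING",
                "promptState": "ENABLED",        # disabled by default
                "promptCreationMode": "OVERRIDDEN",
                "basePromptTemplate": TEMPLATE,
                "parserMode": "DEFAULT",
                "inferenceConfiguration": {
                    "temperature": 0.0,
                    "topP": 1.0,
                    "topK": 250,
                    "maximumLength": 2048,
                    "stopSequences": ["\n\nHuman:"],
                },
            },
            {
                # Deactivating a stage: with orchestration disabled, the
                # user prompt goes directly to the language model.
                "promptType": "ORCHESTRATION",
                "promptState": "DISABLED",
                "promptCreationMode": "DEFAULT",
                "parserMode": "DEFAULT",
            },
        ]
    },
)
```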

If we want to visualize these stages in a diagram, it would look like the one below.

The recording below further explains and shows how we can modify the base prompt templates at each stage, the inference parameters, and the Lambda output parser.
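For context, the Lambda output parser mentioned above is a plain handler that receives the raw model output for a stage and returns a parsed structure. The sketch below follows the event and response field names described in the Bedrock parser Lambda documentation at the time of writing; treat those exact names as assumptions to verify.

```python
def lambda_handler(event, context):
    """Minimal custom output parser sketch for a Bedrock agent stage.

    The field names (invokeModelRawResponse, promptType, and the
    *ParsedResponse keys) follow the parser Lambda contract described
    in the Bedrock docs; verify them before relying on this sketch.
    """
    raw_response = event["invokeModelRawResponse"]  # raw model output text
    prompt_type = event["promptType"]               # stage being parsed

    if prompt_type == "POST_PROCESSING":
        return {
            "messageVersion": "1.0",
            "promptType": "POST_PROCESSING",
            "postProcessingParsedResponse": {
                # Pass the text through unchanged; a real parser might
                # strip tags or decode structured output here.
                "responseText": raw_response,
            },
        }

    if prompt_type == "PRE_PROCESSING":
        return {
            "messageVersion": "1.0",
            "promptType": "PRE_PROCESSING",
            "preProcessingParsedResponse": {
                "isValidInput": True,
                "rationale": "No malicious or jailbreak content detected.",
            },
        }

    # Other stages would return their corresponding *ParsedResponse blocks.
    raise ValueError(f"Unhandled promptType: {prompt_type}")
```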
