Created by the author using free images and Figma.

AI-Powered Customer Support: The Ultimate Multi-Agent System

Using Azure OpenAI and Rust to Build Intelligent and Scalable AI Systems

David Minkovski
13 min read · Jul 19, 2024


Motivation

For the past few months, I’ve had the pleasure of taking the front seat to some really fascinating and exciting AI projects, thanks to my amazing customers and colleagues at Microsoft.
During these sessions, I noticed a common challenge:

How do I use ChatGPT while keeping control?

So, I thought to myself, why not write a little article to help?

Let’s break it down and create a super simple “multi-agent” system.
This system will handle everything from the initial customer support request to evaluating the intent, escalating issues, and creating possible action items.
Let’s get to it!

What are we building?

A command-line interface that first asks us to set the scene (the support context) and then to enter the customer query. After that, all the agents we define get to work: reacting to the query, evaluating its sentiment, and proposing action items.

Support case in a restaurant
Support case in a hotel

Building Our Support Case Multi-Agent Demo

To create our support case multi-agent demo, we’ll use a simple sequential approach where each agent performs a specific task using Azure OpenAI.

Okay, wait: what is an agent? And why multi-agent?

An agent, in this context, is essentially a piece of your program that runs some business logic and interacts with ChatGPT.

This method makes it easier to scale as more prompts and AI interactions become necessary, and it also keeps us in control of the flow at each step. Yes, we could let the AI decide which agents or functions to call, but that is a bit more complicated, so we'll save it for another article and demo.
For now, let’s stick to the basics.

But why not just one prompt that handles it all?

Glad you asked! Because you would have no easy way of controlling the steps. And can you imagine asking someone to do three things at once? How is that going to work? Good luck keeping track of each step, validating the output, and controlling the flow…

The Architecture (Simplified)

Multi-Agent Architecture with Azure OpenAI

The architecture consists of the following components:

  • Agent Orchestrator (Top): This agent won’t be talking to the AI. Instead, it will initialize the other agents and ensure they complete their tasks one after the other.
  • Agent Customer Query (1): This agent will take the customer’s input and generate an immediate response using ChatGPT. After all, first things first: the customer needs to feel heard!
  • Agent Sentiment (2): This agent will analyze the sentiment of the customer’s input and classify it as “positive” or “negative.”
  • Agent Escalation (3): This agent will determine what action items make sense or escalate the issue to a human if needed.

By following this approach, we ensure that each part of the system works smoothly and efficiently, making our support case both effective and easy to manage. And yes…it’s extensible and you are in control!

Why did I choose Rust?

Well, let’s take a short trip down memory lane.
Java was my first language, and I’ve spent countless hours with it; it still feels like the dependable toolkit I reach for on solid, reliable, enterprise-oriented, but definitely not always the most exciting, projects.
I’ve also had my fair share of adventures with TypeScript, especially as a React developer. Then I dabbled in Go (Golang), and let me tell you, Go’s simplicity and speed were truly a breath of fresh air: super clean, efficient, and really to the point. Yes, I also worked a bit with Python, which I am still trying to be friends with…

And then came Rust. Have you ever played any Zelda or Tomb Raider?
This truly is like a puzzle in a room that makes you pause and think,
“Okay, this is really different. No idea.”
What I truly appreciate about Rust is that it forces you to think more deeply about memory safety, concurrency, and performance.
All the things I was not really keen on paying attention to.
It’s like a mental workout that either makes you feel like Hercules or like a nobody that never understood the language in the first place.

Why would you make your life as a developer more complicated?

I believe it’s important to keep one’s brain engaged. Especially nowadays with AI taking a lot of boring tasks away, we tend to get “lazy”.
Don’t get me wrong: I think the best engineers are the lazy ones, in the sense that they try to minimize input while maximizing output.
Rust challenges your usual programming habits and teaches you that.
It’s like having a principal engineer who insists on doing things the “right way”, so you don’t have to worry about your code again.
So yes, I believe Rust is making you a more thoughtful programmer.

Be sure to check it out here: Rust Programming Language (rust-lang.org)

And now back to the project!

How to Track Progress and Transfer Data?

Keeping track of progress and transferring data between agents is crucial for our multi-agent system.

Why is that necessary though?

Well, each agent runs as its own instance, so it only has access to the information you give it. If we were not able to share any information between the agents…well, then that team would utterly fail!

So to achieve this, we’ll use an object called SupportCase. This object serves as the central repository for all the information our agents need to operate efficiently. Think of it as the ultimate shared notepad that everyone can read from and write to.

Why not use a Database?

For this proof of concept (PoC), I decided to avoid the overhead of setting up a database. A shared object is simpler and perfectly suited for our needs. It allows us to save and transfer all necessary data without the complexity of database management. But yes, you can definitely extend this system by adding a database once it gets more complicated.

The Support Case Struct

Here’s a peek at the SupportCase struct that makes all of this information sharing possible:

#[derive(Debug, Clone)]
pub struct SupportCase {
    pub case_id: Uuid,
    pub support_context: String,
    pub customer_query: String,
    pub support_response: Option<String>,
    pub sentiment: Option<String>,
    pub should_escalate: bool,
    pub escalated: bool,
    pub needs_upper_management_attention: bool,
    pub created_at: DateTime<Local>,
    pub updated_at: DateTime<Local>,
    pub trace: Vec<Message>,
    pub supported_actions: Vec<String>,
}

How it works

  1. Initialization: When a new support case is created, a SupportCase object is initialized with all relevant details.
  2. Progress Tracking: Each agent updates the SupportCase with new information as it processes the case. For example, the Customer Query Agent will add its response and update the trace of messages.
  3. Data Transfer: As the SupportCase object is shared between agents, each one can read the latest state and make informed decisions based on previous actions and data.

This approach ensures that all agents are on the same page, literally and figuratively, and can work together smoothly to provide top-notch customer support.
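
The snippet above leaves out the constructor and the updated() helper that the agents call later. Here is a minimal sketch of what they could look like, assuming the uuid and chrono crates implied by the field types; the repository’s actual implementation may differ slightly.

use chrono::Local;
use uuid::Uuid;

impl SupportCase {
    // Hypothetical constructor matching how the CoordinatorAgent creates a case:
    // SupportCase::new(context, query)
    pub fn new(support_context: String, customer_query: String) -> Self {
        let now = Local::now();
        Self {
            case_id: Uuid::new_v4(),
            support_context,
            customer_query,
            support_response: None,
            sentiment: None,
            should_escalate: false,
            escalated: false,
            needs_upper_management_attention: false,
            created_at: now,
            updated_at: now,
            trace: vec![],
            supported_actions: vec![],
        }
    }

    // Hypothetical helper the agents call after writing new data into the case.
    pub fn updated(&mut self) {
        self.updated_at = Local::now();
    }
}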

Meet the Orchestrator: The Brain of Our System

Let’s introduce you to the brains behind our multi-agent operation: the CoordinatorAgent.
This orchestrator manages the agents, the SupportCase object, and ensures everything runs smoothly. Think of it as the conductor of an orchestra, making sure each instrument (agent) plays its part to create a symphony.

What Does the CoordinatorAgent Do?

  1. Initialization: It initializes the SupportCase with the given context and query.
  2. Agent Management: It manages a collection of agents, each tasked with a specific role.
  3. Execution: It coordinates the execution of each agent, ensuring they operate in sequence and contribute to resolving the support case.

pub struct CoordinatorAgent {
    support_case: SupportCase,
    agents: Vec<Box<dyn AgentFunctionTrait>>,
}

impl CoordinatorAgent {
    pub fn new(context: String, query: String) -> Self {
        let support_case = SupportCase::new(context, query);

        Self {
            support_case,
            agents: vec![],
        }
    }

    fn add_agent(&mut self, agent: Box<dyn AgentFunctionTrait>) {
        self.agents.push(agent);
    }

    fn create_agents(&mut self) {
        self.add_agent(Box::new(AgentCustomerQuery::new()));
        self.add_agent(Box::new(AgentSentiment::new()));
        self.add_agent(Box::new(AgentEscalation::new()));
    }

    pub async fn handle_support_request(&mut self) {
        self.create_agents();

        for agent in &mut self.agents {
            agent
                .execute(&mut self.support_case)
                .await
                .expect("Should have executed agent");
        }
    }
}

How It All Comes Together

  1. Initialization: The CoordinatorAgent is created with the necessary context and customer query.
  2. Agent Creation: The create_agents method sets up all the agents needed for processing the support request.
  3. Execution: The handle_support_request method runs each agent sequentially, ensuring that they contribute their part to resolving the case.
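
For orientation, here is a minimal sketch of an entry point wiring this together. The read_line helper and the exact prompt texts are hypothetical, and a Tokio runtime is assumed; the repository’s actual main may look different.

use std::io::{self, Write};

// Minimal entry-point sketch: ask for the scene and the customer query on the
// command line, then hand both over to the CoordinatorAgent.
#[tokio::main]
async fn main() {
    let context = read_line("Describe the support context (e.g. a 5-star hotel): ");
    let query = read_line("Enter the customer query: ");

    let mut coordinator = CoordinatorAgent::new(context, query);
    coordinator.handle_support_request().await;
}

// Hypothetical helper: print a prompt and read one trimmed line from stdin.
fn read_line(prompt: &str) -> String {
    print!("{prompt}");
    io::stdout().flush().expect("Should flush stdout");
    let mut buffer = String::new();
    io::stdin().read_line(&mut buffer).expect("Should read from stdin");
    buffer.trim().to_string()
}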

Common Agent Properties and Traits in Rust

To get started with our multi-agent system, let’s define the core components that each agent will need:
Common Agent Properties and Common Agent Traits.

Common Agent Properties

We’ll start by defining the properties that every agent will share.
This includes the agent’s state, role, objective, and memory.

#[derive(Debug, PartialEq)]
pub enum AgentState {
    Waiting,
    Working,
    Error,
    Finished,
}

#[derive(Debug)]
pub struct CommonAgent {
    pub role: String,
    pub objective: String,
    pub state: AgentState,
    pub memory: Vec<Message>,
}

AgentState: This enum represents the different stages an agent can be in. Each state can trigger a specific reaction within the agent.

  • Waiting: The agent is idle, waiting to be activated.
  • Working: The agent is currently processing a task.
  • Error: The agent has encountered an issue.
  • Finished: The agent has completed its task.

CommonAgent: This struct holds the core properties of an agent.

  • role: A string that defines the agent's role. Optional, but it helps in understanding who is doing what.
  • objective: A string that describes the agent's objective. Technically optional, but it helps in understanding what the agent is meant to accomplish.
  • state: The current state of the agent, defined by the AgentState enum.
  • memory: A vector of Message structs to keep track of interactions and history for multiple iterations and interactions with the AI.

Implementing Common Agent Traits

Now that we’ve defined the common properties for our agents, we’ll move on to defining the common traits that each agent will implement. This will ensure that all agents share some basic functionality while allowing for specialized behavior in specific agents.

Common Agent Traits

We’ll define a CommonTrait that includes methods every agent should have, such as initializing and handling state transitions, plus an AgentFunctionTrait for executing tasks.

pub trait CommonTrait {
    fn new(role: String, objective: String) -> Self;
    fn update_state(&mut self, new_state: AgentState); // Optional
    fn get_state(&self) -> &AgentState;
    fn get_role(&self) -> &String;
    fn get_objective(&self) -> &String;
    fn get_memory(&self) -> &Vec<Message>;
}

#[async_trait]
pub trait AgentFunctionTrait: Debug {
    // Execute agent logic
    async fn execute(&mut self, support_case: &mut SupportCase) -> Result<(), Box<dyn Error>>;

    // Coordinator can get common information from agent
    fn get_common_from_agent(&self) -> &CommonAgent;
}
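
The implementation of CommonTrait for CommonAgent is not shown here, so here is a minimal sketch of it, consistent with how the agents use it later (CommonAgent::new(…) and update_state(…)); the repository’s version may differ in the details.

impl CommonTrait for CommonAgent {
    fn new(role: String, objective: String) -> Self {
        Self {
            role,
            objective,
            state: AgentState::Waiting, // every agent starts out idle
            memory: vec![],
        }
    }

    fn update_state(&mut self, new_state: AgentState) {
        self.state = new_state;
    }

    fn get_state(&self) -> &AgentState {
        &self.state
    }

    fn get_role(&self) -> &String {
        &self.role
    }

    fn get_objective(&self) -> &String {
        &self.objective
    }

    fn get_memory(&self) -> &Vec<Message> {
        &self.memory
    }
}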

Meet Your First Agent: The Customer Query Agent

Let me introduce you to our first creation: the Customer Query Agent.

How Does It Work?

Our Customer Query Agent is like having a super friendly concierge at the reception desk in your hotel who never needs a coffee break.
When a customer throws a question its way, like “My steak is still raw.” or “My hotel room is rather dirty…”, the Customer Query Agent jumps into action.
Here’s how it does its magic:

  1. Initializing: The agent starts in a state of calm, waiting for a query to handle.
  2. Processing: When a query comes in, the agent wakes up and gets to work, crafting a response using our beloved ChatGPT.
  3. Error Handling: If something goes wrong, the agent logs the error.
  4. Completion: Once the task is done, the agent is finished and probably goes on drinking virtual coffee.

#[derive(Debug)]
pub struct AgentCustomerQuery {
    common: CommonAgent,
}

impl AgentCustomerQuery {
    pub fn new() -> Self {
        let common = CommonAgent::new(
            "Customer Support".to_string(),
            PROMPT.to_string(),
        );
        Self { common }
    }

    async fn handle_initial_query(&mut self, support_case: &mut SupportCase) {
        self.common.update_state(AgentState::Working);
        let query: &str = &support_case.customer_query;
        let msg: Message =
            prepare_message(&self.common.objective, &support_case.support_context, query);
        support_case.trace.push(msg.clone());
        let result: Result<String, Box<dyn Error + Send>> = ai_request(msg).await;
        support_case.updated();
        if let Ok(response) = result {
            support_case.support_response = Some(response.clone());
            support_case.trace.push(Message {
                role: "assistant".to_string(),
                content: response,
            });
            self.common.update_state(AgentState::Finished);
        } else {
            support_case.support_response = Some(result.unwrap_err().to_string());
            self.common.update_state(AgentState::Error);
        };
    }
}

#[async_trait]
impl AgentFunctionTrait for AgentCustomerQuery {
    async fn execute(&mut self, support_case: &mut SupportCase) -> Result<(), Box<dyn Error>> {
        while self.common.state != AgentState::Finished {
            match self.common.state {
                AgentState::Waiting => {
                    CLIPrint::Info.out(
                        &self.common.role,
                        format!("Handling initial query: {}", &support_case.customer_query)
                            .as_str(),
                    );
                    self.handle_initial_query(support_case).await;
                }
                AgentState::Error => {
                    CLIPrint::Error.out(
                        &self.common.role,
                        format!(
                            "There was an error: {:?}",
                            Some(&support_case.support_response)
                        )
                        .as_str(),
                    );
                    self.common.state = AgentState::Finished;
                }
                _ => {
                    self.common.state = AgentState::Finished;
                }
            }
        }

        CLIPrint::Default.out(
            &self.common.role,
            support_case.support_response.as_ref().unwrap().as_str(),
        );

        Ok(())
    }

    fn get_common_from_agent(&self) -> &CommonAgent {
        &self.common
    }
}
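
The agent above leans on a Message struct and a prepare_message helper that aren't shown here. Here is a minimal sketch of what they might look like; the exact content layout inside prepare_message is a guess, and the repository’s helpers may differ.

// Sketch of the chat message type the agents pass around and push into the trace.
#[derive(Debug, Clone)]
pub struct Message {
    pub role: String,
    pub content: String,
}

// Hypothetical helper: fold the agent's objective (its prompt), the support
// context, and the customer query into a single message for the AI request.
pub fn prepare_message(objective: &str, context: &str, query: &str) -> Message {
    Message {
        role: "user".to_string(),
        content: format!("{objective}\nContext: {context}\nCustomer query: {query}"),
    }
}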

The Prompts: The Secret Sauce of AI Magic

When it comes to creating an AI that actually works, the magic lies in the prompts. Think of prompts as the instructions you give to a genie in a bottle. You have to be clear, concise, and avoid leaving any room for misunderstanding or misinterpretation.
In ChatGPT, the better your prompts, the better your results.

Why are prompts so important?

Well, imagine telling a cook to make “food” versus asking for a “spaghetti aglio e olio with three cloves of garlic and four spoons of oil.”
The latter is specific and leaves much less room for any misinterpretation.
The same goes for AI prompts — they need to be specific, clear, and tailored to what you are trying to do.

Best Practices for Writing Prompts:

  1. Be Specific: Clear instructions lead to good answers. Make your prompts as specific as possible, but leave some room for creativity.
  2. Avoid Questions: Ensure the AI sticks to the task without deviating or asking for more input. Unless you want it to!
  3. Limit Scope: Keep the task manageable and straightforward. Don’t overwhelm the AI with too many instructions at once.
  4. Iterate and Test, Test, Test: Prompts will not be perfect at first.
    Test them, see how ChatGPT responds, and improve.

Here are a few examples of the prompts I use for these agents:

Query Agent Prompt

Our Query Agent is like your perfect receptionist: efficient, polite, and straight to the point. However, it asks no questions, because that's not its job.

"You are a receptionist or assistant. 
You handle incoming customer queries and provide immediate responses before continuing to work with your customer support team.
IMPORTANT: You do not ask any follow-up questions. No questions at all."

Sentiment Agent Prompt

Next, meet the Sentiment Agent, our in-house psychologist if you will, who's always there to figure out how the customer really feels. This prompt makes sure our agent keeps things simple and 100% binary: just the sentiment, nothing more!

"You are a psychologist helping out Customer Support. 
You handle incoming customer queries, analyze them, and categorize their sentiment as either 'Negative' or 'Positive'.
IMPORTANT: You do not ask any follow-up questions. No questions at all.
VERY IMPORTANT: Your answer is always either 'Positive' or 'Negative'. You provide absolutely NO additional info."
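
The article doesn't show AgentSentiment's internals, but the post-processing is easy to picture. Here is a hypothetical sketch that normalizes the model's one-word reply and flags negative cases for escalation; whether a negative sentiment should always escalate is my assumption, and the repository's logic may differ.

// Hypothetical post-processing inside AgentSentiment: normalize the reply to
// "Positive"/"Negative" and flag negative cases for the Escalation Agent.
fn apply_sentiment(support_case: &mut SupportCase, ai_reply: &str) {
    let normalized = if ai_reply.trim().to_lowercase().contains("negative") {
        "Negative"
    } else {
        "Positive"
    };
    support_case.sentiment = Some(normalized.to_string());
    support_case.should_escalate = normalized == "Negative";
    support_case.updated();
}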

Escalation Agent Prompt

Finally, we have the Escalation Agent, the hero who jumps in when things get negative and complicated. This prompt tells the agent to provide actionable steps without overcomplicating things, ensuring it outputs clean, usable JSON data.

const ACTIONS_PROMPT: &str = r#"
You are in charge of customer escalations within Customer Support.
You handle incoming customer queries and sentiments and provide resolving actions.
You will respond with a JSON format of an array of ACTIONS to call in different customer support scenarios based on context.
IMPORTANT: You do not ask any follow-up questions. No questions at all.
SUPER IMPORTANT: Remove any '```json' or weird formats. It needs to be a VALID JSON array only.
EXAMPLE 1:
Input: 5 Stars Hotel
Output: ["Change room", "Provide discount for bar and snacks", "Call mechanic", "Call room service"]
EXAMPLE 2:
Input: Small, medium-sized company
Output: ["Setup meeting with HR", "Setup meeting with Sales", "Refund item", "Offer discount"]
"#;

Why This Matters

Good prompts are the backbone of effective AI interactions. They help keep the AI focused, efficient, and mostly accurate.
So, next time you’re crafting a prompt, remember:

Be specific, clear, and keep it as simple as possible.

Leveraging Azure OpenAI: Easier Than You Think

One of the best parts about building this AI-powered customer support system is how effortlessly it integrates with Azure OpenAI.
With just a few tweaks and some basic setup, you can have your AI agents up and running in no time. Here’s why Azure OpenAI is a fantastic choice and how you can get started.

Why Azure OpenAI?

  1. Integration: Azure OpenAI is really easy to plug into your existing infrastructure. You just need the key and endpoint and you are good to go.
  2. Scalability: Whether you’re handling a handful of queries or thousands and millions, Azure scales quickly with your needs.
  3. Reliability: Come on, it's Azure. You get high availability and performance out of the box.
  4. Security: No worries, Azure provides top-notch security features to help keep your data safe.

Want to learn more?

Fundamentals of Azure OpenAI Service — Training | Microsoft Learn

Connecting to Azure OpenAI

Setting up Azure OpenAI is straightforward. All the necessary REST calls are already handled in the code of the repository.
All you need to do is configure your environment variables in a .env file. Just copy the .env.sample and you are good to go:

  1. First, clone the repository to your local machine.
git clone https://github.com/dminkovski/customer-support-assistant-rust.git
cd customer-support-assistant-rust

  2. Create the .env file in the root of your project and add the necessary environment variables (a sketch of how they end up being used in the AI request follows after these steps). Here's an example:

AZURE_OPEN_AI_ENDPOINT=https://XXXXXXXXX.openai.azure.com/
AZURE_OPEN_AI_KEY=XXXXXXXXXXXXX
AZURE_OPEN_AI_MODEL_DEPLOYMENT_NAME=gpt-4o
AZURE_OPEN_AI_API_VERSION=2024-02-15-preview

  3. Make sure you have all the required dependencies installed:

cargo build

  4. With everything set up, you can run your application and watch your AI agents in action.

cargo run
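
To give you a feel for what those REST calls look like, here is a minimal sketch of an ai_request-style helper built from the environment variables above. It assumes the reqwest crate (with its json feature), serde_json, and the Azure OpenAI chat-completions endpoint; the repository's actual implementation may differ.

use std::env;
use std::error::Error;

use serde_json::{json, Value};

// Sketch of a chat-completions call against Azure OpenAI, built from the
// environment variables defined in the .env file (endpoint ends with a slash).
pub async fn ai_request(msg: Message) -> Result<String, Box<dyn Error + Send>> {
    let endpoint = env::var("AZURE_OPEN_AI_ENDPOINT").expect("AZURE_OPEN_AI_ENDPOINT not set");
    let key = env::var("AZURE_OPEN_AI_KEY").expect("AZURE_OPEN_AI_KEY not set");
    let deployment =
        env::var("AZURE_OPEN_AI_MODEL_DEPLOYMENT_NAME").expect("deployment name not set");
    let api_version = env::var("AZURE_OPEN_AI_API_VERSION").expect("API version not set");

    let url = format!(
        "{endpoint}openai/deployments/{deployment}/chat/completions?api-version={api_version}"
    );
    let body = json!({
        "messages": [{ "role": msg.role, "content": msg.content }]
    });

    let response: Value = reqwest::Client::new()
        .post(url)
        .header("api-key", key)
        .json(&body)
        .send()
        .await
        .map_err(|e| Box::new(e) as Box<dyn Error + Send>)?
        .json()
        .await
        .map_err(|e| Box::new(e) as Box<dyn Error + Send>)?;

    // Pull the assistant's reply out of the first choice.
    let content = response["choices"][0]["message"]["content"]
        .as_str()
        .unwrap_or_default()
        .to_string();
    Ok(content)
}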

Your Turn — Try it out!

Building an AI-powered customer support system might sound like a complicated task, but with the right tools and a structured approach, it’s not just achievable — it’s truly fun and exciting!
By leveraging Azure OpenAI and Rust, I’ve created a simple but scalable first iteration of a solution that turns customer support into a potentially smooth and mostly automated experience.

From writing good prompts to setting up your environment with just a few tweaks, I think we covered the essentials to get you started.
The agents, from handling initial queries to analyzing sentiment and managing escalations, work together, and the orchestrator can easily be adapted to include more agents.

So dive in, explore the code, and start building your own intelligent support system today. I am super excited to see what kind of agents you come up with!

Here are some ideas for you to play with:

  1. Use the agents to dynamically generate a backend service that implements the proposed action items each time
  2. Add an emailer to send an email once human intervention is needed
  3. Connect the escalation agent to a ticketing system
  4. Add another function to the sentiment analyzer to react additionally based on the sentiment
  5. Create an agent that asks for additional information in the form of files to verify the customer’s identity

Thank you for joining me on this adventure and reading this article.
Happy coding!
