
Building Multi-Agent AI App with Google’s A2A (Agent2Agent) Protocol, ADK, and MCP


Introduction

In my previous article, we explored how Google's open-source Agent Development Kit (ADK) can act as a Model Context Protocol (MCP) client with minimal code changes — enabling seamless integration with external tools and APIs in a model-friendly way.

At Google Cloud Next ’25, alongside ADK, Google also announced the Agent2Agent (A2A) protocol — an open standard that allows AI agents to communicate, collaborate, and coordinate actions across organizational boundaries. A2A enables secure and structured agent communication on top of diverse enterprise systems, regardless of the underlying agent framework.

Quick Recap

1. MCP (Model Context Protocol)

In my previous article, we explored the core concepts behind MCP, how it compares with traditional API integration methods, and its key benefits, along with a full working code demonstration.

2. ADK (Agent Development Kit)

In an earlier article, we also covered Google's open-source ADK toolkit, including the different agent categories and available tool types, along with a complete hands-on walkthrough showing ADK, MCP (as client), and the Gemini LLM in action.

In this article, we will focus on understanding the Agent2Agent (A2A) protocol's core concepts and then deep dive into the code implementation.

What is the A2A protocol?

The A2A (Agent-to-Agent) protocol is an open protocol developed by Google for communication between agents across organizational or technological boundaries.

https://github.com/google/A2A

Using A2A, agents accomplish tasks for end-users without sharing memory, thoughts, or tools. Instead, the agents exchange context, status, instructions, and data in their native modalities.

  • Simple: Reuse existing standards
  • Enterprise Ready: Auth, Security, Privacy, Tracing, Monitoring
  • Async First: (Very) Long running-tasks and human-in-the-loop
  • Modality Agnostic: text, audio/video, forms, iframe, etc.
  • Opaque Execution: Agents do not have to share thoughts, plans, or tools.

A2A vs MCP — Key Differences

  • MCP makes enterprise APIs tool-agnostic and LLM-friendly
  • A2A allows distributed agents (built with ADK, LangChain, CrewAI, etc.) to talk to each other seamlessly.

In short, A2A is an API for agents, while MCP is an API for tools.

A2A Protocol — Core Concepts

The A2A (Agent-to-Agent) protocol is built around several fundamental concepts that enable seamless agent interoperability. In this article, we will use a multi-agent travel planner application to illustrate these core concepts.

1. Task-based Communication

Every interaction between agents is treated as a task — a clear unit of work with a defined start and end. This makes communication structured and trackable. For example:

Task-based Communication (Request)

{
  "jsonrpc": "2.0",
  "method": "tasks/send",
  "params": {
    "taskId": "20250615123456",
    "message": {
      "role": "user",
      "parts": [
        {
          "text": "Find flights from New York to Miami on 2025-06-15"
        }
      ]
    }
  },
  "id": "12345"
}

Task-based Communication (Response)

{
  "jsonrpc": "2.0",
  "id": "12345",
  "result": {
    "taskId": "20250615123456",
    "state": "completed",
    "messages": [
      {
        "role": "user",
        "parts": [
          {
            "text": "Find flights from New York to Miami on 2025-06-15"
          }
        ]
      },
      {
        "role": "agent",
        "parts": [
          {
            "text": "I found the following flights from New York to Miami on June 15, 2025:\n\n1. Delta Airlines DL1234: Departs JFK 08:00, Arrives MIA 11:00, $320\n2. American Airlines AA5678: Departs LGA 10:30, Arrives MIA 13:30, $290\n3. JetBlue B9101: Departs JFK 14:45, Arrives MIA 17:45, $275"
          }
        ]
      }
    ],
    "artifacts": []
  }
}

2. Agent Discovery

Agents can automatically discover what other agents do by reading their agent.json file from a standard location (/.well-known/agent.json). No manual setup is needed. Below is an example Agent Card:

  • Purpose: Publicly accessible discovery endpoint
  • Location: Standardized web-accessible path as per A2A protocol
  • Usage: External discovery by other agents/clients
  • Role: Enables automatic agent discovery
{
  "name": "Travel Itinerary Planner",
  "displayName": "Travel Itinerary Planner",
  "description": "An agent that coordinates flight and hotel information to create comprehensive travel itineraries",
  "version": "1.0.0",
  "contact": "code.aicloudlab@gmail.com",
  "endpointUrl": "http://localhost:8005",
  "authentication": {
    "type": "none"
  },
  "capabilities": [
    "streaming"
  ],
  "skills": [
    {
      "name": "createItinerary",
      "description": "Create a comprehensive travel itinerary including flights and accommodations",
      "inputs": [
        {
          "name": "origin",
          "type": "string",
          "description": "Origin city or airport code"
        },
        {
          "name": "destination",
          "type": "string",
          "description": "Destination city or area"
        },
        {
          "name": "departureDate",
          "type": "string",
          "description": "Departure date in YYYY-MM-DD format"
        },
        {
          "name": "returnDate",
          "type": "string",
          "description": "Return date in YYYY-MM-DD format (optional)"
        },
        {
          "name": "travelers",
          "type": "integer",
          "description": "Number of travelers"
        },
        {
          "name": "preferences",
          "type": "object",
          "description": "Additional preferences like budget, hotel amenities, etc."
        }
      ],
      "outputs": [
        {
          "name": "itinerary",
          "type": "object",
          "description": "Complete travel itinerary with flights, hotels, and schedule"
        }
      ]
    }
  ]
}
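To see discovery in action, any HTTP client can fetch this card from the well-known path. Below is a minimal sketch using httpx (installed in Step 1); the base URL assumes the itinerary planner is running locally on port 8005, as configured later in this article.

import asyncio
import httpx

async def discover_agent(base_url: str) -> dict:
    """Fetch an agent's card from the standardized /.well-known/agent.json path."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{base_url}/.well-known/agent.json")
        resp.raise_for_status()
        return resp.json()

# Example: discover the locally running itinerary planner and list its skills
card = asyncio.run(discover_agent("http://localhost:8005"))
print(card["name"], [skill["name"] for skill in card.get("skills", [])])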

3. Framework-agnostic Interoperability

A2A works across different agent frameworks — like ADK, CrewAI, LangChain — so agents built with different tools can still work together.

4. Multi-modal Messaging

A2A supports various content types through the Parts system, allowing agents to exchange text, structured data, and files within a unified message format.
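For example, a single message can carry several parts of different types. The sketch below (a Python dict mirroring the JSON examples in this article) combines a text part with a structured data part; the field values are illustrative.

# One A2A message mixing part types (illustrative sketch)
message = {
    "role": "user",
    "parts": [
        {"type": "text", "text": "Find hotels in Miami from 2025-06-15 to 2025-06-20"},
        {"type": "data", "data": {"guests": 2, "budgetPerNight": 200}},
        # A file part would instead reference uploaded content (e.g. a PDF of preferences)
    ],
}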

5. Standardized Message Structure

A2A uses a clean JSON-RPC style for sending and receiving messages, making implementation consistent and easy to parse.

6. Skills and Capabilities

Agents publish what they can do (“skills”) — along with the inputs they need and outputs they provide — so others know how to work with them. For example:

// Skill declaration in agent card
"skills": [
  {
    "name": "createItinerary",
    "description": "Creates a travel itinerary",
    "inputs": [
      {"name": "origin", "type": "string", "description": "Origin city or airport"},
      {"name": "destination", "type": "string", "description": "Destination city or airport"},
      {"name": "departureDate", "type": "string", "description": "Date of departure (YYYY-MM-DD)"},
      {"name": "returnDate", "type": "string", "description": "Date of return (YYYY-MM-DD)"}
    ]
  }
]

// Skill invocation in a message
{
  "role": "user",
  "parts": [
    {
      "text": "Create an itinerary for my trip from New York to Miami"
    },
    {
      "type": "data",
      "data": {
        "skill": "createItinerary",
        "parameters": {
          "origin": "New York",
          "destination": "Miami",
          "departureDate": "2025-06-15",
          "returnDate": "2025-06-20"
        }
      }
    }
  ]
}

7. Task Lifecycle

Each task goes through well-defined stages: submitted → working → completed (or failed/canceled), so we can always track the state a task is in.
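As an illustration, a client can poll the server for a task's current state. The sketch below reuses the response shape from the tasks/send example earlier; the tasks/get method comes from the A2A spec but is not otherwise used in this demo, so treat the details as assumptions.

import httpx

def get_task_state(agent_url: str, task_id: str) -> str:
    """Ask an A2A server which lifecycle stage a task is in (sketch)."""
    payload = {
        "jsonrpc": "2.0",
        "method": "tasks/get",
        "params": {"id": task_id},
        "id": "1",
    }
    resp = httpx.post(agent_url, json=payload)
    resp.raise_for_status()
    # Expected values: submitted, working, completed, failed, canceled
    return resp.json()["result"]["state"]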

8. Real-Time Updates (Streaming)

Long-running tasks can stream updates using Server-Sent Events (SSE), so agents can receive progress in real time.
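A rough sketch of consuming such a stream is shown below; it assumes the agent exposes the A2A tasks/sendSubscribe method and emits SSE "data:" lines. Our demo agents declare "streaming": False, so this is illustrative only.

import json
import httpx

def stream_task_updates(agent_url: str, payload: dict) -> None:
    """Subscribe to task updates over Server-Sent Events (illustrative sketch)."""
    with httpx.stream("POST", agent_url, json=payload, timeout=None) as resp:
        for line in resp.iter_lines():
            if line.startswith("data:"):
                event = json.loads(line[len("data:"):])
                print("task update:", event)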

9. Push Notifications

Agents can proactively notify others about task updates using webhooks, with support for secure communication (JWT, OAuth, etc.).
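Roughly, the requester registers a webhook URL that the remote agent will call with task updates. The method name and config shape below follow the reusable types in the A2A samples and should be treated as assumptions; the webhook URL is a placeholder.

# Register a webhook for task updates (names follow the A2A samples; illustrative)
push_config_request = {
    "jsonrpc": "2.0",
    "method": "tasks/pushNotification/set",
    "params": {
        "id": "20250615123456",  # task to watch
        "pushNotificationConfig": {
            "url": "https://example.com/a2a/webhook",   # placeholder endpoint
            "authentication": {"schemes": ["bearer"]},  # e.g. JWT bearer tokens
        },
    },
    "id": "2",
}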

10. Structured Forms

Agents can request or submit structured forms using DataPart, which makes handling inputs like JSON or configs easy.
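For instance, an agent that is missing details could return a small form as a DataPart for the caller to fill in and send back; the form schema below is made up purely for illustration.

# Agent requests missing details via a structured DataPart (illustrative sketch)
form_part = {
    "type": "data",
    "data": {
        "form": {
            "fields": [
                {"name": "checkInDate", "type": "date", "required": True},
                {"name": "guests", "type": "integer", "default": 2},
            ]
        }
    },
}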

Architecture

We will use the same Travel Planner architecture here, extended with the A2A + MCP protocols.

The demo below is for illustration purposes only, to help understand the A2A protocol between multiple agents.

(Architecture diagram — Image by Author)

The architecture above is a modular multi-agent AI system, where each agent is independently deployable and communicates using Google's A2A (Agent-to-Agent) protocol.

Core Components in above Architecture

  1. User Interface Layer — Sends HTTP requests to frontend server
  2. Agent Layer — Coordination between the Host Agent, Agent 1, and Agent 2
  3. Protocol Layer — A2A protocol communication between agents
  4. External Data Layer — Uses MCP to access external APIs

Agent Roles:

  1. Itinerary Planner Agent — Acts as the central orchestrator (Host Agent), coordinating interactions between the user and the specialized agents.
  2. Flight Search Agent — A dedicated agent responsible for fetching flight options based on user input
  3. Hotel Search Agent — A dedicated agent responsible for fetching hotel accommodations matching user preferences

MCP Implementation in this Project

Flight Search MCP Server

  1. Connection: Flight Search (Agent 1) connects to MCP Flight Server
  2. Functionality: Connects to flight booking APIs and databases

Hotel Search MCP Server

  1. Connection: Hotel Search (Agent 2) connects to MCP Hotel Server
  2. Functionality: Connects to hotel reservation systems and aggregators

Agent Communication Flow

  1. User submits a travel query via Streamlit UI
  2. Travel Planner parses the query to extract key information
  3. Travel Planner requests flight information from Flight Search Agent
  4. Flight Search Agent returns available flights by invoking MCP Server
  5. Travel Planner extracts destination details
  6. Travel Planner requests hotel information from Hotel Search Agent
  7. Hotel Search Agent returns accommodation options
  8. Travel Planner synthesizes all data into a comprehensive itinerary
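Before walking through the code, here is a minimal standalone sketch of steps 3-7: the host simply posts tasks/send requests to each agent's root endpoint. The payload shape mirrors the itinerary planner's A2A client shown later in this article; the query strings and task IDs are illustrative.

import asyncio
import httpx

FLIGHT_AGENT_URL = "http://localhost:8000"  # Flight Search Agent (A2A)
HOTEL_AGENT_URL = "http://localhost:8003"   # Hotel Search Agent (A2A)

async def send_a2a_task(agent_url: str, text: str, task_id: str) -> dict:
    """Send a tasks/send request to an A2A agent and return the JSON-RPC response."""
    payload = {
        "jsonrpc": "2.0",
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "taskId": task_id,
            "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
        },
        "id": task_id,
    }
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(agent_url, json=payload)
        resp.raise_for_status()
        return resp.json()

async def main():
    flights = await send_a2a_task(
        FLIGHT_AGENT_URL, "Find flights from New York to Miami on 2025-06-15", "flight-task-1"
    )
    hotels = await send_a2a_task(
        HOTEL_AGENT_URL, "Find hotels in Miami from 2025-06-15 to 2025-06-20 for 2 adults", "hotel-task-1"
    )
    print(flights, hotels)

asyncio.run(main())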

Implementation

Let us dive into building this multi-agent system with ADK + MCP + Gemini AI, breaking it down into key implementation steps.

Pre-Requisites

  1. Python 3.11+ installed
  2. Google Gemini Generative AI access via API key
  3. A valid SerpAPI key
  4. A valid OpenAI API key

Project Folder Structure

├── common
│   ├── __init__.py
│   ├── client
│   │   ├── __init__.py
│   │   ├── card_resolver.py
│   │   └── client.py
│   ├── server
│   │   ├── __init__.py
│   │   ├── server.py
│   │   ├── task_manager.py
│   │   └── utils.py
│   ├── types.py
│   └── utils
│       ├── in_memory_cache.py
│       └── push_notification_auth.py
├── flight_search_app
│   ├── a2a_agent_card.json
│   ├── agent.py
│   ├── main.py
│   ├── static
│   │   └── .well-known
│   │       └── agent.json
│   └── streamlit_ui.py
├── hotel_search_app
│   ├── README.md
│   ├── a2a_agent_card.json
│   ├── langchain_agent.py
│   ├── langchain_server.py
│   ├── langchain_streamlit.py
│   ├── static
│   │   └── .well-known
│   │       └── agent.json
│   └── streamlit_ui.py
└── itinerary_planner
    ├── __init__.py
    ├── a2a
    │   ├── __init__.py
    │   └── a2a_client.py
    ├── a2a_agent_card.json
    ├── event_log.py
    ├── itinerary_agent.py
    ├── itinerary_server.py
    ├── run_all.py
    ├── static
    │   └── .well-known
    │       └── agent.json
    └── streamlit_ui.py

Step 1: Set up a virtual environment

Install the dependencies:

#Setup virtual env
python -m venv .venv

#Activate venv
source .venv/bin/activate

#Install dependencies
pip install fastapi uvicorn streamlit httpx python-dotenv pydantic
pip install google-generativeai google-adk langchain langchain-openai
pip install langchain-mcp-adapters  # provides MultiServerMCPClient used by the hotel agent

Step 2: Install the MCP server packages

MCP hotel server — https://pypi.org/project/mcp-hotel-search/

MCP flight server — https://pypi.org/project/mcp-flight-search/

#Install mcp hotel search
pip install mcp-hotel-search

#Install mcp flight search
pip install mcp-flight-search

Step 3: Set environment variables for Gemini, OpenAI, and SerpAPI

Set the environment variables from the pre-requisites above:

GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
SERP_API_KEY=your_serp_api_key
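If you keep these in a .env file, each app can load them at startup with python-dotenv (installed in Step 1). A minimal sketch:

import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
SERP_API_KEY = os.getenv("SERP_API_KEY")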

Step 4: Set up the Flight Search Agent (Agent 1) using ADK as an MCP client with Gemini 2.0 Flash

The setup below is the same as in the previous article using ADK, with the addition of the A2A protocol and reusable modules from https://github.com/google/A2A/tree/main/samples/python/common

├── common/                          # Shared A2A protocol components
│   ├── __init__.py
│   ├── client/                      # Client implementations
│   │   ├── __init__.py
│   │   └── client.py                # Base A2A client
│   ├── server/                      # Server implementations
│   │   ├── __init__.py
│   │   ├── server.py                # A2A server implementation
│   │   └── task_manager.py          # Task management utilities
│   └── types.py                     # Shared type definitions for A2A
├── flight_search_app/               # Flight Search Agent (Agent 1)
│   ├── __init__.py
│   ├── a2a_agent_card.json          # Agent capabilities declaration
│   ├── agent.py                     # ADK agent implementation
│   ├── main.py                      # ADK server entry point (Gemini LLM)
│   └── static/                      # Static files
│       └── .well-known/             # Agent discovery directory
│           └── agent.json           # Standardized agent discovery file

4.1 ADK agent implementation as MCP Client to fetch tools from MCP Server

from google.adk.agents.llm_agent import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

..
..
# Fetch tools from MCP Server
server_params = StdioServerParameters(
    command="mcp-flight-search",
    args=["--connection_type", "stdio"],
    env={"SERP_API_KEY": serp_api_key},
)

tools, exit_stack = await MCPToolset.from_server(
    connection_params=server_params
)
..
..

4.2 ADK server entry point definition using common A2A server components and types, plus Google ADK runners, sessions, and Agent

from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.agents import Agent
from .agent import get_agent_async

# Import common A2A server components and types
from common.server.server import A2AServer
from common.server.task_manager import InMemoryTaskManager
from common.types import (
    AgentCard,
    SendTaskRequest,
    SendTaskResponse,
    Task,
    TaskStatus,
    Message,
    TextPart,
    TaskState,
)


# --- Custom Task Manager for Flight Search ---
class FlightAgentTaskManager(InMemoryTaskManager):
    """Task manager specific to the ADK Flight Search agent."""

    def __init__(self, agent: Agent, runner: Runner, session_service: InMemorySessionService):
        super().__init__()
        self.agent = agent
        self.runner = runner
        self.session_service = session_service
        logger.info("FlightAgentTaskManager initialized.")

...
...

4.3 Create A2A Server instance using Agent Card

# --- Main Execution Block ---
async def run_server():
    """Initializes services and starts the A2AServer."""
    logger.info("Starting Flight Search A2A Server initialization...")

    session_service = None
    exit_stack = None
    try:
        session_service = InMemorySessionService()
        agent, exit_stack = await get_agent_async()
        runner = Runner(
            app_name='flight_search_a2a_app',
            agent=agent,
            session_service=session_service,
        )

        # Create the specific task manager
        task_manager = FlightAgentTaskManager(
            agent=agent,
            runner=runner,
            session_service=session_service
        )

        # Define Agent Card
        port = int(os.getenv("PORT", "8000"))
        host = os.getenv("HOST", "localhost")
        listen_host = "0.0.0.0"

        agent_card = AgentCard(
            name="Flight Search Agent (A2A)",
            description="Provides flight information based on user queries.",
            url=f"http://{host}:{port}/",
            version="1.0.0",
            defaultInputModes=["text"],
            defaultOutputModes=["text"],
            capabilities={"streaming": False},
            skills=[
                {
                    "id": "search_flights",
                    "name": "Search Flights",
                    "description": "Searches for flights based on origin, destination, and date.",
                    "tags": ["flights", "travel"],
                    "examples": ["Find flights from JFK to LAX tomorrow"]
                }
            ]
        )

        # Create the A2AServer instance
        a2a_server = A2AServer(
            agent_card=agent_card,
            task_manager=task_manager,
            host=listen_host,
            port=port
        )

        # Configure Uvicorn programmatically
        config = uvicorn.Config(
            app=a2a_server.app,  # Pass the Starlette app from A2AServer
            host=listen_host,
            port=port,
            log_level="info"
        )
        server = uvicorn.Server(config)
        ...
        ...

4.4 Let us start the flight search app
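From the project root, the flight agent runs as a module (the same command we use in the final demo); it serves its A2A endpoint and agent card on port 8000 by default:

# Start Flight Search Agent (A2A server, port 8000 by default)
python -m flight_search_app.main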

Step 5: Configure the Hotel Search Agent using LangChain as an MCP client with OpenAI (GPT-4o) as the LLM

├── hotel_search_app/                # Hotel Search Agent (Agent 2)
│   ├── __init__.py
│   ├── a2a_agent_card.json          # Agent capabilities declaration
│   ├── langchain_agent.py           # LangChain agent implementation
│   ├── langchain_server.py          # Server entry point
│   └── static/                      # Static files
│       └── .well-known/             # Agent discovery directory
│           └── agent.json           # Standardized agent discovery file

5.1. LangChain Agent implementation as MCP Client with OpenAI LLM

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_mcp_adapters.client import MultiServerMCPClient

# MCP client configuration
MCP_CONFIG = {
    "hotel_search": {
        "command": "mcp-hotel-search",
        "args": ["--connection_type", "stdio"],
        "transport": "stdio",
        "env": {"SERP_API_KEY": os.getenv("SERP_API_KEY")},
    }
}

class HotelSearchAgent:
    """Hotel search agent using LangChain MCP adapters."""

    def __init__(self):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0)

    def _create_prompt(self):
        """Create a prompt template with our custom system message."""
        system_message = """You are a helpful hotel search assistant.
        """

        return ChatPromptTemplate.from_messages([
            ("system", system_message),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
    ..
    ..
    async def process_query(self, query):
        ...

        # Create MCP client for this query
        async with MultiServerMCPClient(MCP_CONFIG) as client:
            # Get tools from this client instance
            tools = client.get_tools()

            # Create a prompt
            prompt = self._create_prompt()

            # Create an agent with these tools
            agent = create_openai_functions_agent(
                llm=self.llm,
                tools=tools,
                prompt=prompt
            )

            # Create an executor with these tools
            executor = AgentExecutor(
                agent=agent,
                tools=tools,
                verbose=True,
                handle_parsing_errors=True,
            )

5.2 Create A2AServer Instance using common A2A server components and types

        
# Use the underlying agent directly
from hotel_search_app.langchain_agent import get_agent, HotelSearchAgent

# Import common A2A server components and types
from common.server.server import A2AServer
from common.server.task_manager import InMemoryTaskManager
from common.types import (
    AgentCard,
    SendTaskRequest,
    SendTaskResponse,
    Task,
    TaskStatus,
    Message,
    TextPart,
    TaskState
)
..
..

class HotelAgentTaskManager(InMemoryTaskManager):
    """Task manager specific to the Hotel Search agent."""

    def __init__(self, agent: HotelSearchAgent):
        super().__init__()
        self.agent = agent  # The HotelSearchAgent instance
        logger.info("HotelAgentTaskManager initialized.")

    async def on_send_task(self, request: SendTaskRequest) -> SendTaskResponse:
        """Handles the tasks/send request by calling the agent's process_query."""
        task_params = request.params
        task_id = task_params.id
        user_message_text = None

        logger.info(f"HotelAgentTaskManager handling task {task_id}")


# --- Main Execution Block ---
async def run_server():
    """Initializes services and starts the A2AServer for hotels."""
    logger.info("Starting Hotel Search A2A Server initialization...")

    agent_instance: Optional[HotelSearchAgent] = None
    try:
        agent_instance = await get_agent()
        if not agent_instance:
            raise RuntimeError("Failed to initialize HotelSearchAgent")

        # Create the specific task manager
        task_manager = HotelAgentTaskManager(agent=agent_instance)

        # Define Agent Card
        port = int(os.getenv("PORT", "8003"))  # Default port 8003
        host = os.getenv("HOST", "localhost")
        listen_host = "0.0.0.0"

        agent_card = AgentCard(
            name="Hotel Search Agent (A2A)",
            description="Provides hotel information based on location, dates, and guests.",
            url=f"http://{host}:{port}/",
            version="1.0.0",
            defaultInputModes=["text"],
            defaultOutputModes=["text"],
            capabilities={"streaming": False},
            skills=[
                {
                    "id": "search_hotels",
                    "name": "Search Hotels",
                    "description": "Searches for hotels based on location, check-in/out dates, and number of guests.",
                    "tags": ["hotels", "travel", "accommodation"],
                    "examples": ["Find hotels in London from July 1st to July 5th for 2 adults"]
                }
            ]
        )

        # Create the A2AServer instance WITHOUT endpoint parameter
        a2a_server = A2AServer(
            agent_card=agent_card,
            task_manager=task_manager,
            host=listen_host,
            port=port
        )

        config = uvicorn.Config(
            app=a2a_server.app,  # Pass the Starlette app from A2AServer
            host=listen_host,
            port=port,
            log_level="info"
        )

5.3 Let us start the Hotel search app (LangChain), the entry point that invokes the MCP server
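As with the flight agent, the hotel agent runs as a module from the project root and listens on port 8003 by default:

# Start Hotel Search Agent (A2A server, port 8003 by default)
python -m hotel_search_app.langchain_server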

Step 6: Implement the Host Agent as an orchestrator between agents using the A2A protocol

The itinerary planner is the central component of the travel planner above, using the A2A protocol to coordinate the flight and hotel services.

├── itinerary_planner/               # Travel Planner Host Agent (Agent 3)
│   ├── __init__.py
│   ├── a2a/                         # A2A client implementations
│   │   ├── __init__.py
│   │   └── a2a_client.py            # Clients for flight and hotel agents
│   ├── a2a_agent_card.json          # Agent capabilities declaration
│   ├── event_log.py                 # Event logging utilities
│   ├── itinerary_agent.py           # Main planner implementation
│   ├── itinerary_server.py          # FastAPI server
│   ├── run_all.py                   # Script to run all components
│   ├── static/                      # Static files
│   │   └── .well-known/             # Agent discovery directory
│   │       └── agent.json           # Standardized agent discovery file
│   └── streamlit_ui.py              # Main user interface

6.1 A2A protocol implementation using the Flight and Hotel API URLs

  • Contains client code to communicate with other services
  • Implements the Agent-to-Agent protocol
  • Has modules for calling flight and hotel search services

# Base URLs for the A2A compliant agent APIs
FLIGHT_SEARCH_API_URL = os.getenv("FLIGHT_SEARCH_API_URL", "http://localhost:8000")
HOTEL_SEARCH_API_URL = os.getenv("HOTEL_SEARCH_API_URL", "http://localhost:8003")

class A2AClientBase:
    """Base client for communicating with A2A-compliant agents via the root endpoint."""

    async def send_a2a_task(self, user_message: str, task_id: Optional[str] = None, agent_type: str = "generic") -> Dict[str, Any]:
        ...
        ...
        # Construct the JSON-RPC payload with the A2A method and corrected params structure
        payload = {
            "jsonrpc": "2.0",
            "method": "tasks/send",
            "params": {
                "id": task_id,
                "taskId": task_id,
                "message": {
                    "role": "user",
                    "parts": [
                        {"type": "text", "text": user_message}
                    ]
                }
            },
            "id": task_id
        }

6.2 Itinerary Planner Agent Card

A JSON metadata file that describes the agent's capabilities, endpoints, authentication requirements, and skills. It is used for service discovery in the A2A protocol.

{
  "name": "Travel Itinerary Planner",
  "displayName": "Travel Itinerary Planner",
  "description": "An agent that coordinates flight and hotel information to create comprehensive travel itineraries",
  "version": "1.0.0",
  "contact": "code.aicloudlab@gmail.com",
  "endpointUrl": "http://localhost:8005",
  "authentication": {
    "type": "none"
  },
  "capabilities": [
    "streaming"
  ],
  "skills": [
    {
      "name": "createItinerary",
      "description": "Create a comprehensive travel itinerary including flights and accommodations",
      "inputs": [
        {
          "name": "origin",
          "type": "string",
          "description": "Origin city or airport code"
        },
        {
          "name": "destination",
          "type": "string",
          "description": "Destination city or area"
        },
        {
          "name": "departureDate",
          "type": "string",
          "description": "Departure date in YYYY-MM-DD format"
        },
        {
          "name": "returnDate",
          "type": "string",
          "description": "Return date in YYYY-MM-DD format (optional)"
        },
        {
          "name": "travelers",
          "type": "integer",
          "description": "Number of travelers"
        },
        {
          "name": "preferences",
          "type": "object",
          "description": "Additional preferences like budget, hotel amenities, etc."
        }
      ],
      "outputs": [
        {
          "name": "itinerary",
          "type": "object",
          "description": "Complete travel itinerary with flights, hotels, and schedule"
        }
      ]
    }
  ]
}

6.3 Itinerary agent using the Google GenAI SDK

To keep the demo simple, we use the GenAI SDK directly (ADK, CrewAI, or other frameworks could be used instead).

The itinerary agent acts as the central host agent of the system: it orchestrates communication with the flight and hotel search services and uses a language model to parse natural-language requests.

import google.generativeai as genai  # Use direct SDK
..
..
from itinerary_planner.a2a.a2a_client import FlightSearchClient, HotelSearchClient

# Configure the Google Generative AI SDK
genai.configure(api_key=api_key)

class ItineraryPlanner:
    """A planner that coordinates between flight and hotel search agents to create itineraries using the google.generativeai SDK."""

    def __init__(self):
        """Initialize the itinerary planner."""
        logger.info("Initializing Itinerary Planner with google.generativeai SDK")
        self.flight_client = FlightSearchClient()
        self.hotel_client = HotelSearchClient()

        # Create the Gemini model instance using the SDK
        self.model = genai.GenerativeModel(
            model_name="gemini-2.0-flash",
        )
    ..
    ..

6.4 Itinerary Server — a FastAPI server that exposes endpoints for the itinerary planner, handles incoming HTTP requests, and routes them to the itinerary agent

from fastapi import FastAPI, HTTPException, Request

from itinerary_planner.itinerary_agent import ItineraryPlanner

@app.post("/v1/tasks/send")
async def send_task(request: TaskRequest):
    """Handle A2A tasks/send requests."""
    global planner

    if not planner:
        raise HTTPException(status_code=503, detail="Planner not initialized")

    try:
        task_id = request.taskId

        # Extract the message from the user
        user_message = None
        for part in request.message.get("parts", []):
            if "text" in part:
                user_message = part["text"]
                break

        if not user_message:
            raise HTTPException(status_code=400, detail="No text message found in request")

        # Generate an itinerary based on the query
        itinerary = await planner.create_itinerary(user_message)

        # Create the A2A response
        response = {
            "task": {
                "taskId": task_id,
                "state": "completed",
                "messages": [
                    {
                        "role": "user",
                        "parts": [{"text": user_message}]
                    },
                    {
                        "role": "agent",
                        "parts": [{"text": itinerary}]
                    }
                ],
                "artifacts": []
            }
        }

        return response

6.5 Streamlit UI — the user interface built with Streamlit, which provides forms for travel planning and displays results in a user-friendly format

...
...
# API endpoint
API_URL = "http://localhost:8005/v1/tasks/send"

def generate_itinerary(query: str):
    """Send a query to the itinerary planner API."""
    try:
        task_id = "task-" + datetime.now().strftime("%Y%m%d%H%M%S")

        payload = {
            "taskId": task_id,
            "message": {
                "role": "user",
                "parts": [
                    {
                        "text": query
                    }
                ]
            }
        }

        # Log the user query and the request to the event log
        log_user_query(query)
        log_itinerary_request(payload)

        response = requests.post(
            API_URL,
            json=payload,
            headers={"Content-Type": "application/json"}
        )
        response.raise_for_status()

        result = response.json()

        # Extract the agent's response message
        agent_message = None
        for message in result.get("task", {}).get("messages", []):
            if message.get("role") == "agent":
                for part in message.get("parts", []):
                    if "text" in part:
                        agent_message = part["text"]
                        break
            if agent_message:
                break
..
..
...
...

Step 7: Final Demo

Let us start the server agents, each in its own terminal, as shown below:

# Start Flight Search Agent - 1 Port 8000 
python -m flight_search_app.main

# Start Hotel Search Agent - 2 Port 8003
python -m hotel_search_app.langchain_server

# Start Itinerary Host Agent - Port 8005
python -m itinerary_planner.itinerary_server

# Start frontend UI - Port 8501
streamlit run itinerary_planner/streamlit_ui.py

Flight Search logs with the task ID initiated from the Host Agent

Hotel Search logs with tasks initiated from the Host Agent

Itinerary Planner — Host Agent with all requests/responses

Agent event logs

This demo implements the core principles of the Google A2A protocol, enabling agents to communicate in a structured, interoperable way. The following components were fully implemented in the demo above:

  1. Agent Cards — All agents expose .well-known/agent.json files for discovery.
  2. A2A Servers — Each agent runs as an A2A server: flight_search_app, hotel_search_app, and itinerary_planner
  3. A2A Clients — The itinerary_planner contains dedicated A2A clients for flight and hotel agents.
  4. Task Management — Each request/response is modeled as an A2A Task with states like submitted, working, and completed.
  5. Message Structure — Uses the standard JSON-RPC format with role (user/agent) and parts (primarily TextPart).

The components below were not implemented in our demo but can be added for enterprise-grade agents:

  1. Streaming (SSE) — A2A supports Server-Sent Events for long-running tasks, but our demo uses simple request/response, which completes in under 3–5 seconds.
  2. Push Notifications — Web-hook update mechanism is not used yet.
  3. Complex Parts — Only TextPart is used. Support for DataPart, FilePart, etc. can be added for richer payloads.
  4. Advanced Discovery — Basic .well-known/agent.json is implemented, but no advanced authentication, JWKS, or authorization flows.

A2A Protocol — Official Documentation

Conclusion

In this article, we explored how to build a fully functional multi-agent system using reusable A2A components, ADK, LangChain, and MCP in a travel planning scenario. By combining these open-source tools and frameworks, our agents were able to:

  • Discover and invoke each other dynamically using A2A
  • Connect to external APIs in a model-friendly way via MCP
  • Leverage modern frameworks like ADK and LangChain
  • Communicate asynchronously with clear task lifecycles and structured results

The same principles can be extended to more domains like retail, customer service automation, operations workflows, and AI-assisted enterprise tooling.
