Agent Orchestration: The basics with Percolate
In this article I want to discuss some general ideas about agent orchestration (see also the notes at the end) as they relate to Percolate.
Percolate is a data-driven agentic orchestration framework that emphasizes agentic memory. You can run it in Docker or through a managed cloud instance. This paves the way for a serverless, cloud-native agent orchestrator where you can focus on data and persistence rather than code.
The main theme in this article is the use of "data over code" for dynamically building agentic workflows and memory, using tools and agents stored in a database. This article just lays out the basic Agent setup; we will dive into other patterns in later articles.
Starting with the basics, as an example “agent”, there is a Task object built into Percolate but this could be any object subclassing Pydantic BaseModel. The Task looks like below — note the external function points to the percolate-api in this case but could be any MCP Server or OpenAPI-based API registered in the Percolate database.
class Task(Project):
    """Tasks are sub-projects. A project can describe a larger objective and be broken down into tasks.
    If you need to do research or you are asked to search or create a research iteration/plan for world knowledge searches you MUST ask the _ResearchIteration agent_ to help perform the research on your behalf (you can request this agent with help i.e. ask for an agent by name).
    You should be clear when relaying the user's request. If the user asks to construct a plan, you should ask the agent to construct a plan. If the user asks to search, you should ask the research agent to execute a web search.
    If the user asks to look for existing plans, you should ask the research agent to search research plans.
    """
    id: typing.Optional[uuid.UUID | str] = Field(None, description='id generated for the name and project - these must be unique or they are overwritten')
    project_name: typing.Optional[str] = Field(None, description="The related project name if relevant")
    # other fields such as name and description are inherited from Project

    @classmethod
    def get_model_functions(cls):
        """fetch task external functions"""
        return {'post_tasks_': 'Used to save tasks by posting the task object. It is good practice to first search for a task of a similar name before saving in case of duplicates'}

We can register types in Percolate, which allows them to be used directly from the database and not just in Python code.
import percolate as p8
p8.repository(Task).register()

This creates tables, embedding tables, and other supporting structures in Percolate. You can register any Pydantic object in this way. Creating a structured type in the database is useful for agents to save their work, e.g. in this case, creating tasks for the user.
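To make the "agents are tables" idea concrete, here is a toy sketch of what registration does conceptually: mapping a Pydantic model's fields to a table schema. This is not Percolate's actual implementation (the real register() also builds embedding tables and more), just an illustration of the pattern.

```python
import typing
from pydantic import BaseModel, Field

class MiniTask(BaseModel):
    """A stripped-down stand-in for the Task agent above."""
    id: typing.Optional[str] = Field(None, description="unique id")
    name: typing.Optional[str] = Field(None, description="task name")
    description: typing.Optional[str] = Field(None, description="task details")

def sketch_ddl(model: type[BaseModel], table: str) -> str:
    """Map each model field to a TEXT column - a toy stand-in for register()."""
    cols = ", ".join(f"{name} TEXT" for name in model.model_fields)
    return f"CREATE TABLE IF NOT EXISTS {table} ({cols});"

print(sketch_ddl(MiniTask, "p8_task"))
# -> CREATE TABLE IF NOT EXISTS p8_task (id TEXT, name TEXT, description TEXT);
```

The real implementation maps Python types to proper column types, but the key point is that the schema is derived entirely from the declarative model.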
Using the agent is easy:
task_agent = p8.Agent(Task)
task_agent("Create a task to [DETAIL HERE]",
#language_model='claude-3-7-sonnet-20250219',
# limit=10
)

When we construct the Task agent, the agent runner loads system functions like help and activate_function_by_name, as well as the external reference post_tasks_, which is an API POST to the percolate-api under the hood. You can see these functions loaded in the debug logs when the agent is constructed with p8.Agent(Task).
When we run the above request to create a task, the agent will construct and save the task in the database via this post method and report back. It constructs the task using the structured fields defined by the Pydantic object itself.
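Under the hood, the saved task is just the validated Pydantic object serialized as the body of the post_tasks_ call. A minimal sketch, assuming the endpoint accepts the model's JSON dump (the field values here are made up for illustration):

```python
import json
import typing
from pydantic import BaseModel, Field

class MiniTask(BaseModel):
    """A stripped-down stand-in for the Task agent's structured output."""
    name: typing.Optional[str] = Field(None, description="task name")
    description: typing.Optional[str] = Field(None, description="task details")
    project_name: typing.Optional[str] = Field(None, description="related project")

# The language model fills the structured fields from the user's request...
task = MiniTask(name="write-up", description="Draft the orchestration article")

# ...and the tool call body is simply the validated model dump,
# which a runner could POST to the percolate-api tasks endpoint
payload = json.dumps(task.model_dump(), default=str)
print(payload)
```

Because the schema is the Pydantic model, validation happens before anything touches the database.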
In Percolate, agents/entities are always registered as tables.
This is a simple illustration of using system functions like “help” or functions linked explicitly to the Task entity like “post-task”. What is more interesting is to discover functions, which is the first step to building dynamic agentic workflows/graphs.
You will note above that a help function and activate_function_by_name function are added by default on all agents. This applies when running agents in Python or in the Postgres database. So we can ask the Task agent to do anything supported by any functions in the database.
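To illustrate the pattern (this is a conceptual sketch of the idea, not Percolate's internals), a function registry with a help-style search and activation by name might look like this:

```python
from typing import Callable

# A toy registry standing in for the functions stored in the database:
# name -> (searchable description, callable)
REGISTRY: dict[str, tuple[str, Callable[..., object]]] = {
    "post_tasks_": ("Save a task object", lambda task: f"saved {task}"),
    "get_pets_by_status": ("Find pets filtered by status", lambda status: [f"pet sold as {status}"]),
}

def help_search(query: str) -> list[str]:
    """Naive 'help': return function names whose description mentions the query."""
    q = query.lower()
    return [name for name, (desc, _) in REGISTRY.items() if q in desc.lower()]

def activate_function_by_name(name: str) -> Callable[..., object]:
    """Load a discovered function so the agent can call it on the next turn."""
    return REGISTRY[name][1]

# The agent knows nothing about pets, asks for help, then activates and calls
matches = help_search("pets")
fn = activate_function_by_name(matches[0])
print(fn("sold"))
# -> ['pet sold as sold']
```

In Percolate the search side is semantic rather than a substring match, but the loop is the same: discover, activate by name, call.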
To illustrate, here I used the open swagger “pets api” that is registered in the getting-started examples for Percolate. The purpose here is to emphasize that any agent (such as Task) can do anything supported by any function in the database even though constraining it to use “its own functions” is normally what you would want to do.
task_agent("Can you find pets by sold status")

Some of the output is shown below.
Ignoring the blue debug output, the agent asks for help because it knows nothing about pets, and loads a function which it then calls. This is a REST call to the standard test swagger API, which is registered in the Percolate database and therefore searchable and loadable.
While the Python orchestrator can be used to run agents as above, Percolate is designed to allow agents to be evaluated directly from the database. This is because we are building a framework for agentic memory that works independently of code. Below for example, because we have already registered the Task agent above, we can run the same request directly in the database.
-- second parameter is an iteration limit
-- provide enough to help, activate, call and summarize for example
select * from run('Can you find pets by sold status',
5,
'claude-3-7-sonnet-20250219',
'p8.Task')

This will use the SQL functions to search for and discover the tool, call it, and use the agent to summarize the result. I provide a getting-started video here if you want to learn more.
While most agent frameworks focus on building agents and graphs in Python, at scale we believe treating agents and graphs as “data” will scale better and support no-code agent use cases. Agents and tools can be searched and built into dynamic graphs.
Note we can still add constraints by attaching specific tools to agents and not lean on the dynamic function loading. But at scale, it may be desirable (at times) to simply register a whole lot of functions and a whole lot of agents in the database and build orchestrators that can self-organize agents and tools into dynamic workflows.
That's what Percolate does. If you are interested in learning more or getting involved, please get in touch!
Notes
We emphasize “data over code” in Percolate. “Agents” (i.e. the object that combines prompt + structure + tool-references) are declarative and can reference external tools. The orchestrator will execute prompts and call external tools.
- “Agents” are declarative in Percolate. The system prompt and structured output are just schema dumped in a database, and the agent can use external functions. In Percolate, external functions use either the MCP or OpenAPI protocol. Agents are “no code” in the sense that we expect tools to already exist. In the managed cloud instance it is also possible to push a Docker container for a custom API where no external MCP Server or OpenAPI API already exists.
- Orchestrators run workflows that are often expressed as a graph. A graph is data. But we also focus more on discovery of a dynamic plan rather than hard-wired graphs. In the future you would expect an AI to be better than us at deciding how to build a graph. For now we simply register all the functions in the database and assume that they are searchable. We allow agents to activate functions by name, making agents and workflows dynamic.
- Because we are focusing on persistence and state, every agent is by default also a table. The structured output can be used to build CRUD use cases. Above we saw that a Task agent can save its own state to the database and in Percolate you get this persistence for free.
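As a toy illustration of "a graph is data" (again a conceptual sketch, not Percolate's actual storage format): a workflow can be stored as plain records and executed by a generic runner that resolves function names at run time.

```python
# A linear workflow stored as data: each step names a registered function
WORKFLOW = [
    {"step": "search", "fn": "web_search"},
    {"step": "summarize", "fn": "summarize_results"},
]

# Hypothetical function registry the runner resolves names against
FUNCTIONS = {
    "web_search": lambda text: f"results for: {text}",
    "summarize_results": lambda text: f"summary of ({text})",
}

def run_workflow(workflow: list[dict], user_input: str) -> str:
    """Resolve each step's function by name and pipe outputs forward."""
    state = user_input
    for step in workflow:
        state = FUNCTIONS[step["fn"]](state)
    return state

print(run_workflow(WORKFLOW, "pets by sold status"))
# -> summary of (results for: pets by sold status)
```

Because the workflow is just data, it can be stored in a table, searched, and assembled dynamically, which is the point of treating graphs as data rather than code.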

