Working with LLM Powered Agents — Key Takeaways

Bojan Ciric · Published in The Future of Data · 3 min read · May 1, 2024

Over the past three months, I have built multiple use cases around autonomous LLM-powered agents using the AutoGen framework. The experience has been insightful, and I have distilled several key takeaways that may benefit others looking to explore this field.

Initial Impressions

The idea of using LLM-powered agents is incredibly powerful. These agents can tackle complex challenges by leveraging the advanced capabilities of large language models. Their ability to process and analyze vast amounts of information makes them invaluable for intricate problems that overwhelm traditional methods.

The Dual Role of the Agent Designer/Developer

Embarking on this project required me to wear two hats: that of a manager and a technologist. As a manager, my role involved orchestrating a team of agents, similar to leading a team of people, where I delegated tasks and managed workflows to address complex issues effectively. On the technology side, I needed a deep understanding of how to fine-tune each agent’s role and responsibilities to ensure peak performance and precision.
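On the technology side, fine-tuning each agent's role mostly comes down to writing a precise system message per agent. A minimal plain-Python sketch of that idea (the agent names and prompts are hypothetical, chosen to match the conversational BI example later in this article; this is not the AutoGen API itself):

```python
# Hypothetical per-agent system messages: each agent gets a narrow,
# explicit role so it does only its part of the workflow.
AGENT_SYSTEM_MESSAGES = {
    "schema_agent": (
        "You retrieve database schema information. "
        "Reply only with the schema request; do not write SQL."
    ),
    "query_agent": (
        "You translate user questions into SQL using the provided schema. "
        "Reply only with a single valid SQL query."
    ),
    "user_proxy": (
        "You execute tool calls (schema lookups and SQL queries) "
        "and return the raw results to the requesting agent."
    ),
}

def system_message_for(agent_name: str) -> str:
    """Return the role prompt for a given agent."""
    return AGENT_SYSTEM_MESSAGES[agent_name]
```

In AutoGen these messages would be passed as the `system_message` argument when constructing each agent.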

Developing Tools for Enhanced Efficiency

One of the critical factors in successfully using LLM-powered agents is developing specific tools that support their tasks. These tools ensure that the agents work efficiently and produce consistent results. By tailoring tools to the agents' needs, we can streamline their processes and improve the quality of their outputs, leading to more reliable and effective solutions.
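As a concrete illustration, here is a sketch of the kind of tool an agent might call to execute SQL. This is an assumption about the shape such a tool could take (using SQLite for self-containment), not the article's actual implementation; note the guardrail that keeps the agent's behavior consistent and safe:

```python
import sqlite3

def run_query(db_path: str, sql: str) -> list[tuple]:
    """Hypothetical agent tool: execute a read-only SQL query and return rows.

    Restricting the tool to SELECT statements is one way to make agent
    output consistent and prevent accidental writes to the database.
    """
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()
```

In AutoGen, a function like this would be registered with the executing agent so the LLM can invoke it as a tool call.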

Streamlining Workflows

Optimizing workflows is essential in eliminating unnecessary steps and speeding up the execution process. By integrating agent-specific system messages with carefully designed workflow mechanisms, I was able to fine-tune how tasks were assigned and executed. This approach helped in minimizing delays and maximizing productivity, making the system more dynamic and responsive.
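One way to minimize delays is to make the routing between agents deterministic rather than leaving every hand-off to an LLM decision. The sketch below illustrates that idea with a fixed step sequence (plain Python, not the AutoGen API; the agent and task names are hypothetical):

```python
# Illustrative fixed workflow: each step is routed directly to the agent
# responsible for it, eliminating unnecessary LLM turns spent deciding
# who should speak next.
WORKFLOW = [
    ("schema_agent", "request_schema"),
    ("user_proxy", "fetch_schema"),
    ("query_agent", "write_sql"),
    ("user_proxy", "run_sql"),
    ("query_agent", "summarize"),
]

def next_step(completed: int) -> tuple[str, str]:
    """Return (agent, task) for the next step, given how many are done."""
    if completed >= len(WORKFLOW):
        raise IndexError("workflow finished")
    return WORKFLOW[completed]
```

A fixed routing table like this trades flexibility for speed and predictability, which is often the right trade for well-understood pipelines.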

Choosing the Right Model

Not all tasks require the highest level of cognitive capabilities provided by the most advanced models. Part of my learning was to select the most appropriate model for each task, balancing cost and performance effectively. This strategy not only optimized our resource usage but also enhanced the overall system efficiency by aligning the agents’ capabilities with the task requirements.
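The balancing act can be as simple as a lookup from task to model tier. The model names below are placeholders, not recommendations; the point is that cheap tasks get cheap models:

```python
# Hypothetical cost/capability routing: map each task to the cheapest
# model tier that is adequate for it.
MODEL_BY_TASK = {
    "request_schema": "small-model",   # trivial, templated request
    "write_sql": "large-model",        # needs stronger reasoning
    "summarize": "small-model",        # light rewriting of results
}

def pick_model(task: str, default: str = "small-model") -> str:
    """Choose the model for a task, falling back to the cheap default."""
    return MODEL_BY_TASK.get(task, default)
```

In AutoGen, this choice is typically expressed by giving each agent its own `llm_config` pointing at the appropriate model.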

Example

Below is the execution log of a simple solution that enables a conversational BI experience — users can ask questions in plain English, and agents will handle the rest: they translate the questions into database queries, connect to the database, execute the queries, and return the results to the user.

The request: Show the top 5 sales representatives based on total sales

1. The Schema Agent requests the database schema.
2. The User Proxy Agent executes the request for the database schema.
3. The Query Agent uses the schema information to convert the user request into a valid SQL query.
4. The User Proxy Agent executes the query and returns the results to the Query Agent.
5. The Query Agent sends the final response to the user.

Final Thoughts

These past months have been a deep exploration of what is possible with LLM-powered agents. The experience has shown that these tools have the potential to transform how we approach complex problems across various fields. As we continue to develop and refine these agents and their ecosystems, the future looks promising for further innovation in LLM technology.

What are your thoughts? Comments are welcome and appreciated.

Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions or positions of any entities the author represents.

Bojan Ciric
The Future of Data

Technology Fellow at Deloitte | Data Thinker | Generative AI Hands-on | Converts data into actionable insights