How To Use AutoGen With ANY Open-Source LLM for FREE

ai-for-devs.com
4 min read · Feb 2, 2024


Autonomous agents are transforming tasks typically handled by humans, such as responding to chats and emails or conducting research.

However, running these agents daily against paid APIs can become exorbitantly expensive for developers. This guide shows how to run AutoGen agents for free while avoiding common pitfalls and improving performance. Here’s what we’ll build:

An overview of our AutoGen group chat setup where the Assistant Agent ‘Bob’ is programmed to tell jokes while ‘Alice’ critiques them, all orchestrated by a central ‘Manager’ and mediated through the ‘UserProxyAgent’.

Setting Up the Environment

To begin, create a virtual environment to isolate dependencies. Activate it, then install AutoGen with pip install pyautogen. This gives you a clean, controlled setup for running your agents.
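On macOS or Linux, the setup might look like this (on Windows, activate with .venv\Scripts\activate instead):

python -m venv .venv
source .venv/bin/activate
pip install pyautogen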

Configuring AutoGen and Agents

Configuration is pivotal. You can start with a hosted model like GPT-3.5 until everything works, then switch to a free alternative: an open-source model such as Llama 2. Here’s how to configure both:

import autogen

# Hosted model (assumes OPENAI_API_KEY is set in your environment).
llm_config = {"config_list": [{
    "model": "gpt-3.5-turbo"
}]}

# Local model served by LM Studio. The client library still expects an
# api_key, but LM Studio ignores its value, so any placeholder works.
llm_config_local = {"config_list": [{
    "model": "llama2",
    "base_url": "http://localhost:1234/v1",
    "api_key": "lm-studio"
}]}

Creating Assistant Agents

Define your assistant agents, Bob and Alice, with distinct roles within the chat:

bob = autogen.AssistantAgent(
    name="Bob",
    system_message="""
    You love telling jokes. After Alice's feedback, improve
    the joke. Say 'TERMINATE' when you have improved the joke.
    """,
    llm_config=llm_config_local
)

alice = autogen.AssistantAgent(
    name="Alice",
    system_message="Criticise the joke.",
    llm_config=llm_config_local
)

Bob tells jokes and Alice critiques them, creating a feedback loop that drives the conversation.

Setting Up the User Proxy

The user proxy serves as an intermediary between the developer and the AutoGen agents, ensuring a seamless interaction. It is programmed to automatically terminate the chat whenever the word “TERMINATE” is detected in any message.

To facilitate a fully autonomous operation, the human input mode is configured to “NEVER,” allowing the system to function independently without manual intervention.

Since AutoGen version 0.2.8, code execution runs inside Docker by default. To run without Docker, set "use_docker": False inside code_execution_config.

def termination_message(msg):
    # End the chat as soon as any message contains the keyword.
    return "TERMINATE" in str(msg.get("content", ""))

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"use_docker": False},  # run code locally, not in Docker
    is_termination_msg=termination_message,
    human_input_mode="NEVER"  # fully autonomous: never prompt for human input
)

Managing the Group Chat

Now we can create a group chat and assign the agents to it. The manager orchestrates the conversation, deciding which agent speaks next.

groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    messages=[]
)

# The manager also calls an LLM to pick the next speaker; pass
# llm_config_local here instead if you want to stay entirely free.
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    code_execution_config={"use_docker": False},
    llm_config=llm_config,
    is_termination_msg=termination_message
)

Integrating Local Language Models with LM Studio

To run local LLMs, we can install LM Studio, a tool that simplifies downloading, managing, and serving these models. Here’s how to set it up:

  1. Begin by downloading LM Studio compatible with your operating system — be it Mac, Windows, or Linux — from the official website.
  2. Follow the straightforward installation guide, which typically involves running the installer and following the prompts until completion. This process should be quick and hassle-free.
  3. Once LM Studio is set up, open the application to access a user-friendly interface for browsing and downloading language models. Select and download a model such as Llama 2, which is a popular choice for local deployment.
  4. After downloading your chosen model, start LM Studio’s local server. It exposes API endpoints that mirror the OpenAI chat completions API, making it easy to plug the local model into AutoGen, as the sanity check below shows.
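Before wiring the model into AutoGen, it’s worth sanity-checking the endpoint directly. Here is a minimal sketch, assuming LM Studio is serving on its default port with a Llama 2 model loaded (the model name and placeholder key are assumptions; adjust them to your setup):

from openai import OpenAI

# Point the standard OpenAI client at the local LM Studio server.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="llama2",  # must match the model loaded in LM Studio
    messages=[{"role": "user", "content": "Tell a joke"}],
)
print(response.choices[0].message.content)

If this prints a joke, AutoGen will be able to reach the model through the same base_url.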

Starting the Chat

Initiate the chat with a simple command, focusing on a specific task, like telling a joke:

user_proxy.initiate_chat(
    manager,
    message="Tell a joke"
)

Save the script as app.py, then run it with:

python app.py

Troubleshooting and Optimization

If you encounter errors, such as AutoGen failing to pick the next agent, switch to round-robin speaker selection so the agents simply take turns in a fixed order. Additionally, local models often need more explicit prompts to behave correctly, so tighten the system messages accordingly.

groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    speaker_selection_method="round_robin",  # agents take turns in list order
    messages=[]
)
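On the prompt side, a more explicit system message for Bob might look like this (an illustrative sketch only; tune the wording to whatever model you are running):

bob = autogen.AssistantAgent(
    name="Bob",
    system_message="""
    You are a comedian. Tell exactly one short joke.
    When Alice gives feedback, reply with a single improved version
    of the joke, then write 'TERMINATE' on its own line.
    """,
    llm_config=llm_config_local
)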

Conclusion

By following these steps, developers can set up and run AutoGen agents for free, leveraging both cloud-based and local language models. This guide aims to reduce the financial barriers to using AutoGen, making it accessible for developers to explore and create innovative applications.

Complete Code:

import autogen

# Hosted model config (assumes OPENAI_API_KEY is set in your environment).
llm_config = {"config_list": [{
    "model": "gpt-3.5-turbo"
}]}

# Local model served by LM Studio; the api_key is a placeholder that
# the client library requires but LM Studio ignores.
llm_config_local = {"config_list": [{
    "model": "llama2",
    "base_url": "http://localhost:1234/v1",
    "api_key": "lm-studio"
}]}

bob = autogen.AssistantAgent(
    name="Bob",
    system_message="""
    You love telling jokes. After Alice's feedback, improve the joke.
    Say 'TERMINATE' when you have improved the joke.
    """,
    llm_config=llm_config_local
)

alice = autogen.AssistantAgent(
    name="Alice",
    system_message="Criticise the joke.",
    llm_config=llm_config_local
)

def termination_message(msg):
    return "TERMINATE" in str(msg.get("content", ""))

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"use_docker": False},
    is_termination_msg=termination_message,
    human_input_mode="NEVER"
)

groupchat = autogen.GroupChat(
    agents=[bob, alice, user_proxy],
    speaker_selection_method="round_robin",
    messages=[]
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    code_execution_config={"use_docker": False},
    llm_config=llm_config,
    is_termination_msg=termination_message
)

user_proxy.initiate_chat(
    manager,
    message="Tell a joke"
)


ai-for-devs.com

I’m Sebastian, a developer with a rich background in informatics and business.