Playing with AutoGen: Exploring the Conversational Frontier
In the dynamic field of technology, conversational AI stands as a captivating frontier, emulating natural human dialogue. Python, with its robust library ecosystem, plays a crucial role in this evolution. AutoGen, a powerful tool in this space, simplifies the complex task of building AI-driven conversational agents. This article explores the practical use of the AutoGen library to create an interactive AI agent using NextAI's API keys and models. Whether you're an AI enthusiast, a developer integrating conversational AI, or simply curious about digital conversations, this exploration provides insightful perspectives and practical guidance.
What is AutoGen?
AutoGen is a groundbreaking project by Microsoft that enables users to create as many autonomous ChatGPT-like agents as they desire. These agents can work in tandem to achieve specific tasks. The framework is incredibly flexible, allowing users to define various agents, assign them roles, and orchestrate their collaborative efforts.
Setting up NextAI API Key and Model
- Login to NextAI: Visit the NextAI website and log in to your account using your credentials.
- Access API Section: On the homepage, locate and click on the “Use API” button to access the API section.
- Choose Model: You’ll find various models available in the Model settings section. Choose the specific model you intend to work with based on your requirements.
- Generate API Key: Look for an option labeled “Generate API Key” specific to the chosen model. Click on the “Generate API Key” button.
- Create API Key: Create a new API Key by giving it a name and clicking the “Create” button.
- Copy API Key: Once generated, the API key will be displayed. Copy this API key as it will be necessary for authentication when using the NextAI model in your applications or scripts.
If you want to access premium models, you can follow this link to set up the API Key and Model.
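As a minimal sketch of the last step above, the copied key can be kept out of your source code by reading it from an environment variable. The variable name `NEXT_API_KEY` is an assumption for illustration, not something NextAI mandates:

```python
import os

def load_api_key(var: str = "NEXT_API_KEY") -> str:
    """Read the NextAI API key from the environment, failing loudly if unset.

    The variable name is an assumed convention, not a NextAI requirement.
    """
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable to your NextAI API key")
    return key
```

This keeps the secret out of version control and lets the same script run in different environments.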
Step 1: Importing Necessary Libraries
```
pip install pyautogen
```

```python
from autogen import OpenAIWrapper
from autogen import AssistantAgent, UserProxyAgent, ConversableAgent
import autogen
```
These lines import the `autogen` library after ensuring its installation via pip. `OpenAIWrapper`, `AssistantAgent`, and `UserProxyAgent` are the components of the `autogen` library being used.
Step 2: Initializing the NextAI Model
```python
config_list = [
    {
        "model": "zephyr-7b-beta",  # NextAI model name
        "base_url": "Your_NEXT_API_Endpoint",
        "api_key": "Your_NEXT_API_Key",
    }
]
```
This section configures a model from NextAI, specifying the model name (`zephyr-7b-beta`), the API endpoint (`base_url`), and the API key. Placeholder values are shown here for security; replace them with your actual endpoint and key.
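The same `config_list` can also be assembled programmatically so that the secrets never appear in the script. This is a sketch under the assumption that the endpoint and key are exported as `NEXT_API_ENDPOINT` and `NEXT_API_KEY` (both names are illustrative):

```python
import os

def build_config_list(model: str = "zephyr-7b-beta") -> list:
    """Build an autogen-style config_list, pulling secrets from the environment.

    NEXT_API_ENDPOINT and NEXT_API_KEY are assumed variable names; the
    placeholder fallbacks match the hard-coded values shown above.
    """
    return [
        {
            "model": model,
            "base_url": os.environ.get("NEXT_API_ENDPOINT", "Your_NEXT_API_Endpoint"),
            "api_key": os.environ.get("NEXT_API_KEY", "Your_NEXT_API_Key"),
        }
    ]

config_list = build_config_list()
```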
For a simple assistant and user proxy agent:
Step 1: Creating the Conversational Agent
```python
assistant = ConversableAgent("agent", llm_config={"config_list": config_list})
```
Here, an instance of `ConversableAgent` is created, representing the AI assistant. It is configured with the previously defined model settings.
Step 2: Setting Up a User Proxy Agent
```python
user_proxy = UserProxyAgent("user", code_execution_config=False)
```
This line creates a `UserProxyAgent`, which acts as a proxy for the user in the conversation. It is configured not to execute code (`code_execution_config=False`).
Step 3: Initiating the Conversation
```python
assistant.initiate_chat(user_proxy, message="tell me top 10 jokes?")
```
The conversation starts with the assistant sending an initial message to the user proxy agent. Here, it’s a request for the top 10 jokes.
For writing a Python script and executing it automatically:
Step 1: Creating the AI Assistant Agent
```python
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 42,
        "config_list": config_list,
        "temperature": 0.2,
    },
)
```
In this segment, an `AssistantAgent` is initialized. This agent represents the AI side of the conversation. The configuration includes:
- `cache_seed`: Ensures reproducibility of responses.
- `config_list`: A list of configurations for the OpenAI API, including the model and other settings.
- `temperature`: Controls the randomness of the AI's responses. A lower temperature (0.2) means more predictable responses.
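To see why a lower temperature makes responses more predictable, here is a small illustration (plain Python, not AutoGen code) of how temperature scales token probabilities before sampling:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

At temperature 0.2 the most likely token receives far more probability mass than at temperature 1.0, so sampling becomes nearly deterministic.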
Step 2: Setting Up the User Proxy Agent
```python
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,
    },
)
```
This code creates a `UserProxyAgent`. Key configurations are:
- `human_input_mode`: Set to "NEVER" to indicate that this agent will not receive human input.
- `max_consecutive_auto_reply`: Limits the number of consecutive replies by the agent.
- `is_termination_msg`: A lambda function to determine when the conversation should end, here based on a specific termination phrase.
- `code_execution_config`: Settings for code execution, including a working directory and whether to use Docker.
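The termination check is just a predicate over the last message, so it can be exercised in isolation. This mirrors the lambda passed to `is_termination_msg` above:

```python
def is_termination(msg: dict) -> bool:
    """End the chat when the message content, minus trailing whitespace, ends with TERMINATE."""
    return msg.get("content", "").rstrip().endswith("TERMINATE")
```

Because `msg.get("content", "")` defaults to an empty string, a message without a `content` key never terminates the chat.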
Step 3: Initiating the Conversation
```python
user_proxy.initiate_chat(
    assistant,
    message="""write a bubble sort code.""",
)
```
The user proxy initiates the conversation with a request for bubble sort code, demonstrating how the AI can handle code-related queries.
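For reference, a typical bubble sort implementation along the lines of what the assistant would be asked to produce (this is illustrative, not the model's actual output):

```python
def bubble_sort(items: list) -> list:
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):  # the last i elements are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a full pass with no swaps means the list is sorted
            break
    return items
```

With `use_docker=False` in the proxy's `code_execution_config`, a script like this would be executed directly in the local `coding` working directory.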
For the system to generate a plot using live data from the web:
Step 1: Setting Up the AI Assistant Agent
```python
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
```
An instance of `AssistantAgent` is created and named "assistant". The configuration for this agent is provided through the `llm_config` parameter, which includes `config_list`. This list contains the settings for the model used by the agent, such as the model name, API endpoint, and API key.
Step 2: Creating the User Proxy Agent
```python
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})
```
A `UserProxyAgent` is instantiated. This agent acts as a stand-in for a human user in the conversation. The `code_execution_config` parameter specifies configurations related to code execution, with `"work_dir"` set to `"coding"`, indicating the working directory for code-related tasks.
Step 3: Initiating the Conversation
```python
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
```
The conversation starts with the user proxy sending a message to the assistant. The message is a command to plot a chart showing the year-to-date (YTD) price change of NVIDIA (NVDA) and Tesla (TESLA) stocks.
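The core calculation behind such a chart is straightforward. A sketch of the year-to-date percent change is below; the assistant's generated script would additionally fetch the real closing prices and render the plot, which is omitted here:

```python
def ytd_percent_change(closing_prices: list) -> float:
    """Percent change from the first closing price of the year to the latest one."""
    if len(closing_prices) < 2:
        raise ValueError("Need at least two closing prices")
    first, last = closing_prices[0], closing_prices[-1]
    return (last - first) / first * 100.0
```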
You can access and run the following code on this Google Colab Notebook.
Final Notes:
As we wrap up this exploration of building a conversational AI agent with the autogen library, the potential in this domain is vast and compelling. The seamless integration of NextAI’s models into the autogen framework reflects the accessibility of technology for developers and innovators. The future of conversational AI is promising, limited only by our imagination and exploration. Whether you’re developing AI applications, enhancing user experiences, or satisfying your curiosity, the autogen library is a robust ally. Embark on your journey, experiment with these tools, and contribute to shaping the future of conversational AI.