AutoGen
AutoGen is a powerful open-source framework developed by Microsoft for creating multi-agent AI applications that work together to accomplish tasks, often by interacting through a group chat. A conversable agent can send, receive, and generate messages and can be customized using AI models, tools, and human input.
A conversable agent can be any of:
- A user proxy agent, which serves as an intermediary for humans, relaying user input and agent responses. This agent is also capable of executing code, enhancing the interaction process.
- One or more assistant agents, which are AI assistants utilizing Large Language Models (LLMs) without the need for human input or code execution.
The example below shows how to create a group chat between a user proxy agent (e.g., a Head of Architecture) and three assistant agents: a Cloud Architect, an open-source (OSS) Architect, and a Lead Architect. The objective is to produce a solution architecture based on a list of business requirements.

The following is a sample of how the conversational flow works:
- Business requirements are provided to the proxy agent.
- The proxy agent initiates a chat between the architects.
- The Cloud Architect speaks first, providing a proposal for each major cloud provider: Azure, AWS, and GCP.
- The OSS Architect speaks next, offering a solution outside of the cloud realm using OSS frameworks.
- The Lead Architect speaks last, reviewing all solutions and providing a final proposal.
Multi-agentic conversation is supported only with GPT-4.1, GPT-4o, GPT-4o mini, o3, o3-mini and o1 models. Single-agentic conversation is supported with GPT-4.1, GPT-4o, GPT-4o mini, o3, o3-mini, o1, Jais 30B, Llama 3.3 70B, Llama 3 70B, Mistral 7B, and Mixtral 8x7B models.
Set Up the Environment
To set up the environment, install AutoGen:
pip install -U "autogen-agentchat" "autogen-ext[openai]"
The example below uses the classic autogen API, which ships in the pyautogen package:
pip install pyautogen
Create a Multi-Agentic Conversation
Create the prompts, starting with the common piece (the task at hand), which contains some simple requirements:
task = '''
**Task**: As an architect, you are required to design a solution for the
following business requirements:
- Data storage for massive amounts of IoT data
- Real-time data analytics and machine learning pipeline
- Scalability
- Cost Optimization
- Region pairs in Europe, for disaster recovery
- Tools for monitoring and observability
- Timeline: 6 months
Break down the problem using a Chain-of-Thought approach. Ensure that your
solution architecture follows best practices.
'''
Prompt for the Cloud Architect
cloud_prompt = '''
**Role**: You are an expert cloud architect. You need to develop architecture proposals
using either cloud-specific PaaS services, or cloud-agnostic ones.
The final proposal should consider all 3 main cloud providers: Azure, AWS and GCP, and provide
a data architecture for each. At the end, briefly state the advantages of cloud over on-premises
architectures, and summarize your solutions for each cloud provider using a table for clarity.
'''
cloud_prompt += task
For the OSS Architect
oss_prompt = '''
**Role**: You are an expert on-premises, open-source software architect. You need
to develop architecture proposals without considering cloud solutions.
Only use open-source frameworks that are popular and have lots of active contributors.
At the end, briefly state the advantages of open-source adoption, and summarize your
solutions using a table for clarity.
'''
oss_prompt += task
And the Lead Architect
lead_prompt = '''
**Role**: You are a lead Architect tasked with managing a conversation between
the cloud and the open-source Architects.
Each Architect will perform a task and respond with their results. You will critically
review those and also ask for, or point to, the disadvantages of their solutions.
You will review each result, and choose the best solution in accordance with the business
requirements and architecture best practices. You will use any number of summary tables to
communicate your decision.
'''
lead_prompt += task
Now, create the Compass conversable agents and have them interact in a chat setting.
- Configure the Compass LLM model:
import os

llm_config = {
    "config_list": [{
        "model": "gpt-4o",
        "api_key": os.environ["CUSTOM_LLM_API_KEY"],
        "base_url": "https://api.core42.ai/v1",
    }],
}
- Create agents using the custom LLM configuration:
import autogen
from autogen import UserProxyAgent
from autogen import AssistantAgent
user_proxy = UserProxyAgent(
    name="supervisor",
    system_message="Head of Architecture",
    code_execution_config={
        "use_docker": False,
    },
    human_input_mode="NEVER",
)
cloud_agent = AssistantAgent(
    name="cloud",
    system_message=cloud_prompt,
    llm_config=llm_config,
)
oss_agent = AssistantAgent(
    name="oss",
    system_message=oss_prompt,
    llm_config=llm_config,
)
lead_agent = AssistantAgent(
    name="lead",
    system_message=lead_prompt,
    llm_config=llm_config,
)
To make sure that this order is followed, create a state transition function to be used in the chat for speaker selection:
def state_transition(last_speaker, groupchat):
    messages = groupchat.messages
    if last_speaker is user_proxy:
        return cloud_agent
    elif last_speaker is cloud_agent:
        return oss_agent
    elif last_speaker is oss_agent:
        return lead_agent
    elif last_speaker is lead_agent:
        # lead -> end
        return None
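The deterministic hand-off implemented by the state transition function can be checked without any LLM calls. The sketch below reuses the same selection logic, with plain placeholder objects standing in for the real agents (an assumption for illustration only):

```python
# Deterministic speaker selection: supervisor -> cloud -> oss -> lead -> end.
# Plain placeholder objects stand in for the real agents so the hand-off
# logic can be exercised without any LLM calls.
class FakeAgent:
    def __init__(self, name):
        self.name = name

user_proxy = FakeAgent("supervisor")
cloud_agent = FakeAgent("cloud")
oss_agent = FakeAgent("oss")
lead_agent = FakeAgent("lead")

def state_transition(last_speaker, groupchat):
    if last_speaker is user_proxy:
        return cloud_agent
    elif last_speaker is cloud_agent:
        return oss_agent
    elif last_speaker is oss_agent:
        return lead_agent
    elif last_speaker is lead_agent:
        return None  # lead spoke last: end the chat

# Walk the full chain once and record the speaking order.
order = []
speaker = user_proxy
while speaker is not None:
    order.append(speaker.name)
    speaker = state_transition(speaker, groupchat=None)

print(order)  # ['supervisor', 'cloud', 'oss', 'lead']
```

Because the function returns None after the Lead Architect, the chat ends cleanly even though max_round allows extra turns.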
Now, trigger the chat:
groupchat = autogen.GroupChat(
    agents=[user_proxy, cloud_agent, oss_agent, lead_agent],
    messages=[],
    max_round=6,
    speaker_selection_method=state_transition,
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
user_proxy.initiate_chat(
manager, message="Provide your best architecture based on these business requirements."
)
Response: