AutoGen
AutoGen is a powerful open-source framework developed by Microsoft for creating multi-agent AI applications in which agents work together to accomplish tasks, often by interacting through a group chat. Its core building block is the conversable agent, which can send, receive, and generate messages and can be customized using AI models, tools, and human input.
A conversable agent can be any of:
- A user proxy agent, which serves as an intermediary for humans, sitting between user inputs and agent responses. This agent is also capable of executing code, which enhances the interaction.
- One or more assistant agents, which, as expected, are AI assistants utilizing Large Language Models (LLMs) without the need for human input or code execution.
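A single assistant paired with a user proxy is the simplest conversable-agent setup. The following is a minimal sketch; the model name and API key are placeholders and are not part of the group-chat walkthrough that follows:
# Minimal two-agent sketch: one assistant, one user proxy (placeholder model and key).
from autogen import AssistantAgent, UserProxyAgent

assistant = AssistantAgent(
    name="assistant",
    llm_config={"config_list": [{"model": "gpt-4o", "api_key": "<your-key>"}]},  # placeholder values
)
user = UserProxyAgent(
    name="user",
    human_input_mode="NEVER",          # no human in the loop
    code_execution_config=False,       # no code execution for this sketch
    max_consecutive_auto_reply=0,      # stop after the assistant's first reply
)
user.initiate_chat(assistant, message="Summarize what a conversable agent is.")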
The example below shows how to create a group chat between a user proxy agent (e.g., a Head of Architecture) and three assistant agents: a Cloud Architect, an open-source (OSS) Architect, and a Lead Architect. The objective is to provide a solution architecture based on a list of business requirements.

The following is a sample of how the conversational flow works:
- Business requirements are provided to the proxy agent.
- The proxy agent initiates a chat between the architects.
- The Cloud Architect will speak first, providing a proposal for each major cloud provider: Azure, AWS, and GCP.
- Next speaker: the OSS Architect will offer a solution outside of the cloud realm using OSS frameworks.
- Next (and final) speaker: the Lead Architect will review all solutions and provide a final proposal.
Multi-agentic conversation is supported only with GPT-4.1, GPT-4o, GPT-4o mini, o3, o3-mini and o1 models. Single-agentic conversation is supported with GPT-4.1, GPT-4o, GPT-4o mini, o3, o3-mini, o1, Jais 30B, Llama 3.3 70B, Llama 3 70B, Mistral 7B, and Mixtral 8x7B models.
Set Up the Environment
To set up the environment, install AutoGen:
pip install -U "autogen-agentchat" "autogen-ext[openai]"
pip install pyautogen
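To confirm that the autogen package used by the snippets below imports correctly, you can print its version (the exact number depends on your environment):
# Quick sanity check that the autogen (pyautogen) package is installed and importable.
import autogen
print(autogen.__version__)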
Create a Multi-Agentic Conversation
Create the prompts starting with the common piece (the task at hand) that contains some simple requirements:
task = '''
**Task**: As an architect, you are required to design a solution for the
following business requirements:
- Data storage for massive amounts of IoT data
- Real-time data analytics and machine learning pipeline
- Scalability
- Cost Optimization
- Region pairs in Europe, for disaster recovery
- Tools for monitoring and observability
- Timeline: 6 months

Break down the problem using a Chain-of-Thought approach. Ensure that your
solution architecture follows best practices.
'''
Prompt for the Cloud Architect:
cloud_prompt = '''
**Role**: You are an expert cloud architect. You need to develop architecture proposals
using either cloud-specific PaaS services, or cloud-agnostic ones.
The final proposal should consider all 3 main cloud providers: Azure, AWS and GCP, and provide
a data architecture for each. At the end, briefly state the advantages of cloud over on-premises
architectures, and summarize your solutions for each cloud provider using a table for clarity.
'''
cloud_prompt += task
For the OSS Architect:
oss_prompt = '''
**Role**: You are an expert on-premises, open-source software architect. You need
to develop architecture proposals without considering cloud solutions.
Only use open-source frameworks that are popular and have lots of active contributors.
At the end, briefly state the advantages of open-source adoption, and summarize your
solutions using a table for clarity.
'''
oss_prompt += task
And the Lead Architect:
lead_prompt = '''
**Role**: You are a lead Architect tasked with managing a conversation between
the cloud and the open-source Architects.
Each Architect will perform a task and respond with their results. You will critically
review those and also ask for, or point to, the disadvantages of their solutions.
You will review each result, and choose the best solution in accordance with the business
requirements and architecture best practices. You will use any number of summary tables to
communicate your decision.
'''
lead_prompt += task
Now, create Compass conversable agents and have them interact in a chat setting.
- Configure the Compass LLM model:
import os

llm_config = {
    "config_list": [{
        "model": "gpt-4o",
        "api_key": os.environ["CUSTOM_LLM_API_KEY"],
        "base_url": "https://api.core42.ai/v1"
    }],
}
- Create agents using the custom LLM configuration:
import autogen
from autogen import UserProxyAgent
from autogen import AssistantAgent

user_proxy = UserProxyAgent(
    name="supervisor",
    system_message="Head of Architecture",
    code_execution_config={
        "use_docker": False,
    },
    human_input_mode="NEVER",
)

cloud_agent = AssistantAgent(
    name="cloud",
    system_message=cloud_prompt,
    llm_config=llm_config
)

oss_agent = AssistantAgent(
    name="oss",
    system_message=oss_prompt,
    llm_config=llm_config
)

lead_agent = AssistantAgent(
    name="lead",
    system_message=lead_prompt,
    llm_config=llm_config
)
To make sure that this order is followed, create a state transition function to be used in the chat for speaker selection:
def state_transition(last_speaker, groupchat):
    messages = groupchat.messages

    if last_speaker is user_proxy:
        return cloud_agent
    elif last_speaker is cloud_agent:
        return oss_agent
    elif last_speaker is oss_agent:
        return lead_agent
    elif last_speaker is lead_agent:
        # lead -> end
        return None
Now, trigger the chat:
groupchat = autogen.GroupChat(
    agents=[user_proxy, cloud_agent, oss_agent, lead_agent],
    messages=[],
    max_round=6,
    speaker_selection_method=state_transition,
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager, message="Provide your best architecture based on these business requirements."
)
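After the run completes, the full transcript is available on the group chat object. The following sketch assumes the pyautogen-style message format, where groupchat.messages is a list of dictionaries with name and content keys:
# Print the conversation transcript: who spoke and what they said.
for msg in groupchat.messages:
    speaker = msg.get("name", "unknown")
    content = msg.get("content", "")
    print(f"--- {speaker} ---")
    print(content)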
Response